Feynman’s Lecture on Superconductivity

The ultimate challenge for students of Feynman’s iconic Lectures series is, of course, to understand his final one: A Seminar on Superconductivity. As he notes in his introduction to this formidably dense piece, the text does not present the detail of each and every step in the development and, therefore, we’re not supposed to immediately understand everything. As Feynman puts it: we should just believe (more or less) that things would come out if we would be able to go through each and every step. Well… Let’s see. It took me one long maddening day to figure out the first formula:

f1

It says that the amplitude for a particle to go from a to b in a vector potential (think of a classical magnetic field) is the amplitude for the same particle to go from a to b when there is no field (A = 0) multiplied by the exponential of the line integral of the vector potential times the electric charge divided by Planck’s constant.

Of course, after a couple of hours, I recognized the formula for the magnetic effect on an amplitude, which I described in my previous post: a magnetic field will shift the phase of the amplitude of a particle by an amount equal to:

integral

Hence, if we write the A = 0 amplitude as 〈b|a〉 = C·e^(iθ), then the amplitude in the field A will, naturally, be equal to C·e^(i(θ+φ)) = C·e^(iθ)·e^(iφ), i.e. the field-free amplitude multiplied by e^(iφ), and so that explains it. 🙂 Alright… Next.
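To make this less abstract, here is a minimal numerical sketch of that rule – my own, with made-up values for the path and for the vector potential, so the numbers mean nothing physically:

```python
# A sketch (not Feynman's) of the rule above: the amplitude in a vector
# potential A equals the free amplitude times exp(i·(q/ħ)·∫A·ds).
import numpy as np

q = 1.602e-19      # electron charge (C)
hbar = 1.055e-34   # reduced Planck constant (J·s)

# The A = 0 amplitude: some complex number C·e^(iθ)
amp_free = 0.8 * np.exp(1j * 0.3)

# A straight path from a to b (1 micron along x) and a constant,
# purely illustrative vector potential
path = np.linspace([0.0, 0.0, 0.0], [1e-6, 0.0, 0.0], 1000)
A = np.array([1e-7, 0.0, 0.0])

# The line integral ∫A·ds, as a discrete sum over the path segments
ds = np.diff(path, axis=0)
phi = (q / hbar) * np.sum(ds @ A)

amp_in_field = amp_free * np.exp(1j * phi)
print(f"extra phase φ = {phi:.2f} rad")
print(f"magnitude unchanged: {abs(amp_free):.3f} vs {abs(amp_in_field):.3f}")
```

Note that the field only shifts the phase: the magnitude of the amplitude – and, hence, the probability – stays the same for a single trajectory. It’s the interference between two trajectories that makes the phase shift observable.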

The Schrödinger equation in an electromagnetic field

Feynman then jots down Schrödinger’s equation for the same particle (with charge q) moving in an electromagnetic field that is characterized not only by a vector potential but also by the (scalar) potential Φ:

schrodinger

Now where does that come from? We know the standard formula in an electric field, right? It’s the formula we used to find the energy states of electrons in a hydrogen atom:

i·ħ·∂ψ/∂t = −(1/2)·(ħ²/m)·∇²ψ + V·ψ

Of course, it is easy to see that we replaced V by q·Φ, which makes sense: the potential energy of a charge in an electric field is the product of the charge (q) and the (electric) potential (Φ), because Φ is, obviously, the potential energy of the unit charge. It’s also easy to see we can re-write −ħ²·∇²ψ as [(ħ/i)·∇]·[(ħ/i)·∇]ψ because (1/i)·(1/i) = 1/i² = 1/(−1) = −1. 🙂 Alright. So it’s just that −q·A term in the (ħ/i)∇ − q·A expression that we need to explain now.

Unfortunately, that explanation is not so easy. Feynman basically re-derives Schrödinger’s equation – using his trademark historical argument, which did not include any magnetic field – with a vector potential added this time. The re-derivation is rather annoying, and I didn’t have the courage to go through it myself, so you should – just like me – just believe Feynman when he says that, when there’s a vector potential – i.e. when there’s a magnetic field – the (ħ/i)·∇ operator – which is the momentum operator – ought to be replaced by a new momentum operator:

new-momentum-operator

So… Well… There we are… 🙂 So far, so good.

Local conservation of probability

The title of this section in Feynman’s Lecture (yes, still the same Lecture – we’re not switching topics here) is the equation of continuity for probabilities. I find it brilliant, because it confirms my interpretation of the wave function as describing some kind of energy flow. Let me quote Feynman on his endeavor here:

“An important part of the Schrödinger equation for a single particle is the idea that the probability to find the particle at a position is given by the absolute square of the wave function. It is also characteristic of the quantum mechanics that probability is conserved in a local sense. When the probability of finding the electron somewhere decreases, while the probability of the electron being elsewhere increases (keeping the total probability unchanged), something must be going on in between. In other words, the electron has a continuity in the sense that if the probability decreases at one place and builds up at another place, there must be some kind of flow between. If you put a wall, for example, in the way, it will have an influence and the probabilities will not be the same. So the conservation of probability alone is not the complete statement of the conservation law, just as the conservation of energy alone is not as deep and important as the local conservation of energy. If energy is disappearing, there must be a flow of energy to correspond. In the same way, we would like to find a “current” of probability such that if there is any change in the probability density (the probability of being found in a unit volume), it can be considered as coming from an inflow or an outflow due to some current.”

This is it, really! The wave function does represent some kind of energy flow – between a so-called ‘real’ and a so-called ‘imaginary’ space, which are to be defined in terms of directional versus rotational energy, as I try to point out – admittedly: more by appealing to intuition than to mathematical rigor – in that post of mine on the meaning of the wavefunction.

So what is the flow – or probability current as Feynman refers to it? Well… Here’s the formula:

probability-current-2

Huh? Yes. Don’t worry too much about it right now. The essential point is to understand what this current – denoted by J – actually stands for:

probability-current-1

So what’s next? Well… Nothing. I’ll actually refer you to Feynman now, because I can’t improve on how he explains how pairs of electrons start behaving when temperatures are low enough to render Boltzmann’s Law irrelevant: the kinetic energy that’s associated with temperature can no longer break up electron pairs if temperature comes close to the zero point.

Huh? What? Electron pairs? Electrons are not supposed to form pairs, are they? They carry the same charge and are, therefore, supposed to repel each other. Well… Yes and no. In my post on the electron orbitals in a hydrogen atom – which just re-packaged Feynman’s treatment of the subject-matter in a, hopefully, somewhat more readable format – we calculated electron orbitals neglecting spin. In Feynman’s words:

“We make another approximation by forgetting that the electron has spin. […] The non-relativistic Schrödinger equation disregards magnetic effects. [However] Small magnetic effects [do] occur because, from the electron’s point-of-view, the proton is a circulating charge which produces a magnetic field. In this field the electron will have a different energy with its spin up than with it down. [Hence] The energy of the atom will be shifted a little bit from what we will calculate. We will ignore this small energy shift. Also we will imagine that the electron is just like a gyroscope moving around in space always keeping the same direction of spin. Since we will be considering a free atom in space the total angular momentum will be conserved. In our approximation we will assume that the angular momentum of the electron spin stays constant, so all the rest of the angular momentum of the atom—what is usually called “orbital” angular momentum—will also be conserved. To an excellent approximation the electron moves in the hydrogen atom like a particle without spin—the angular momentum of the motion is a constant.”

To an excellent approximation… But… Well… Electrons in a metal do form pairs, because they can give up energy in that way and, hence, they are more stable that way. Feynman does not go into the details here – I guess because that’s way beyond the undergrad level – but refers to the Bardeen–Cooper–Schrieffer (BCS) theory instead – the authors of which got a Nobel Prize in Physics in 1972 (that’s a decade or so after Feynman wrote this particular Lecture), so I must assume the theory is well accepted now. 🙂

Of course, you’ll shout now: Hey! Hydrogen is not a metal! Well… Think again: the latest breakthrough in physics is making hydrogen behave like a metal. 🙂 And I am really talking the latest breakthrough: Science just published the findings of this experiment last month! 🙂 🙂 In any case, we’re not talking hydrogen here but superconducting materials, to which – as far as we know – the BCS theory does apply.

So… Well… I am done. I just wanted to show you why it’s important to work your way through Feynman’s last Lecture because… Well… Quantum mechanics does explain everything – although the nitty-gritty of it (the Meissner effect, the London equation, flux quantization, etc.) is a rather hard bullet to bite. 😦

Don’t give up! I am struggling with the nitty-gritty too! 🙂

The Aharonov-Bohm effect

This title sounds very exciting. It is – or was, I should say – one of these things I thought I would never ever understand, until I started studying physics, that is. 🙂

Having said that, there is – incidentally – nothing very special about the Aharonov-Bohm effect. As Feynman puts it: “The theory was known from the beginning of quantum mechanics in 1926. […] The implication was there all the time, but no one paid attention to it.”

To be fair, he also admits the experiment itself – proving the effect – is “very, very difficult”, which is why the first experiment claiming to confirm the predicted effect was only set up in 1960. In fact, some claim the results of that experiment were ambiguous, and that it was only in 1986, with the experiment of Akira Tonomura, that the Aharonov-Bohm effect was unambiguously demonstrated. So what is it about?

In essence, it proves the reality of the vector potential—and of the (related) magnetic field. What do we mean by a real field? To put it simply, a real field cannot act on some particle from a distance through some kind of spooky ‘action-at-a-distance’: real fields must be specified at the position of the particle itself and describe what happens there. Now you’ll immediately wonder: so what’s a non-real field? Well… Some field that does act through some kind of spooky ‘action-at-a-distance.’ As for an example… Well… I can’t give you one because we’ve only been discussing real fields so far. 🙂

So it’s about what a magnetic (or an electric) field does in terms of influencing motion and/or quantum-mechanical amplitudes. In fact, we discussed this matter quite a while ago (check my 2015 post on it). Now, I don’t want to re-write that post, but let me just remind you of the essentials. The two equations for the magnetic field (B) in Maxwell’s set of four equations (the two others specify the electric field E) are: (1) ∇·B = 0 and (2) c²·∇×B = j/ε₀ + ∂E/∂t. Now, you can temporarily forget about the second equation, but you should note that the ∇·B = 0 equation is always true (unlike the ∇×E = 0 expression, which is true for electrostatics only, when there are no moving charges). So it says that the divergence of B is zero, always.

Now, from our posts on vector calculus, you may or may not remember that the divergence of the curl of a vector field is always zero. We wrote: div(curl A) = ∇·(∇×A) = 0, always. Now, there is another theorem that we can now apply, which says the following: if the divergence of a vector field, say D, is zero – so if ∇·D = 0 – then D will be the curl of some other vector field C, so we can write: D = ∇×C. When we now apply this to our ∇·B = 0 equation, we can confidently state the following:

If ∇·B = 0, then there is an A such that B = ∇×A

We can also write this as follows: ∇·B = ∇·(∇×A) = 0 and, hence, B = ∇×A. Now, it’s this vector field A that is referred to as the (magnetic) vector potential, and so that’s what we want to talk about here. As a start, it may be good to write out all of the components of our B = ∇×A vector:

formula for B

In that 2015 post, I answered the question as to why we’d need this new vector field in a way that wasn’t very truthful: I just said that, in many situations, it would be more convenient – from a mathematical point of view, that is – to first find A, and then calculate the derivatives above to get B.

Now, Feynman says the following about this argument in his Lecture on the topic: “It is true that in many complex problems it is easier to work with A, but it would be hard to argue that this ease of technique would justify making you learn about one more vector field. […] We have introduced A because it does have an important physical significance: it is a real physical field.” Let us follow his argument here.

Quantum-mechanical interference effects

Let us first remind ourselves of the quintessential electron interference experiment illustrated below. [For a much more modern rendering of this experiment, check out the Tout Est Quantique video on it. It’s much more amusing than my rather dry exposé here, but it doesn’t give you the math.]

interference

We have electrons, all of (nearly) the same energy, which leave the source – one by one – and travel towards a wall with two narrow slits. Beyond the wall is a backstop with a movable detector which measures the rate, which we call I, at which electrons arrive at a small region of the backstop at the distance x from the axis of symmetry. The rate (or intensity) I is proportional to the probability that an individual electron that leaves the source will reach that region of the backstop. This probability has the complicated-looking distribution shown in the illustration, which we understand is due to the interference of two amplitudes, one from each slit. So we associate the two trajectories with two amplitudes, which Feynman writes as A₁·e^(iΦ₁) and A₂·e^(iΦ₂) respectively.

As usual, Feynman abstracts away from the time variable here because it is, effectively, not relevant: the interference pattern depends on distances and angles only. Having said that, for a good understanding, we should – perhaps – write our two wavefunctions as A₁·e^(i(ωt + Φ₁)) and A₂·e^(i(ωt + Φ₂)) respectively. The point is: we’ve got two wavefunctions – one for each trajectory – even if it’s only one electron going through the slit: that’s the mystery of quantum mechanics. 🙂 We need to add these waves so as to get the interference effect:

R = A₁·e^(i(ωt + Φ₁)) + A₂·e^(i(ωt + Φ₂)) = [A₁·e^(iΦ₁) + A₂·e^(iΦ₂)]·e^(iωt)

Now, we know we need to take the absolute square of this thing to get the intensity – or probability (before normalization). The absolute square of a product is the product of the absolute squares of the factors, and we also know that the absolute square of any complex number is just the product of the same number with its complex conjugate. Hence, the absolute square of the e^(iωt) factor is equal to |e^(iωt)|² = e^(iωt)·e^(−iωt) = e⁰ = 1. So the time-dependent factor doesn’t matter: that’s why we can always abstract away from it. Let us now take the absolute square of the [A₁·e^(iΦ₁) + A₂·e^(iΦ₂)] factor, which we can write as:

|R|² = |A₁·e^(iΦ₁) + A₂·e^(iΦ₂)|² = (A₁·e^(iΦ₁) + A₂·e^(iΦ₂))·(A₁·e^(−iΦ₁) + A₂·e^(−iΦ₂))

= A₁² + A₂² + 2·A₁·A₂·cos(Φ₁−Φ₂) = A₁² + A₂² + 2·A₁·A₂·cosδ with δ = Φ₁−Φ₂

OK. This is probably going a bit quick, but you should be able to figure it out, especially when remembering that e^(iΦ) + e^(−iΦ) = 2·cosΦ and cosΦ = cos(−Φ). The point to note is that the intensity is equal to the sum of the intensities of both waves plus an interference term, which is equal to 2·A₁·A₂·cos(Φ₁−Φ₂) and, hence, ranges from −2·A₁·A₂ to +2·A₁·A₂. Now, it takes a bit of geometrical wizardry to be able to write the phase difference δ = Φ₁−Φ₂ as

δ = 2π·a/λ = 2π·(x/L)·d/λ

—but it can be done. 🙂 Well… […] OK. 🙂 Let me quickly help you here by copying another diagram from Feynman – one he uses to derive the formula for the phase difference on arrival between the signals from two oscillators. A₁ and A₂ are equal here (A₁ = A₂ = A), so that makes the situation below somewhat simpler to analyze. However, instead, we have the added complication of a phase difference (α) at the origin – which Feynman refers to as an intrinsic relative phase.

triangle

When we apply the geometry shown above to our electron passing through the slits, we should, of course, equate α to zero. For the rest, the picture is pretty similar to the two-slit picture. The distance a in the two-slit set-up – i.e. the difference in the path lengths for the two trajectories of our electron(s) – is, obviously, equal to the d·sinθ factor in the oscillator picture. Also, because L is huge as compared to x, we may assume that trajectories 1 and 2 are more or less parallel and, importantly, that the triangles in the picture – small and large – are right-angled. Now, trigonometry tells us that sinθ is equal to the ratio of the opposite side of the triangle and the hypotenuse (i.e. the longest side of the right-angled triangle). The opposite side of the triangle is x and, because x is very, very small as compared to L, we may approximate the length of the hypotenuse with L. [I know—a lot of approximations here, but… Well… Just go along with it for now…] Hence, we can equate sinθ to x/L and, therefore, a = d·x/L. Now we need to calculate the phase difference. How many wavelengths do we have in a? That’s simple: a/λ, i.e. the total distance divided by the wavelength. Now these a/λ wavelengths correspond to 2π·a/λ radians (one cycle corresponds to one wavelength which, in turn, corresponds to 2π radians). So we’re done. We’ve got the formula: δ = Φ₁−Φ₂ = 2π·a/λ = 2π·(x/L)·d/λ.

Huh? Yes. Just think about it. I need to move on. The point is: when δ is equal to zero, the two waves are in phase, and the probability will have a maximum. When δ = π, then the waves are out of phase and interfere destructively (cosπ = −1), so the intensity (and, hence, the probability) reaches a minimum.
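If you don’t trust the geometry, you can at least check the algebra. Here is a quick verification of the cosine rule – my own, with arbitrary amplitudes:

```python
# Check that |A1·e^(iΦ1) + A2·e^(iΦ2)|² = A1² + A2² + 2·A1·A2·cos δ
import numpy as np

A1, A2 = 1.0, 0.7
delta = np.linspace(0, 2 * np.pi, 100)   # phase differences δ = Φ1 − Φ2

# Direct complex sum, with Φ1 = δ and Φ2 = 0
direct = np.abs(A1 * np.exp(1j * delta) + A2) ** 2

# The formula derived above
formula = A1**2 + A2**2 + 2 * A1 * A2 * np.cos(delta)

print(np.allclose(direct, formula))      # True
```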

So that’s pretty obvious – or should be pretty obvious if you’ve understood some of the basics we presented in this blog. We now move to the non-standard stuff, i.e. the Aharonov-Bohm effect(s).

Interference in the presence of an electromagnetic field

In essence, the Aharonov-Bohm effect is nothing special: it is just a law – two laws, to be precise – that tells us how the phase of our wavefunction changes because of the presence of a magnetic and/or electric field. As such, it is not very different from previous analyses and presentations, such as those showing how amplitudes are affected by a potential − such as an electric potential, or a gravitational field, or a magnetic field − and how they relate to a classical analysis of the situation (see, for example, my November 2015 post on this topic). If anything, it’s just a more systematic approach to the topic and – importantly – an approach centered around the use of the vector potential A (and the electric potential Φ). Let me give you the formulas:

f1

f2

The first formula tells us that the phase of the amplitude for our electron (or whatever charged particle) to arrive at some location via some trajectory is changed by an amount that is equal to the integral of the vector potential along the trajectory times the charge of the particle over Planck’s constant. I know that’s quite a mouthful but just read it a couple of times.

The second formula tells us that, if there’s an electrostatic field, it will produce a phase change given by the negative of the time integral of the (scalar) potential Φ.

These two expressions – taken together – tell us what happens for any electromagnetic field, static or dynamic. In fact, they are really the (two) law(s) replacing the F = q·(E + v×B) force law in classical mechanics.

So how does it work? Let me further follow Feynman’s treatment of the matter—which analyzes what happens when we’d have some magnetic field in the two-slit experiment (so we assume there’s no electric field: we only look at some magnetic field). We said Φ₁ was the phase of the wave along trajectory 1, and Φ₂ was the phase of the wave along trajectory 2 – without a magnetic field, that is, so B = 0. Now, the (first) formula above tells us that, when the field is switched on, the new phases will be the following:

f3

f4

Hence, the phase difference δ = Φ₁−Φ₂ will now be equal to:

f5

Now, we can combine the two integrals into one that goes forward along trajectory 1 and comes back along trajectory 2. We’ll denote this path as 1-2 and write the new integral as follows:

f6

Note that we’re using a notation here which suggests that the 1-2 path is closed, which is… Well… Yet another approximation of the Master. In fact, his assumption that the new 1-2 path is closed proves to be essential in the argument that follows the one we presented above, in which he shows that the inherent arbitrariness in our choice of a vector potential function doesn’t matter, but… Well… I don’t want to get too technical here.

Let me conclude this post by noting we can re-write our grand formula above in terms of the flux of the magnetic field B:

f7
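To get a feel for the magnitudes involved, here is a quick back-of-the-envelope calculation – my own numbers, chosen for illustration only, not those of the actual 1960 set-up:

```python
# The flux rule above: the extra phase difference equals (q/ħ) times the
# enclosed magnetic flux. All dimensions below are illustrative.
import numpy as np

q = 1.602e-19       # electron charge (C)
hbar = 1.055e-34    # J·s

radius = 0.5e-6     # a tiny 'whisker' solenoid: 0.5 micron radius
B = 1e-2            # field inside the whisker (T)
flux = B * np.pi * radius**2          # enclosed flux (Wb)

delta_shift = (q / hbar) * flux
print(f"extra phase shift: {delta_shift:.1f} rad")

# One flux quantum h/q shifts the interference pattern by a full 2π
flux_quantum = 2 * np.pi * hbar / q
print(f"flux in units of h/q: {flux / flux_quantum:.2f}")
```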

So… Well… That’s it, really. I’ll refer you to Feynman’s Lecture on this matter for a detailed description of the 1960 experiment itself, which involves a magnetized iron whisker that acts like a tiny solenoid—small enough to match the tiny scale of the interference experiment itself. I must warn you though: there is a rather long discussion in that Lecture on the ‘reality’ of the magnetic and the vector potential field which – unlike Feynman’s usual approach to discussions like this – is rather philosophical and partially misinformed, as it assumes there is zero magnetic field outside of a solenoid. That’s true for infinitely long solenoids, but not true for real-life solenoids: if we have some A, then we must also have some B, and vice versa. Hence, if the magnetic field (B) is a real field (in the sense that it cannot act on some particle from a distance through some kind of spooky ‘action-at-a-distance’), then the vector potential A is an equally real field—and vice versa. Feynman admits as much when he concludes his rather lengthy philosophical excursion with the following conclusion (out of which I already quoted one line in my introduction to this post):

“This subject has an interesting history. The theory we have described was known from the beginning of quantum mechanics in 1926. The fact that the vector potential appears in the wave equation of quantum mechanics (called the Schrödinger equation) was obvious from the day it was written. That it cannot be replaced by the magnetic field in any easy way was observed by one man after the other who tried to do so. This is also clear from our example of electrons moving in a region where there is no field and being affected nevertheless. But because in classical mechanics A did not appear to have any direct importance and, furthermore, because it could be changed by adding a gradient, people repeatedly said that the vector potential had no direct physical significance—that only the magnetic and electric fields are “real” even in quantum mechanics. It seems strange in retrospect that no one thought of discussing this experiment until 1956, when Bohm and Aharonov first suggested it and made the whole question crystal clear. The implication was there all the time, but no one paid attention to it. Thus many people were rather shocked when the matter was brought up. That’s why someone thought it would be worthwhile to do the experiment to see if it was really right, even though quantum mechanics, which had been believed for so many years, gave an unequivocal answer. It is interesting that something like this can be around for thirty years but, because of certain prejudices of what is and is not significant, continues to be ignored.”

Well… That’s it, folks! Enough for today! 🙂

An interpretation of the wavefunction

This is my umpteenth post on the same topic. 😦 It is obvious that this search for a sensible interpretation is consuming me. Why? I am not sure. Studying physics is frustrating. As a leading physicist puts it:

“The teaching of quantum mechanics these days usually follows the same dogma: firstly, the student is told about the failure of classical physics at the beginning of the last century; secondly, the heroic confusions of the founding fathers are described and the student is given to understand that no humble undergraduate student could hope to actually understand quantum mechanics for himself; thirdly, a deus ex machina arrives in the form of a set of postulates (the Schrödinger equation, the collapse of the wavefunction, etc); fourthly, a bombardment of experimental verifications is given, so that the student cannot doubt that QM is correct; fifthly, the student learns how to solve the problems that will appear on the exam paper, hopefully with as little thought as possible.”

That’s obviously not the way we want to understand quantum mechanics. [With we, I mean, me, of course, and you, if you’re reading this blog.] Of course, that doesn’t mean I don’t believe Richard Feynman, one of the greatest physicists ever, when he tells us no one, including himself, understands physics quite the way we’d like to understand it. Such statements should not prevent us from trying harder. So let’s look for better metaphors. The animation below shows the two components of the archetypal wavefunction – a simple sine and cosine. They’re the same function actually, but their phases differ by 90 degrees (π/2).

circle_cos_sin

It makes me think of a V-2 engine with the pistons at a 90-degree angle. Look at the illustration below, which I took from a rather simple article on cars and engines that has nothing to do with quantum mechanics. Think of the moving pistons as harmonic oscillators, like springs.

two-timer-576-px-photo-369911-s-original

We will also think of the center of each cylinder as the zero point: think of that point as a point where – if we’re looking at one cylinder alone – the internal and external pressure balance each other, so the piston would not move… Well… If it weren’t for the other piston, because the second piston is not at the center when the first is. In fact, it is easy to verify and compare the following positions of both pistons, as well as the associated dynamics of the situation:

| Piston 1 | Piston 2 | Motion of Piston 1 | Motion of Piston 2 |
| --- | --- | --- | --- |
| Top | Center | Compressed air will push piston down | Piston moves down against external pressure |
| Center | Bottom | Piston moves down against external pressure | External air pressure will push piston up |
| Bottom | Center | External air pressure will push piston up | Piston moves further up and compresses the air |
| Center | Top | Piston moves further up and compresses the air | Compressed air will push piston down |

When the pistons move, their linear motion will be described by a sinusoidal function: a sine or a cosine. In fact, the 90-degree V-2 configuration ensures that the linear motion of the two pistons will be exactly the same, except for a phase difference of 90 degrees. [Of course, because of the sideways motion of the connecting rods, our sine and cosine functions describe the linear motion only approximately, but you can easily imagine the idealized limit situation. If not, check Feynman’s description of the harmonic oscillator.]

The question is: if we’d have a set-up like this, two springs – or two harmonic oscillators – attached to a shaft through a crank, would this really work as a perpetuum mobile? We are obviously talking about energy being transferred back and forth between the rotating shaft and the moving pistons… So… Well… Let’s model this: the total energy, potential and kinetic, in each harmonic oscillator is constant. Hence, the piston only delivers or receives kinetic energy from the rotating mass of the shaft.

Now, in physics, that’s a bit of an oxymoron: we don’t think of negative or positive kinetic (or potential) energy in the context of oscillators. We don’t think of the direction of energy. But… Well… If we’ve got two oscillators, our picture changes, and so we may have to adjust our thinking here.

Let me start by giving you an authoritative derivation of the various formulas involved here, taking the example of the physical spring as an oscillator—but the formulas are basically the same for any harmonic oscillator.

energy harmonic oscillator

The first formula is a general description of the motion of our oscillator. The coefficient in front of the cosine function (a) is the maximum amplitude. Of course, you will also recognize ω₀ as the natural frequency of the oscillator, and Δ as the phase factor, which takes into account our t = 0 point. In our case, for example, we have two oscillators with a phase difference equal to π/2 and, hence, Δ would be 0 for one oscillator, and −π/2 for the other. [The formula to apply here is sinθ = cos(θ − π/2).] Also note that we can equate our θ argument to ω₀·t. Now, if a = 1 (which is the case here), then these formulas simplify to:

  1. K.E. = T = m·v²/2 = m·ω₀²·sin²(θ + Δ)/2 = m·ω₀²·sin²(ω₀·t + Δ)/2
  2. P.E. = U = k·x²/2 = k·cos²(θ + Δ)/2

The coefficient k in the potential energy formula characterizes the force: F = −k·x. The minus sign reminds us our oscillator wants to return to the center point, so the force pulls back. From the dynamics involved, it is obvious that k must be equal to m·ω₀², so that gives us the famous T + U = m·ω₀²/2 formula or, including a once again, T + U = m·a²·ω₀²/2.

Now, if we normalize our functions by equating k to one (k = 1), then the motion of our first oscillator is given by the cosθ function, and its kinetic energy will be proportional to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be proportional to:

d(sin²θ)/dθ = 2·sinθ·d(sinθ)/dθ = 2·sinθ·cosθ

Let’s look at the second oscillator now. Just think of the second piston going up and down in our V-twin engine. Its motion is given by the sinθ function which, as mentioned above, is equal to cos(θ−π/2). Hence, its kinetic energy is proportional to sin²(θ−π/2), and how it changes – as a function of θ – will be proportional to:

2·sin(θ−π/2)·cos(θ−π/2) = −2·cosθ·sinθ = −2·sinθ·cosθ

We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the rotating shaft moves at constant speed. Linear motion becomes circular motion, and vice versa, in a frictionless Universe. We have the metaphor we were looking for!
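You can check the energy bookkeeping yourself. The snippet below – my own, with m, k, ω₀ and a all normalized to one – shows that the kinetic energies of the two 90-degree-phased pistons always add up to the same constant, which is what allows the shaft to turn at constant speed:

```python
# Two harmonic oscillators, 90 degrees out of phase: their kinetic
# energies trade places, but the sum never changes.
import numpy as np

theta = np.linspace(0, 4 * np.pi, 1000)

ke_1 = np.sin(theta) ** 2 / 2               # piston 1: x = cos θ
ke_2 = np.sin(theta - np.pi / 2) ** 2 / 2   # piston 2: lags by π/2

print(np.allclose(ke_1 + ke_2, 0.5))        # True: constant total
```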

Somehow, in this beautiful interplay between linear and circular motion, energy is being borrowed from one place to another, and then returned. From what place to what place? I am not sure. We may call it the real and imaginary energy space respectively, but what does that mean? One thing is for sure, however: the interplay between the real and imaginary part of the wavefunction describes how energy propagates through space!

How exactly? Again, I am not sure. Energy is, obviously, mass in motion – as evidenced by the E = m·c² equation – and it may not have any direction (when everything is said and done, it’s a scalar quantity without direction), but the energy in a linear motion is surely different from that in a circular motion, and our metaphor suggests we need to think somewhat more along those lines. Perhaps we will, one day, be able to square this circle. 🙂

Schrödinger’s equation

Let’s analyze the interplay between the real and imaginary part of the wavefunction through an analysis of Schrödinger’s equation, which we write as:

i·ħ·∂ψ/∂t = −(ħ²/2m)·∇²ψ + V·ψ

We can do a quick dimensional analysis of both sides:

  • [i·ħ·∂ψ/∂t] = N·m·s/s = N·m
  • [−(ħ²/2m)·∇²ψ] = N·m³/m² = N·m
  • [V·ψ] = N·m

Note the dimension of the ‘diffusion’ constant ħ²/2m: [ħ²/2m] = N²·m²·s²/kg = N²·m²·s²/(N·s²/m) = N·m³. Also note that, in order for the dimensions to come out alright, the dimension of V – the potential – must be that of energy. Hence, Feynman’s description of it as the potential energy – rather than the potential tout court – is somewhat confusing but correct: V must equal the potential energy of the electron. Hence, V is not the conventional potential, i.e. the (potential) energy of the unit charge (1 coulomb). Instead, the natural unit of charge is used here, i.e. the charge of the electron itself.

Now, Schrödinger’s equation – without the V·ψ term – can be written as the following pair of equations:

  1. Re(∂ψ/∂t) = −(1/2)·(ħ/m)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ħ/m)·Re(∇²ψ)

This closely resembles the propagation mechanism of an electromagnetic wave as described by Maxwell’s equations for free space (i.e. a space with no charges), but E and B are vectors, not scalars. How do we get this result? Well… ψ is a complex function, which we can write as a + i·b. Likewise, ∂ψ/∂t is a complex function, which we can write as c + i·d, and ∇²ψ can then be written as e + i·f. If we temporarily forget about the coefficients (ħ, ħ²/m and V), then Schrödinger’s equation – including the V·ψ term – amounts to writing something like this:

i·(c + i·d) = −(e + i·f) + (a + i·b) ⇔ a + i·b = i·c − d + e + i·f ⇔ a = −d + e and b = c + f

Hence, we can now write:

  1. V·Re(ψ) = −ħ·Im(∂ψ/∂t) + (1/2)·(ħ²/m)·Re(∇²ψ)
  2. V·Im(ψ) = ħ·Re(∂ψ/∂t) + (1/2)·(ħ²/m)·Im(∇²ψ)

This simplifies to the two equations above for V = 0, i.e. when there is no potential (electron in free space). Now we can bring the Re and Im operators into the brackets to get:

  1. V·Re(ψ) = −ħ·∂Im(ψ)/∂t + (1/2)·(ħ²/m)·∇²Re(ψ)
  2. V·Im(ψ) = ħ·∂Re(ψ)/∂t + (1/2)·(ħ²/m)·∇²Im(ψ)

This is very interesting, because we can re-write this using the quantum-mechanical energy operator H = −(ħ²/2m)·∇² + V· (note the multiplication sign after the V, which we do not have – for obvious reasons – for the −(ħ²/2m)·∇² expression):

  1. H[Re(ψ)] = −ħ·∂Im(ψ)/∂t
  2. H[Im(ψ)] = ħ·∂Re(ψ)/∂t

A dimensional analysis shows us both sides are, once again, expressed in N∙m. It’s a beautiful expression because – if we write the real and imaginary part of ψ as r∙cosθ and r∙sinθ, we get:

  1. H[cosθ] = −ħ∙∂sinθ/∂t = E∙cosθ
  2. H[sinθ] = ħ∙∂cosθ/∂t = E∙sinθ

Indeed, θ = (p·x − E·t)/ħ and, hence, −ħ·∂sinθ/∂t = ħ·cosθ·(E/ħ) = E·cosθ and ħ·∂cosθ/∂t = ħ·sinθ·(E/ħ) = E·sinθ. Now we can combine the two equations in one equation again and write:

H[r·(cosθ + i·sinθ)] = r·E·(cosθ + i·sinθ) ⇔ H[ψ] = E·ψ

The operator H – applied to the wavefunction – gives us the (scalar) product of the energy E and the wavefunction itself. Isn’t this strange?
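Strange or not, the result is easy to check numerically – at least for the simplest case. Here is my own quick sanity check for a free particle (V = 0) in natural units, using a finite-difference approximation of the second derivative:

```python
# Verify H[ψ] = E·ψ for a free-particle plane wave, with ħ = m = 1,
# H = −(ħ²/2m)·d²/dx² and E = p²/2m.
import numpy as np

hbar = m = 1.0
p = 2.0
E = p**2 / (2 * m)

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = np.exp(1j * p * x / hbar)    # plane wave at t = 0

# central-difference second derivative (interior points only)
d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
H_psi = -(hbar**2 / (2 * m)) * d2psi

print(np.allclose(H_psi, E * psi[1:-1]))   # True
```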

Hmm… I need to further verify and explain this result… I’ll probably do so in yet another post on the same topic… 🙂

Post scriptum: The symmetry of our V-2 engine – or perpetuum mobile – is interesting: its cross-section has only one axis of symmetry. Hence, we may associate some angle with it, so as to define its orientation in the two-dimensional cross-sectional plane. Of course, the cross-sectional plane itself is at right angles to the crankshaft axis, which we may also associate with some angle in three-dimensional space. Hence, its geometry defines two orthogonal directions which, in turn, define a spherical coordinate system, as shown below.

558px-3d_spherical

We may, therefore, say that three-dimensional space is actually being implied by the geometry of our V-2 engine. Now that is interesting, isn’t it? 🙂

Quantum-mechanical operators

I wrote a post on quantum-mechanical operators some while ago but, when re-reading it now, I am not very happy about it, because it tries to cover too much ground in one go. In essence, I regret my attempt to constantly switch between the matrix representation of quantum physics – with the | state 〉 symbols – and the wavefunction approach, so as to show how the operators work for both cases. But then that’s how Feynman approaches this.

However, let’s admit it: while Heisenberg’s matrix approach is equivalent to Schrödinger’s wavefunction approach – and while it’s the only approach that works well for n-state systems – the wavefunction approach is more intuitive, because:

  1. Most practical examples of quantum-mechanical systems (like the description of the electron orbitals of an atomic system) involve continuous coordinate spaces, so we have an infinite number of states and, hence, we need to describe it using the wavefunction approach.
  2. Most of us are much better-versed in using derivatives and integrals, as opposed to matrix operations.
  3. A more intuitive statement of the same argument above is the following: the idea of one state flowing into another, rather than being transformed through some matrix, is much more appealing. 🙂

So let’s stick to the wavefunction approach here. While you need to remember that there’s a ‘matrix equivalent’ for each of the equations we’re going to use in this post, we’re not going to talk about it.

The operator idea

In classical physics – high school physics, really – we would describe a pointlike particle traveling in space by a function relating its position (x) to time (t): x = x(t). Its (instantaneous) velocity is, obviously, v(t) = dx/dt. Simple. Obvious. Let’s complicate matters now by saying that the idea of a velocity operator would sort of generalize the v(t) = dx/dt velocity equation by making abstraction of the specifics of the x = x(t) function.

Huh? Yes. We could define a velocity ‘operator’ as:

velocity operator

Now, you may think that’s a rather ridiculous way to describe what an operator does, but – in essence – it’s correct. We have some function – describing an elementary particle, or a system, or an aspect of the system – and then we have some operator, which we apply to our function, to extract the information from it that we want: its velocity, its momentum, its energy. Whatever. Hence, in quantum physics, we have an energy operator, a position operator, a momentum operator, an angular momentum operator and… Well… I guess I listed the most important ones. 🙂

It’s kinda logical. Our velocity operator looks at one particular aspect of whatever it is that’s going on: the time rate of change of position. We do refer to that as the velocity. Our quantum-mechanical operators do the same: they look at one aspect of what’s being described by the wavefunction. [At this point, you may wonder what the other properties of our classical ‘system’ might be – i.e. properties other than velocity – because we’re just looking at a pointlike particle here, but… Well… Think of electric charge and forces acting on it, so it accelerates and decelerates in all kinds of ways, and we have kinetic and potential energy and all that. Or momentum. So it’s just the same: the x = x(t) function may cover a lot of complexities, just like the wavefunction does!]

The Wikipedia article on the momentum operator is, for a change (I usually find Wikipedia quite abstruse on these matters), quite simple – and, therefore, quite enlightening here. It applies the following simple logic to the elementary wavefunction ψ = e^(i·(k·x − ω·t)), with the de Broglie relations telling us that ω = E/ħ and k = p/ħ:

mom op 1

Note we forget about the normalization coefficient a here. It doesn’t matter: we can always stuff it in later. The point to note is that we can sort of forget about ψ (or abstract away from it—as mathematicians and physicists would say) by defining the momentum operator, which we’ll write as:

mom op 2

Its three-dimensional equivalent is calculated in very much the same way:

wiki

So this operator, when operating on a particular wavefunction, gives us the (expected) momentum when we would actually catch our particle there, provided the momentum doesn’t vary in time. [Note that it may – and actually is likely to – vary in space!]
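Here is the same logic in a few lines of code – my own sketch, in natural units – showing that the operator does pull p = ħ·k out of a plane wave:

```python
# Apply the momentum operator (ħ/i)·d/dx to ψ = e^(i·k·x) numerically
# and recover the eigenvalue p = ħ·k.
import numpy as np

hbar = 1.0
k = 3.0
x = np.linspace(-5, 5, 10001)
dx = x[1] - x[0]
psi = np.exp(1j * k * x)                 # t = 0 slice of the plane wave

p_psi = (hbar / 1j) * np.gradient(psi, dx)

# p̂ψ = ħk·ψ, so the ratio should be ħk everywhere (away from the edges)
ratio = (p_psi / psi)[100:-100]
print(np.allclose(ratio, hbar * k, atol=1e-3))   # True
```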

So that’s the basic idea of an operator. However, the comparison goes further. Indeed, a superficial reading of what operators are all about gives you the impression we get all these observables (or properties of the system) just by applying the operator to the (wave)function. That’s not the case. There is the randomness. The uncertainty. Actual wavefunctions are superpositions of several elementary waves with various coefficients representing their amplitudes. So we need averages, or expected values: E[X]. Even our velocity operator ∂/∂t – in the classical world – gives us an instantaneous velocity only. To get the average velocity (in quantum mechanics, we’ll be interested in the average momentum, or the average position, or the average energy – rather than the average velocity), we’re going to have to calculate the total distance traveled. Now, that’s going to involve a line integral:

s = ∫ds.

The principle is illustrated below.

line integral

You’ll say: this is kids’ stuff, and it is. Just note how we write the same integral in terms of the x and t coordinates, and using our new velocity operator:

integral

Kids’ stuff. Yes. But it’s good to think about what it represents really. For example, the simplest quantum-mechanical operator is the position operator. It’s just x for the x-coordinate, y for the y-coordinate, and z for the z-coordinate. To get the average position of a stationary particle – represented by the wavefunction ψ(r, t) – in three-dimensional space, we need to calculate the following volume integral:

position operator 3D V2

Simple? Yes and no. The r·|ψ(r)|² integrand is obvious: we multiply each possible position (r) by its probability (or likelihood), which is equal to P(r) = |ψ(r)|². However, look at the assumptions: we already omitted the time variable. Hence, the particle we’re describing here must be stationary, indeed! So we’ll need to re-visit the whole subject allowing for averages to change with time. We’ll do that later. I just wanted to show you that those integrals – even with very simple operators, like the position operator – can become very complicated. So you just need to make sure you know what you’re looking at.
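For the one-dimensional case, it is easy enough to spell the recipe out in full. The snippet below – my own worked example, for a Gaussian wave packet centered at some arbitrary point x₀ – computes ⟨x⟩ = ∫x·|ψ(x)|²·dx:

```python
# Average position of a (normalized) 1D Gaussian wave packet.
import numpy as np

x0, sigma = 1.5, 0.5
x = np.linspace(-10, 10, 100001)
dx = x[1] - x[0]

# |ψ|² is a normalized Gaussian centered at x0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0)**2 / (4 * sigma**2))
prob = np.abs(psi) ** 2

x_avg = np.sum(x * prob) * dx
print(f"<x> = {x_avg:.4f} (should be {x0})")
```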

One wavefunction—or two? Or more?

There is another reason why, with the immeasurable benefit of hindsight, I now feel that my earlier post is confusing: I kept switching between the position and the momentum wavefunction, which gives the impression we have different wavefunctions describing different aspects of the same thing. That’s just not true. The position and momentum wavefunction describe essentially the same thing: we can go from one to the other, and back again, by a simple mathematical manipulation. So I should have stuck to descriptions in terms of ψ(x, t), instead of switching back and forth between the ψ(x, t) and φ(p, t) representations.

In any case, the damage is done, so let’s move forward. The key idea is that, when we know the wavefunction, we know everything. I tried to convey that by noting that the real and imaginary part of the wavefunction must, somehow, represent the total energy of the particle. The structural similarity between the mass-energy equivalence relation (i.e. Einstein’s formula: E = m·c²) and the energy formulas for oscillators and spinning masses is too obvious:

  1. The energy of any oscillator is given by E = m·a²·ω₀²/2, which reduces to E = m·ω₀²/2 for a = 1. We may want to liken the real and imaginary component of our wavefunction to two such oscillators and, hence, add them up. The E = m·ω₀² formula we get is then identical to the E = m·c² formula.
  2. The energy of a spinning mass is given by an equivalent formula: E = I·ω²/2 (I is the moment of inertia in this formula). The same 1/2 factor tells us our particle is, somehow, spinning in two dimensions at the same time (i.e. a ‘real’ as well as an ‘imaginary’ space—but both are equally real, because amplitudes interfere), so we get the E = I·ω² formula.

Hence, the formulas tell us we should imagine an electron – or an electron orbital – as a very complicated two-dimensional standing wave. Now, when I write two-dimensional, I refer to the real and imaginary component of our wavefunction, as illustrated below. What I am asking you, however, is to not only imagine these two components oscillating up and down, but also spinning about. Hence, if we think about energy as some oscillating mass – which is what the E = m·c² formula tells us to do – we should remind ourselves we’re talking very complicated motions here: mass oscillates, swirls and spins, and it does so both in real as well as in imaginary space.

rising_circular

What I like about the illustration above is that it shows us – in a very obvious way – why the wavefunction depends on our reference frame. These oscillations do represent something in absolute space, but how we measure it depends on our orientation in that absolute space. But so I am writing this post to talk about operators, not about my grand theory about the essence of mass and energy. So let’s talk about operators now. 🙂

In that post of mine, I showed how the position, momentum and energy operator would give us the average position, momentum and energy of whatever it was that we were looking at, but I didn’t introduce the angular momentum operator. So let me do that now. However, I’ll first recapitulate what we’ve learnt so far in regard to operators.

The energy, position and momentum operators

The equation below defines the energy operator, and also shows how we would apply it to the wavefunction:

energy operator

To the purists: sorry for not (always) using the hat symbol. [I explained why in that post of mine: it’s just too cumbersome.] The others 🙂 should note the following:

  • The average energy is also an expected value: E_av = E[E].
  • The * symbol tells us to take the complex conjugate of the wavefunction.
  • As for the integral, it’s an integral over some volume, so that’s what the d³r shows. Many authors use double or triple integral signs (∫∫ or ∫∫∫) to show it’s a surface or a volume integral, but that makes things look very complicated, and so I don’t do that. I could also have written the integral as ∫ψ(r)*·H·ψ(r)·dV, but then I’d need to explain that the dV stands for dVolume, not for any (differential) potential energy (V).
  • We must normalize our wavefunction for these formulas to work, so all probabilities over the volume add up to 1.

OK. That’s the energy operator. As you can see, it’s a pretty formidable beast, but then it just reflects Schrödinger’s equation which, as I explained a couple of times already, we can interpret as an energy propagation mechanism, or an energy diffusion equation, so it is actually not that difficult to memorize the formula: if you’re able to remember Schrödinger’s equation, then you’ll also have the operator. If not… Well… Then you won’t pass your undergrad physics exam. 🙂
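To see the formula at work, here is a one-dimensional version of the recipe – my own sketch, for a free Gaussian wave packet in natural units (ħ = m = 1), for which the exact answer is known to be p₀²/2m + ħ²/(8·m·σ²):

```python
# E_av = ∫ψ*·H·ψ dx for a free Gaussian wave packet (V = 0).
import numpy as np

hbar = m = 1.0
p0, sigma = 2.0, 1.0
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]

psi = (2 * np.pi * sigma**2) ** -0.25 \
    * np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * p0 * x / hbar)

# H·ψ with V = 0, using a central-difference second derivative
d2psi = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
H_psi = -(hbar**2 / (2 * m)) * d2psi

E_av = (np.sum(np.conj(psi) * H_psi) * dx).real
print(f"E_av  = {E_av:.4f}")
print(f"exact = {p0**2 / (2 * m) + hbar**2 / (8 * m * sigma**2):.4f}")
```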

I already mentioned that the position operator is a much simpler beast. That’s because it’s so intimately related to our interpretation of the wavefunction. It’s the one thing you know about quantum mechanics: the absolute square of the wavefunction gives us the probability density function. So, for one-dimensional space, the position operator is just:

position operator

The equivalent operator for three-dimensional space is equally simple:

position operator 3D V2

Note how the operator, for the one- as well as for the three-dimensional case, gets rid of time as a variable. In fact, the idea itself of an average makes abstraction of the temporal aspect. Well… Here, at least—because we’re looking at some box in space, rather than some box in spacetime. We’ll re-visit that rather particular idea of an average, and allow for averages that change with time, in a short while.

Next, we introduced the momentum operator in that post of mine. For one dimension, Feynman shows this operator is given by the following formula:

momentum operator

Now that does not look very simple. You might think that the ∂/∂x operator reflects our velocity operator, but… Well… No: ∂/∂t gives us a time rate of change, while ∂/∂x gives us the spatial variation. So it’s not the same. Also, that ħ/i factor is quite intriguing, isn’t it? We’ll come back to it in the next section of this post. Let me just give you the three-dimensional equivalent which, remembering that 1/i = −i, you’ll understand to be equal to the following vector operator:

momentum vector operator

Now it’s time to define the operator we wanted to talk about, i.e. the angular momentum operator.

The angular momentum operator

The formula for the angular momentum operator is remarkably simple:

angular momentum operator

Why do I call this a simple formula? Because it looks like the familiar formula of classical mechanics for the z-component of the classical angular momentum L = r × p. I must assume you know how to calculate a vector cross product. If not, check one of my many posts on vector analysis. I must also assume you remember the L = r × p formula. If not, the following animation might bring it all back. If that doesn’t help, check my post on gyroscopes. 🙂

torque_animation-1.gif

Now, spin is a complicated phenomenon, and so, to simplify the analysis, we should think of orbital angular momentum only. This is a simplification, because electron spin is some complicated mix of intrinsic and orbital angular momentum. Hence, the angular momentum operator we’re introducing here is only the orbital angular momentum operator. However, let us not get bogged down in all of the nitty-gritty and, hence, let’s just go along with it for the time being.

I am somewhat hesitant to show you how we get that formula for our operator, but I’ll try to show you using an intuitive approach, which uses only bits and pieces of Feynman’s more detailed derivation. It will, hopefully, give you a bit of an idea of how these differential operators work. Think about a rotation of our reference frame over an infinitesimally small angle – which we’ll denote as ε – as illustrated below.

rotation

Now, the whole idea is that, because of that rotation of our reference frame, our wavefunction will look different. It’s nothing fundamental, but… Well… It’s just because we’re using a different coordinate system. Indeed, that’s where all these complicated transformation rules for amplitudes come in.  I’ve spoken about these at length when we were still discussing n-state systems. In contrast, the transformation rules for the coordinates themselves are very simple:

rotation

Now, because ε is an infinitesimally small angle, we may equate cos(θ) = cos(ε) to 1, and sin(θ) = sin(ε) to ε. Hence, x′ and y′ are then written as x′ = x + εy and y′ = y − εx, while z′ remains z. Vice versa, we can also write the old coordinates in terms of the new ones: x = x′ − εy, y = y′ + εx, and z = z′. That’s obvious. Now comes the difficult thing: you need to think about the two-dimensional equivalent of the simple illustration below.

izvod

If we have some function y = f(x), then we know that, for small Δx, we have the following approximation formula for f(x + Δx): f(x + Δx) ≈ f(x) + (dy/dx)·Δx. It’s the formula you saw in high school: you would then take a limit (Δx → 0), and define dy/dx as the Δy/Δx ratio for Δx → 0. You would do this after re-writing the f(x + Δx) ≈ f(x) + (dy/dx)·Δx formula as:

Δy = Δf = f(x + Δx) − f(x) ≈ (dy/dx)·Δx

Now you need to substitute f for ψ, and Δx for ε. There is only one complication here: ψ is a function of two variables – x and y. In fact, it’s a function of three variables – x, y and z – but we keep z constant. So think of moving from x and y to x + Δx = x + εy and y + Δy = y − εx. Hence, Δx = εy and Δy = −εx. It then makes sense to write Δψ as:

angular momentum operator v2

If you agree with that, you’ll also agree we can write something like this:

formula 2

Now that implies the following formula for Δψ:

repair

This looks great! You can see we get some sort of differential operator here, which is what we want. So the next step should be simple: we just let ε go to zero and then we’re done, right? Well… No. In quantum mechanics, it’s always a bit more complicated. But it’s logical stuff. Think of the following:

1. We will want to re-write the infinitesimally small ε angle as a fraction of i, i.e. the imaginary unit.

Huh? Yes. This little i represents many things. In this particular case, we want to look at it as a right angle. In fact, you know multiplication with i amounts to a rotation by 90 degrees. So we should replace ε by ε·i. It’s like measuring ε in natural units. However, we’re not done.

2. We should also note that Nature measures angles clockwise, rather than counter-clockwise, as evidenced by the fact that the argument of our wavefunction rotates clockwise as time goes by. So our ε is, in fact, a −ε. We will just bring the minus sign inside of the brackets to solve this issue.

Huh? Yes. Sorry. I told you this is a rather intuitive approach to getting what we want to get. 🙂

3. The third modification we’d want to make is to express ε·i as a multiple of Planck’s constant.

Huh? Yes. This is a very weird thing, but it should make sense—intuitively: we’re talking angular momentum here, and its dimension is the same as that of physical action: N·m·s. Therefore, Planck’s quantum of action (ħ = h/2π ≈ 1×10⁻³⁴ J·s ≈ 6.6×10⁻¹⁶ eV·s) naturally appears as… Well… A natural unit, or a scaling factor, I should say.

To make a long story short, we’ll want to re-write ε as −(i/ħ)·ε. However, there is a thing called mathematical consistency, and so, if we want to do such substitutions and prepare for that limit situation (ε → 0), we should re-write that Δψ equation as follows:

final

So now – finally! – we do have the formula we wanted to find for our angular momentum operator:

final 2

The final substitution, which yields the formula we just gave you when commencing this section, just uses the formulas for the linear momentum operator in the x- and y-directions respectively. We’re done! 🙂 Finally!

Well… No. 🙂 The question, of course, is the same as always: what does it all mean, really? That’s always a great question. 🙂 Unfortunately, the answer is rather boring: we can calculate the average angular momentum in the z-direction, using a similar integral as the one we used to get the average energy, or the average linear momentum in some direction. That’s basically it.
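We can at least check that the operator does what its name promises. In the sketch below – my own, in natural units – I apply it to ψ ∝ (x + i·y)·e^(−r²) which, written in polar coordinates, is proportional to e^(iφ) and should therefore carry exactly one unit (ħ) of angular momentum about the z-axis:

```python
# L_z = (ħ/i)·(x·∂/∂y − y·∂/∂x) applied to an eigenfunction with m = 1.
import numpy as np

hbar = 1.0
n = 801
x = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, x, indexing="ij")
d = x[1] - x[0]

psi = (X + 1j * Y) * np.exp(-(X**2 + Y**2))

dpsi_dx = np.gradient(psi, d, axis=0)
dpsi_dy = np.gradient(psi, d, axis=1)
Lz_psi = (hbar / 1j) * (X * dpsi_dy - Y * dpsi_dx)

# compare L_z·ψ with ħ·ψ, staying away from the grid edges
inner = (slice(5, -5), slice(5, -5))
print(np.allclose(Lz_psi[inner], hbar * psi[inner], atol=1e-2))   # True
```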

To compensate for that very boring answer, however, I will show you something that is far less boring. 🙂

Quantum-mechanical weirdness

I’ll shameless copy from Feynman here. He notes that many classical equations get carried over into a quantum-mechanical form (I’ll copy some of his illustrations later). But then there are some that don’t. As Feynman puts it—rather humorously: “There had better be some that don’t come out right, because if everything did, then there would be nothing different about quantum mechanics. There would be no new physics.” He then looks at the following super-obvious equation in classical mechanics:

x·px − px·x = 0

In fact, this equation is so super-obvious that it’s almost meaningless. Almost. It’s super-obvious because multiplication is commutative (for real as well as for complex numbers). However, when we replace x and px by the position and momentum operators, we get an entirely different result. You can verify the following yourself:

strange

This is plain weird! What does it mean? I am not sure. Feynman’s take on it is nice but leaves us in the dark:

Feynman quote 2

He adds: “If Planck’s constant were zero, the classical and quantum results would be the same, and there would be no quantum mechanics to learn!” Hmm… What does it mean, really? Not sure. Let me make two remarks here:

1. We should not put any dot (·) between our operators, because they do not amount to multiplying one with another. We just apply operators successively. Hence, commutativity is not what we should expect.

2. Note that Feynman forgot to put the subscript x in that quote. When doing the same calculations for the equivalent of the x·py − py·x expression, we do get zero, as shown below:

not strange
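Weird as it is, the asymmetry is easy to reproduce numerically. The sketch below – my own, in natural units, with an arbitrary test function – applies both orderings on a grid and recovers i·ħ·ψ for the x and px pair, and zero for the x and py pair:

```python
# Commutation rules on a grid: x·p̂x − p̂x·x returns iħ·ψ, while
# x·p̂y − p̂y·x vanishes identically.
import numpy as np

hbar = 1.0
n = 801
x = np.linspace(-4, 4, n)
X, Y = np.meshgrid(x, x, indexing="ij")
d = x[1] - x[0]

psi = np.exp(-(X**2 + Y**2)) * (1 + 0.3j * X * Y)   # arbitrary test function

def px(f):
    return (hbar / 1j) * np.gradient(f, d, axis=0)

def py(f):
    return (hbar / 1j) * np.gradient(f, d, axis=1)

comm_x_px = X * px(psi) - px(X * psi)   # should be iħ·ψ
comm_x_py = X * py(psi) - py(X * psi)   # should be zero

inner = (slice(5, -5), slice(5, -5))
print(np.allclose(comm_x_px[inner], 1j * hbar * psi[inner], atol=1e-3))  # True
print(np.allclose(comm_x_py[inner], 0, atol=1e-10))                      # True
```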

These equations – zero or not – are referred to as ‘commutation rules’. [Again, I should not have used any dot between x and py, because there is no multiplication here. It’s just a separation mark.] Let me quote Feynman on it, so the matter is dealt with:

quote

OK. So what do we conclude? What are we talking about?

Conclusions

Some of the stuff above was really intriguing. For example, we found that the linear and angular momentum operators are differential operators in the true sense of the word. The angular momentum operator shows us what happens to the wavefunction if we rotate our reference frame over an infinitesimally small angle ε. That’s what’s captured by the formulas we’ve developed, as summarized below:

angular momentum

Likewise, the linear momentum operator captures what happens to the wavefunction for an infinitesimally small displacement of the reference frame, as shown by the equivalent formulas below:

linear momentum

What’s the interpretation for the position operator, and the energy operator? Here we are not so sure. The integrals above make sense, but these integrals are used to calculate average values, as opposed to instantaneous values. So… Well… There is not all that much I can say about the position and energy operator right now, except… Well… We now need to explore the question of how averages could possibly change over time. Let’s do that now.

Averages that change with time

I know: you are totally quantum-mechanicked out by now. So am I. But we’re almost there. In fact, this is Feynman’s last Lecture on quantum mechanics and, hence, I think I should let the Master speak here. So just click on the link and read for yourself. It’s a really interesting chapter, as he shows us the equivalent of Newton’s Law in quantum mechanics, as well as the quantum-mechanical equivalent of other standard equations in classical mechanics. However, I need to warn you: Feynman keeps testing the limits of our intellectual absorption capacity by switching back and forth between matrix and wave mechanics. Interesting, but not easy. For example, you’ll need to remind yourself of the fact that the Hamiltonian matrix is equal to its own complex conjugate (or – because it’s a matrix – its own conjugate transpose).

Having said that, it’s all wonderful. The time rate of change of all those average values is denoted by using the over-dot notation. For example, the time rate of change of the average position is denoted by:

⟨ẋ⟩av = d⟨x⟩av/dt

Once you ‘get’ that new notation, you will quickly understand the derivations. They are not easy (what derivations are in quantum mechanics?), but we get very interesting results. Nice things to play with, or think about—like this identity:

d⟨x⟩av/dt = ⟨p⟩av/m

It takes a while, but you suddenly realize this is the equivalent of the classical dx/dt = v = p/m formula. 🙂
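
You can see this Ehrenfest-type result at work numerically. The sketch below is my own illustration, not Feynman’s: it evolves a free Gaussian wave packet with a split-step Fourier method (ħ = m = 1) and checks that the average position drifts at the rate ⟨p⟩/m:

    import numpy as np

    # Free Gaussian wave packet; check d<x>/dt = <p>/m (with hbar = m = 1)
    N, L = 2048, 200.0
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # angular wavenumbers

    p0, sigma = 1.5, 5.0                         # mean momentum, packet width
    psi = np.exp(-x**2 / (4 * sigma**2) + 1j * p0 * x)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

    mean_x = lambda f: np.sum(x * np.abs(f)**2) * dx

    dt, steps = 0.01, 500
    x0 = mean_x(psi)
    phase = np.exp(-1j * k**2 / 2 * dt)          # exact free evolution in k-space
    for _ in range(steps):
        psi = np.fft.ifft(np.fft.fft(psi) * phase)

    print((mean_x(psi) - x0) / (steps * dt))     # -> ~1.5, i.e. <p>/m = p0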

Another sweet result is the following one:

d⟨p⟩av/dt = ⟨−∂V/∂x⟩av

This is the quantum-mechanical equivalent of Newton’s force law: F = m·a. Huh? Yes. Think of it: minus the spatial derivative of the potential energy is the force. Now just think of the classical dp/dt = d(m·v)/dt = m·dv/dt = m·a formula. […] Can you see it now? Isn’t this just Great Fun?

Note, however, that these formulas also show the limits of our analysis so far, because they treat m as some constant. Hence, we’ll need to relativistically correct them. But that’s complicated, and so we’ll postpone that to another day.

[…]

Well… That’s it, folks! We’re really through! This was the last of the last of Feynman’s Lectures on Physics. So we’re totally done now. Isn’t this great? What an adventure! I hope that, despite the enormous mental energy that’s required to digest all this stuff, you enjoyed it as much as I did. 🙂

Post scriptum 1: I just love Feynman but, frankly, I think he’s sometimes somewhat sloppy with terminology. In regard to what these operators really mean, we should use better terminology: an average is something other than an expected value. Our momentum operator, for example, returns an expected value as such – not an average momentum. We need to deepen the analysis here somewhat, but I’ll also leave that for later.

Post scriptum 2: There is something really interesting about that i·ħ or −(i/ħ) scaling factor – or whatever you want to call it – appearing in our formulas. Remember the Schrödinger equation can also be written as:

i·ħ·∂ψ/∂t = −(1/2)·(ħ²/m)·∇²ψ + V·ψ = Hψ

This is interesting in light of our interpretation of the Schrödinger equation as an energy propagation mechanism. If we write Schrödinger’s equation like we write it here, then we have the energy on the right-hand side – which is time-independent. How do we interpret the left-hand side now? Well… It’s kinda simple, but we just have the time rate of change of the real and imaginary part of the wavefunction here, and the i·ħ factor then becomes a sort of unit in which we measure the time rate of change. Alternatively, you may think of ‘splitting’ Planck’s constant into two: a Planck energy unit, and a Planck time unit. You can then bring the Planck energy unit to the other side, so we’d express the energy in natural units. Likewise, the time rate of change of the components of our wavefunction would also be measured in natural time units if we’d do that.

I know this is all very abstract but, frankly, it’s crystal clear to me. This formula tells us that the energy of the particle that’s being described by the wavefunction is being carried by the oscillations of the wavefunction. In fact, the oscillations are the energy. You can play with the mass factor, by moving it to the left-hand side too, or by using Einstein’s mass-energy equivalence relation. The interpretation remains consistent.

In fact, there is something really interesting here. You know that we usually separate out the spatial and temporal part of the wavefunction, so we write: ψ(r, t) = ψ(r)·e^(−(i/ħ)·E·t). In fact, it is quite common to refer to ψ(r) – rather than to ψ(r, t) – as the wavefunction, even if, personally, I find that quite confusing and misleading (see my page on Schrödinger’s equation). Now, we may want to think of what happens when we’d apply the energy operator to ψ(r) rather than to ψ(r, t). We may think that we’d get a time-independent value for the energy at that point in space, so energy is some function of position only, not of time. That’s an interesting thought, and we should explore it. For example, we may then think of energy as an average that changes with position—as opposed to the (average) position and momentum, which we like to think of as averages that change with time, as mentioned above. I will come back to this later – but perhaps in another post or so. Not now. The only point I want to mention here is the following: you cannot use ψ(r) in Schrödinger’s equation. Why? Well… Schrödinger’s equation is no longer valid when substituting ψ(r) for ψ(r, t), because the left-hand side is then always zero, as ∂ψ(r)/∂t is zero – for any r.

There is another, related, point to this observation. If you think that Schrödinger’s equation implies that the operators on both sides of Schrödinger’s equation must be equivalent (i.e. the same), you’re wrong:

i·ħ·∂/∂t ≠ H = −(1/2)·(ħ²/m)·∇² + V

It’s a basic thing, really: Schrödinger’s equation is not valid for just any function. Hence, it does not work for ψ(r). Only ψ(r, t) makes it work, because… Well… Schrödinger’s equation gave us ψ(r, t)!

The energy and 1/2 factor in Schrödinger’s equation

Schrödinger’s equation, for a particle moving in free space (so we have no external force fields acting on it, so V = 0 and, therefore, the Vψ term disappears) is written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)

We already noted and explained the structural similarity with the ubiquitous diffusion equation in physics:

∂φ(x, t)/∂t = D·∇²φ(x, t), with x = (x, y, z)

The big difference between the wave equation and an ordinary diffusion equation is that the wave equation gives us two equations for the price of one: ψ is a complex-valued function, with a real and an imaginary part which, despite their name, are both equally fundamental, or essential – whatever word you prefer. 🙂 That’s also what the presence of the imaginary unit (i) in the equation tells us. But for the rest it’s the same: the diffusion constant (D) in Schrödinger’s equation is equal to (1/2)·(ħ/meff).
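
Just to put a number on that diffusion constant: taking meff to be the free-electron mass (an assumption, of course), D = (1/2)·(ħ/me) comes out at about 5.8×10⁻⁵ m²/s. A back-of-the-envelope check:

    hbar = 1.0545718e-34   # J·s
    m_e = 9.1093837e-31    # kg (free-electron mass, assuming m_eff ≈ m_e)
    D = 0.5 * hbar / m_e
    print(D)               # -> ~5.8e-5 m²/s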

Why the 1/2 factor? It’s ugly. Think of the following: if we bring the (1/2)·(ħ/meff) factor to the other side, we can write it as meff/(ħ/2). The ħ/2 now appears as a scaling factor in the diffusion constant, just like ħ does in the de Broglie relations ω = E/ħ and k = p/ħ, or in the argument of the wavefunction: θ = (E·t − p∙x)/ħ. Planck’s constant is, effectively, a physical scaling factor. As a physical scaling constant, it usually does two things:

  1. It fixes the numbers (so that’s its function as a mathematical constant).
  2. As a physical constant, it also fixes the physical dimensions. Note, for example, how the 1/ħ factor in ω = E/ħ and k = p/ħ ensures that the ω·t = (E/ħ)·t and k·x = (p/ħ)·x terms in the argument of the wavefunction are both expressed as some dimensionless number, so they can effectively be added together. Physicists don’t like adding apples and oranges.

The question is: why did Schrödinger use ħ/2, rather than ħ, as a scaling factor? Let’s explore the question.

The 1/2 factor

We may want to think that the 1/2 factor just echoes the 1/2 factor in the Uncertainty Principle, which we should think of as a pair of relations: σx·σp ≥ ħ/2 and σE·σt ≥ ħ/2. However, the 1/2 factor in those relations only makes sense because we chose to equate the fundamental uncertainty (Δ) in x, p, E and t with the mathematical concept of the standard deviation (σ), or the half-width, as Feynman calls it in his wonderfully clear exposé on it in one of his Lectures on quantum mechanics (for a summary with some comments, see my blog post on it). We may just as well choose to equate Δ with the full-width of those probability distributions we get for x and p, or for E and t. If we do that, we get σx·σp ≥ ħ and σE·σt ≥ ħ.

It’s a bit like measuring the weight of a person on an old-fashioned (non-digital) bathroom scale with 1 kg marks only: do we say this person is x kg ± 1 kg, or x kg ± 500 g? Do we take the half-width or the full-width as the margin of error? In short, it’s a matter of appreciation, and the 1/2 factor in our pair of uncertainty relations is not there because we’ve got two relations. Likewise, the fact that we can think of Schrödinger’s equation as a pair of relations that, taken together, represent an energy propagation mechanism – quite similar in its structure to Maxwell’s equations for an electromagnetic wave (as shown below) – does not, by itself, tell us whether we should insert that 1/2 factor: either of the two representations below works. It just depends on our definition of the concept of the effective mass.

The 1/2 factor is really a matter of choice, because the rather peculiar – and flexible – concept of the effective mass takes care of it. However, we could define some new effective mass concept, by writing: meffNEW = 2∙meffOLD, and then Schrödinger’s equation would look more elegant:

∂ψ/∂t = i·(ħ/meffNEW)·∇²ψ

Now you’ll want the definition, of course! What is that effective mass concept? Feynman talks at length about it, but his exposé is embedded in a much longer and more general argument on the propagation of electrons in a crystal lattice, which you may not necessarily want to go through right now. So let’s try to answer the question by doing something stupid: let’s substitute ψ in the equation by the elementary wavefunction ψ = a·e^(i·(E·t − p∙x)/ħ), calculate the time derivative and the Laplacian, and see what we get. If we do that, the ∂ψ/∂t = i·(1/2)·(ħ/meff)·∇²ψ equation becomes:

i·a·(E/ħ)·e^(i∙(E·t − p∙x)/ħ) = i·a·(1/2)·(ħ/meff)·(p²/ħ²)·e^(i∙(E·t − p∙x)/ħ)

⇔ E = (1/2)·p²/meff = (1/2)·(m·v)²/meff ⇔ meff = (1/2)·(m/E)·m·v²

⇔ meff = (1/c²)·(m·v²/2) = m·β²/2
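
You can let sympy do this substitution for you. A small sketch of my own – note that I use the physicist’s e^(−i·(E·t − p∙x)/ħ) sign convention here, so that the energy comes out positive:

    from sympy import symbols, exp, I, diff, Eq, solve

    x, t, E, p, hbar, m_eff, a = symbols('x t E p hbar m_eff a', positive=True)
    psi = a * exp(-I * (E*t - p*x) / hbar)   # elementary wavefunction

    # Schrödinger's equation for free space, with the 1/2 factor
    schrodinger = Eq(diff(psi, t), I * (hbar / (2*m_eff)) * diff(psi, x, 2))
    print(solve(schrodinger, E))             # -> [p**2/(2*m_eff)]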

Hence, the effective mass appears in this equation as the equivalent mass of the kinetic energy (K.E.) of the elementary particle that’s being represented by the wavefunction. Now, you may think that sounds good – and it does – but you should note the following:

1. The K.E. = m·v²/2 formula is only correct for non-relativistic speeds. In fact, it’s the kinetic energy formula if, and only if, m ≈ m0. The relativistically correct formula calculates the kinetic energy as the difference between (1) the total energy (which is given by the E = m·c² formula, always) and (2) the rest energy, so we write:

K.E. = E − E0 = mv·c² − m0·c² = γ·m0·c² − m0·c² = m0·c²·(γ − 1)

2. The energy concept in the wavefunction ψ = a·e^(i·(E·t − p∙x)/ħ) is, obviously, the total energy of the particle. For non-relativistic speeds, the kinetic energy is only a very small fraction of the total energy. In fact, using the formula above, you can calculate the ratio between the kinetic and the total energy: you’ll find it’s equal to 1 − 1/γ = 1 − √(1−v²/c²), and its graph goes from 0 to 1.

[Image: graph of the kinetic-to-total energy ratio 1 − 1/γ as a function of β = v/c]
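
A quick numerical sanity check on that ratio:

    import numpy as np

    beta = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
    gamma = 1 / np.sqrt(1 - beta**2)
    print(1 - 1/gamma)   # K.E./E ratio: ~5e-5, 0.005, 0.134, 0.564, 0.859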

Now, if we discard the 1/2 factor, the calculations above yield the following:

i·a·(E/ħ)·e^(i∙(E·t − p∙x)/ħ) = i·a·(ħ/meff)·(p²/ħ²)·e^(i∙(E·t − p∙x)/ħ)

⇔ E = p²/meff = (m·v)²/meff ⇔ meff = (m/E)·m·v²

⇔ meff = m·v²/c² = m·β²

In fact, it is fair to say that both definitions are equally weird, even if the dimensions come out alright: the effective mass is measured in old-fashioned mass units, and the β² or β²/2 factor appears as a sort of correction factor, varying between 0 and 1 (for β²) or between 0 and 1/2 (for β²/2). I prefer the new definition, as it ensures that meff becomes equal to m in the limit for the velocity going to c. In addition, if we bring the ħ/meff or (1/2)∙ħ/meff factor to the other side of the equation, the choice becomes one between a meffNEW/ħ or a 2∙meffOLD/ħ coefficient.

It’s a choice, really. Personally, I think the equation without the 1/2 factor – and, hence, the use of ħ rather than ħ/2 as the scaling factor – looks better, but then you may argue that – if half of the energy of our particle is in the oscillating real part of the wavefunction, and the other half is in the imaginary part – then the 1/2 factor should stay, because it ensures that meff becomes equal to m/2 as v goes to c (or, what amounts to the same, as β goes to 1). But then that’s the argument about whether or not we should have a 1/2 factor because we get two equations for the price of one, like we did for the Uncertainty Principle.

So… What to do? Let’s first ask ourselves whether that derivation of the effective mass actually makes sense. Let’s therefore look at both limit situations.

1. For v going to c (or β = v/c going to 1), we do not have much of a problem: meff just becomes the total mass of the particle that we’re looking at, and Schrödinger’s equation can easily be interpreted as an energy propagation mechanism. Our particle has zero rest mass in that case (we may also say that the concept of a rest mass is meaningless in this situation) and all of the energy – and, therefore, all of the equivalent mass – is kinetic: m = E/c², and the effective mass is just the mass: meff = m·c²/c² = m. Hence, our particle is everywhere and nowhere. In fact, you should note that the concept of velocity itself doesn’t make sense in this rather particular case. It’s like a photon (but note it’s not a photon: we’re talking some theoretical particle here with zero spin and zero rest mass): it’s a wave in its own frame of reference, but as it zips by at the speed of light, we think of it as a particle.

2. Let’s look at the other limit situation. For v going to 0 (or β = v/c going to 0), Schrödinger’s equation no longer makes sense, because the diffusion constant goes to zero, so we get a nonsensical equation. Huh? What’s wrong with our analysis?

Well… I must be honest. We started off on the wrong foot. You should note that it’s hard – in fact, plain impossible – to reconcile our simple a·e^(i·(E·t − p∙x)/ħ) function with the idea of the classical velocity of our particle. Indeed, the classical velocity corresponds to a group velocity, i.e. the velocity of a wave packet, and we just have one wave here: no group. So we get nonsense. You can see the same when equating p to zero in the wave equation: we get another nonsensical equation, because the Laplacian is zero! Check it: if our elementary wavefunction is equal to ψ = a·e^(i·(E/ħ)·t), then the Laplacian is zero.

Hence, our calculation of the effective mass doesn’t make much sense. Why? Because the elementary wavefunction is a theoretical concept only: it may represent some box in space that is uniformly filled with energy, but it cannot represent any actual particle. Actual particles are always some superposition of two or more elementary waves, so then we’ve got a wave packet (as illustrated below) that we can actually associate with some real-life particle moving in space, like an electron in some orbital indeed. 🙂

[Image: animation of a wave packet, i.e. a superposition of elementary waves]

I must credit Oregon State University for the animation above. It’s quite nice: a simple particle in a box model without potential. As I showed on my other page (explaining various models), we must add at least two waves – traveling in opposite directions – to model a particle in a box. Why? Because we represent it by a standing wave, and a standing wave is the sum of two waves traveling in opposite directions.
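
The trigonometry behind that statement is easily verified – two waves of equal amplitude traveling in opposite directions add up to a standing wave:

    from sympy import symbols, cos, simplify

    x, t, k, w = symbols('x t k omega')
    standing = cos(k*x - w*t) + cos(k*x + w*t)
    print(simplify(standing - 2*cos(k*x)*cos(w*t)))   # -> 0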

So, if our derivation above was not very meaningful, then what is the actual concept of the effective mass?

The concept of the effective mass

I am afraid that, at this point, I do have to direct you back to the Grand Master himself for the detail. Let me just try to sum it up very succinctly. If we have a wave packet, there is – obviously – some energy in it, and it’s energy we may associate with the classical concept of the velocity of our particle – because it’s the group velocity of our wave packet. Hence, we have a new energy concept here – and the equivalent mass, of course. Now, Feynman’s analysis – which is Schrödinger’s analysis, really – shows we can write that energy as:

E = meff·v²/2

So… Well… That’s the classical kinetic energy formula. And it’s the very classical one, because it’s not relativistic. 😦 But that’s OK for slow-moving electrons! [Remember the typical (relative) velocity is given by the fine-structure constant: α = β = v/c. So that’s impressive (about 2,188 km per second), but it’s only a tiny fraction of the speed of light, so non-relativistic formulas should work.]

Now, the meff factor in this equation is a function of the various parameters of the model he uses. To be precise, we get the following formula out of his model (which, as mentioned above, is a model of electrons propagating in a crystal lattice):

meff = ħ²/(2·A·b²)

Now, the b in this formula is the spacing between the atoms in the lattice. The A basically represents an energy barrier: to move from one atom to another, the electron needs to get across it. I talked about this in my post on it, and so I won’t explain the graph below – because I did that in that post. Just note that we don’t need that factor 2: there is no reason whatsoever to write E0 + 2·A and E0 − 2·A. We could just define a new A: (1/2)·ANEW = AOLD. The formula for meff then simplifies to ħ²/(2·AOLD·b²) = ħ²/(ANEW·b²). We then get an Eeff = meff·v² formula for the extra energy.

[Image: graph of the energy as a function of k in Feynman’s crystal lattice model]
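
If we take the tight-binding dispersion relation E = E0 − 2·A·cos(k·b) from Feynman’s lattice model and expand it for small k, the ħ²/(2·A·b²) formula drops out, as this little sympy sketch shows:

    from sympy import symbols, cos, series

    k, b, A, E0 = symbols('k b A E_0', positive=True)
    E = E0 - 2*A*cos(k*b)          # Feynman's crystal lattice dispersion
    print(E.series(k, 0, 3))       # -> E0 - 2A + A*b**2*k**2 + O(k**3)
    # Matching the A·b²·k² term with ħ²·k²/(2·m_eff) gives m_eff = ħ²/(2·A·b²)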

Eeff = meff·v²?!? What energy formula is that? Schrödinger must have thought the same thing, and so that’s why we have that ugly 1/2 factor in his equation. However, think about it. Our analysis shows that it is quite straightforward to model energy as a two-dimensional oscillation of mass. In this analysis, the real and the imaginary component of the wavefunction each store half of the total energy of the object, which is equal to E = m·c². Remember, indeed, that we compared it to the energy in an oscillator, which is equal to the sum of kinetic and potential energy, and for which we have the T + U = m·ω0²/2 formula (assuming unit amplitude). But so we have two oscillators here and, hence, twice the energy. Hence, the E = m·c² formula corresponds to E = m·ω0² and, hence, we may think of c as the natural frequency of the vacuum.

Therefore, the Eeff = meff·v² formula makes much more sense. It nicely mirrors Einstein’s E = m·c² formula and, in fact, naturally merges into E = m·c² for v approaching c. But, I admit, it is not so easy to interpret. It’s much easier to just say that the effective mass is the mass of our electron as it appears in the kinetic energy formula, or – alternatively – in the momentum formula. Indeed, Feynman also writes the following formula:

meff·v = p = ħ·k

Now, that is something we easily recognize! 🙂

So… Well… What do we do now? Do we use the 1/2 factor or not?

It would be very convenient, of course, to just stick with tradition and use meff as everyone else uses it: it is just the mass as it appears in whatever medium we happen to look at, which may be a crystal lattice (or a semiconductor), or just free space. In short, it’s the mass of the electron as it appears to us, i.e. as it appears in the (non-relativistic) kinetic energy formula (K.E. = meff·v²/2), in the formula for the momentum of an electron (p = meff·v), or in the wavefunction itself (k = p/ħ = (meff·v)/ħ). In fact, in his analysis of the electron orbitals, Feynman (who just follows Schrödinger here) drops the eff subscript altogether, and so the effective mass is just the mass: meff = m. Hence, the apparent mass of the electron in the hydrogen atom serves as a reference point, and the effective mass in a different medium (such as a crystal lattice, rather than free space or, I should say, a hydrogen atom in free space) will be different.

The thing is: we get the right results out of Schrödinger’s equation, with the 1/2 factor in it. Hence, Schrödinger’s equation works: we get the actual electron orbitals out of it. Hence, Schrödinger’s equation is true – without any doubt. Hence, if we take that 1/2 factor out, then we do need to use the other effective mass concept. We can do that. Think about the actual relation between the effective mass and the real mass of the electron, about which Feynman writes the following: “The effective mass has nothing to do with the real mass of an electron. It may be quite different—although in commonly used metals and semiconductors it often happens to turn out to be the same general order of magnitude: about 0.1 to 30 times the free-space mass of the electron.” Hence, if we write the relation between meff and m as meff = g(m), then the same relation for our meffNEW = 2∙meffOLD becomes meffNEW = 2·g(m), and the “about 0.1 to 30 times” becomes “about 0.2 to 60 times.”

In fact, in the original 1963 edition, Feynman writes that the effective mass is “about 2 to 20 times” the free-space mass of the electron. Isn’t that interesting? I mean… Note that factor 2! If we’d write meff = 2·m, then we’re fine. We can then write Schrödinger’s equation in the following two equivalent ways:

  1. (meff/ħ)·∂ψ/∂t = i·∇²ψ
  2. (2m/ħ)·∂ψ/∂t = i·∇²ψ

Both would be correct, and it explains why Schrödinger’s equation works. So let’s go for that compromise and write Schrödinger’s equation in either of the two equivalent ways. 🙂 The question then becomes: how to interpret that factor 2? The answer to that question is, effectively, related to the fact that we get two waves for the price of one here. So we have two oscillators, so to speak. Now that‘s quite deep, and I will explore that in one of my next posts.

Let me now address the second weird thing in Schrödinger’s equation: the energy factor. I should be more precise: the weirdness arises when solving Schrödinger’s equation. Indeed, in the texts I’ve read, there is this constant switching back and forth between interpreting E as the energy of the atom, versus the energy of the electron. Now, both concepts are obviously quite different, so which one is it really?

The energy factor E

It’s a confusing point—for me, at least, and, hence, I must assume for students as well. Let me indicate, by way of example, how the confusion arises in Feynman’s exposé on the solutions to the Schrödinger equation. Initially, the development is quite straightforward. Replacing V by −e²/r, Schrödinger’s equation becomes:

i·ħ·∂ψ/∂t = −(1/2)·(ħ²/m)·∇²ψ − (e²/r)·ψ

As usual, it is then assumed that a solution of the form ψ(r, t) = e^(−(i/ħ)·E·t)·ψ(r) will work. Apart from the confusion that arises because we use the same symbol, ψ, for two different functions (you will agree that ψ(r, t), a function in two variables, is obviously not the same as ψ(r), a function in one variable only), this assumption is quite straightforward and allows us to re-write the differential equation above as:

−(1/2)·(ħ²/m)·∇²ψ(r) − (e²/r)·ψ(r) = E·ψ(r)

To get this, you just need to actually do that time derivative, noting that the ψ in our equation is now ψ(r), not ψ(r, t). Feynman duly notes this as he writes: “The function ψ(r) must solve this equation, where E is some constant—the energy of the atom.” So far, so good. In one of the (many) next steps, we re-write E as E = ER·ε, with ER = m·e⁴/2ħ². So we just use the Rydberg energy (ER ≈ 13.6 eV) here as a ‘natural’ atomic energy unit. That’s all. No harm in that.

Then all kinds of complicated but legitimate mathematical manipulations follow, in an attempt to solve this differential equation—attempt that is successful, of course! However, after all these manipulations, one ends up with the grand simple solution for the s-states of the atom (i.e. the spherically symmetric solutions):

En = −ER/n², with 1/n² = 1, 1/4, 1/9, 1/16, …
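
You can reproduce these numbers from the fundamental constants. Note that Feynman’s ER = m·e⁴/2ħ² formula uses Gaussian units; in SI units the same Rydberg energy reads ER = me·e⁴/(8·ε0²·h²):

    m_e  = 9.1093837e-31    # kg
    e    = 1.60217663e-19   # C
    eps0 = 8.8541878e-12    # F/m
    h    = 6.62607015e-34   # J·s

    E_R = m_e * e**4 / (8 * eps0**2 * h**2)   # Rydberg energy in joule
    print(E_R / e)                            # -> ~13.6 eV
    for n in (1, 2, 3, 4):
        print(n, -E_R / e / n**2)             # -> −13.6, −3.4, −1.51, −0.85 eV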

So we get: En = −13.6 eV, −3.4 eV, −1.5 eV, etcetera. Now how is that possible? How can the energy of the atom suddenly be negative? More importantly, why is it so tiny in comparison with the rest energy of the proton (which is about 938 mega-electronvolt), or of the electron (0.511 MeV)? The energy levels above are a few eV only, not a few million electronvolt. Feynman answers this question rather vaguely when he states the following:

“There is, incidentally, nothing mysterious about negative numbers for the energy. The energies are negative because when we chose to write V = −e2/r, we picked our zero point as the energy of an electron located far from the proton. When it is close to the proton, its energy is less, so somewhat below zero. The energy is lowest (most negative) for n = 1, and increases toward zero with increasing n.”

We picked our zero point as the energy of an electron located far away from the proton? But we were talking about the energy of the atom all along, right? You’re right. Feynman doesn’t answer the question. The solution is OK – well, sort of, at least – but, in one of those mathematical complications, there is a ‘normalization’ – a choice of some constant that pops up when combining and substituting stuff – that is not so innocent. To be precise, at some point, Feynman substitutes the ε variable for the square of another variable – to be even more precise, he writes: ε = −α². He then performs some more hat tricks – all legitimate, no doubt – and finds that the only sensible solutions to the differential equation require α to be equal to 1/n, which immediately leads to the above-mentioned solution for our s-states.

The real answer to the question is given somewhere else. In fact, Feynman casually gives us an explanation in one of his very first Lectures on quantum mechanics, where he writes the following:

“If we have a “condition” which is a mixture of two different states with different energies, then the amplitude for each of the two states will vary with time according to an equation like a·e^(iω·t), with ħ·ω = E0 = m·c². Hence, we can write the amplitude for the two states, for example, as:

e^(i·(E1/ħ)·t) and e^(i·(E2/ħ)·t)

And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn’t make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount—say, by the amount A—then the amplitudes in the two states would, from his point of view, be

e^(i·(E1+A)·t/ħ) and e^(i·(E2+A)·t/ħ)

All of his amplitudes would be multiplied by the same factor e^(i·(A/ħ)·t), and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want. For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren’t relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy Ms·c², where Ms is the mass of all the separate pieces—the nucleus and the electrons—which is, of course, different from the mass of the atom. For other problems, it may be useful to subtract from all energies the amount Mg·c², where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn’t make any difference, provided we shift all the energies in a particular calculation by the same constant.”

It’s a rather long quotation, but it’s important. The key phrase here is, obviously, the following: “For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom.” So that’s what he’s doing when solving Schrödinger’s equation. However, I should make the following point here: if we shift the origin of our energy scale, it does not make any difference in regard to the probabilities we calculate, but it obviously does make a difference in terms of our wavefunction itself. To be precise, its density in time will be very different. Hence, if we’d want to give the wavefunction some physical meaning – which is what I’ve been trying to do all along – it does make a huge difference. When we leave the rest mass of all of the pieces in our system out, we can no longer pretend we capture their energy.
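
Feynman’s claim about the probabilities is easily verified numerically – here is a small sketch (using the standard e^(−i·E·t/ħ) convention): shifting both energies by the same (large) constant A leaves the interference pattern untouched, even though the amplitudes themselves oscillate much faster:

    import numpy as np

    hbar = 1.0
    E1, E2, A = 1.0, 2.5, 100.0
    t = np.linspace(0, 10, 1000)

    amp         = np.exp(-1j*E1*t/hbar) + np.exp(-1j*E2*t/hbar)
    amp_shifted = np.exp(-1j*(E1+A)*t/hbar) + np.exp(-1j*(E2+A)*t/hbar)

    print(np.allclose(np.abs(amp)**2, np.abs(amp_shifted)**2))   # -> True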

This is a rather simple observation, but one that has profound implications in terms of our interpretation of the wavefunction. Personally, I admire the Great Teacher’s Lectures, but I am really disappointed that he doesn’t pay more attention to this. 😦

The Essence of Reality

I know it’s a crazy title. It has no place in a physics blog, but then I am sure this article will go elsewhere. […] Well… […] Let me be honest: it’s probably gonna go nowhere. Whatever. I don’t care too much. My life is happier than Wittgenstein’s. 🙂

My original title for this post was: discrete spacetime. That was somewhat less offensive but, while being less offensive, it suffered from the same drawback: the terminology was ambiguous. The commonly accepted term for discrete spacetime is the quantum vacuum. However, because I am just an arrogant bastard trying to establish myself in this field, I am telling you that term is meaningless. Indeed, wouldn’t you agree that, if the quantum vacuum is a vacuum, then it’s empty? So it’s nothing. Hence, it cannot have any properties and, therefore, it cannot be discrete – or continuous, or whatever. We need to put stuff in it to make it real.

Therefore, I’d rather distinguish mathematical versus physical space. Of course, you are smart, and so now you’ll say that my terminology is as bad as that of the quantum vacuumists. And you are right. However, this is a story that I am writing, and so I will write it the way I want to write it. 🙂 So where were we? Spacetime! Discrete spacetime.

Yes. Thank you! Because relativity tells us we should think in terms of four-vectors, we should not talk about space but about spacetime. Hence, we should distinguish mathematical spacetime from physical spacetime. So what’s the definitional difference?

Mathematical spacetime is just what it is: a coordinate space – Cartesian, polar, or whatever – which we define by choosing a representation, or a basis. All the other elements of the set are just some algebraic combination of the basis. Mathematical space involves numbers. They don’t – let me emphasize that: they do not! – involve the physical dimensions of the variables. Always remember: math shows us the relations, but it doesn’t show us the stuff itself. Think of it: even if we may refer to the coordinate axes as time, or distance, we do not really think of them as something physical. In math, the physical dimension is just a label. Nothing more. Nothing less.

In contrast, physical spacetime is filled with something – with waves, or with particles – so it’s spacetime filled with energy and/or matter. In fact, we should analyze matter and energy as essentially the same thing, and please do carefully re-read what I wrote: I said they are essentially the same. I did not say they are the same. Energy and mass are equivalent, but not quite the same. I’ll tell you what that means in a moment.

These waves, or particles, come with mass, energy and momentum. There is an equivalence between mass and energy, but they are not the same. There is a twist – literally (only after reading the next paragraphs will you realize how literally): even when choosing our time and distance units such that c is numerically equal to 1 – e.g. when measuring distance in light-seconds (or time in light-meters), or when using Planck units – the physical dimension of the c² factor in Einstein’s E = m·c² equation doesn’t vanish: the physical dimension of energy is kg·m²/s².

Using Newton’s force law (1 N = 1 kg·m/s²), we can easily see this rather strange unit is effectively equivalent to the energy unit, i.e. the joule (1 J = 1 kg·m²/s² = 1 (N·s²/m)·m²/s² = 1 N·m), but that’s not the point. The (m/s)² factor – i.e. the square of the velocity dimension – reflects the following:

  1. Energy is nothing but mass in motion. To be precise, it’s oscillating mass. [And, yes, that’s what string theory is all about, but I didn’t want to mention that. It’s just terminology once again: I prefer to say ‘oscillating’ rather than ‘vibrating’. :-)]
  2. The rapidly oscillating real and imaginary component of the matter-wave (or wavefunction, we should say) each capture half of the total energy of the object: E = m·c².
  3. The oscillation is an oscillation of the mass of the particle (or wave) that we’re looking at.

In the mentioned publication, I explore the structural similarity between:

  1. The oscillating electric and magnetic field vectors (E and B) that represent the electromagnetic wave, and
  2. The oscillating real and imaginary part of the matter-wave.

The story is simple or complicated, depending on what you know already, but it can be told in an obnoxiously easy way. Note that the associated force laws do not differ in their structure:

F = (1/(4πε0))·(q1·q2/r²)

F = −G·(m1·m2/r²)

The only difference is the dimension of m versus q: mass – the measure of inertia – versus charge. Mass comes in one color only, so to speak: it’s always positive. In contrast, electric charge comes in two colors: positive and negative. You can guess what comes next, but I won’t talk about that here. 🙂 Just note the absolute distance between two charges (with the same or the opposite sign) is twice the distance between 0 and 1, which must explain the rather mysterious factor 2 I get for the Schrödinger equation for the electromagnetic wave (but I still need to show how that works out exactly).

The point is: remembering that the physical dimension of the electric field is N/C (newton per coulomb, i.e. force per unit charge), it should not come as a surprise that we find that the physical dimension of the components of the matter-wave is N/kg: newton per kg, i.e. force per unit mass. For the detail, I’ll refer you to that article of mine (and, because I know you will not want to work your way through it, let me tell you it’s the last chapter that tells you how to do the trick).

So where were we? Strange. I actually just wanted to talk about discrete spacetime here, but I realize I’ve already dealt with all of the metaphysical questions you could possibly have, except the (existential) Who Am I? question, which I cannot answer on your behalf. 🙂

I wanted to talk about physical spacetime, so that’s sanitized mathematical space plus something. A date without logistics. Our mind is a lazy host, indeed.

Reality is the guest that brings all of the wine and the food to the party.

In fact, it’s a guest that brings everything to the party: you – the observer – just need to set the time and the place. In fact, in light of what Kant – and many other eminent philosophers – wrote about space and time being constructs of the mind, that’s another statement which you should interpret literally. So physical spacetime is spacetime filled with something – like a wave, or a field. So what does that look like? Well… Frankly, I don’t know! But let me share my idea of it.

Because of the unity of Planck’s quantum of action (ħ ≈ 1.0545718×10⁻³⁴ N·m·s), a wave traveling in spacetime might be represented as a set of discrete spacetime points and the associated amplitudes, as illustrated below. [I just made an easy Excel graph. Nothing fancy.]

[Image: Excel graph of a wave as a set of discrete spacetime points with associated amplitudes]

The space in-between the discrete spacetime points, which are separated by the Planck time and distance units, is not real. It is plain nothingness, or – if you prefer that term – the space in-between is mathematical space only: a figment of the mind – nothing real, because quantum theory tells us that the real, physical, space is discontinuous.

Why is that so? Well… Smaller time and distance units cannot exist, because we would not be able to pack Planck’s quantum of action in them: a box of the Planck scale, with ħ in it, is just a black hole and, hence, nothing could go from here to there, because all would be trapped. Of course, now you’ll wonder what it means to ‘pack’ Planck’s quantum of action in a Planck-scale spacetime box. Let me try to explain this. It’s going to be a rather rudimentary explanation and, hence, it may not satisfy you. But then the alternative is to learn more about black holes and the Schwarzschild radius, which I warmly recommend for two equivalent reasons:

  1. The matter is actually quite deep, and I’d recommend you try to fully understand it by reading some decent physics course.
  2. You’d stop reading this nonsense.

If, despite my warning, you would continue to read what I write, you may want to note that we could also use the logic below to define Planck’s quantum of action, rather than using it to define the Planck time and distance unit. Everything is related to everything in physics. But let me now give the rather naive explanation itself:

  • Planck’s quantum of action (ħ ≈ 1.0545718×10⁻³⁴ N·m·s) is the smallest thing possible. It may express itself as some momentum (whose physical dimension is N·s) over some distance (Δs), or as some amount of energy (whose dimension is N·m) over some time (Δt).
  • Now, energy is an oscillation of mass (I will repeat that a couple of times, and show you the detail of what that means in the last chapter) and, hence, ħ must necessarily express itself both as momentum as well as energy over some time and some distance. Hence, it is what it is: some force over some distance over some time. This reflects the physical dimension of ħ, which is the product of force, distance and time. So let’s assume some force ΔF, some distance Δs, and some time Δt, so we can write ħ as ħ = ΔF·Δs·Δt.
  • Now let’s pack that into a traveling particle – like a photon, for example – which, as you know (and as I will show in this publication) is, effectively, just some oscillation of mass, or an energy flow. Now let’s think about one cycle of that oscillation. How small can we make it? In spacetime, I mean.
  • If we decrease Δs and/or Δt, then ΔF must increase, so as to ensure the integrity (or unity) of ħ as the fundamental quantum of action. Note that the increase in the momentum (ΔF·Δt) and the energy (ΔF·Δs) is proportional to the decrease in Δt and Δs. Now, in our search for the Planck-size spacetime box, we will obviously want to decrease Δs and Δt simultaneously.
  • Because nothing can exceed the speed of light, we may want to use equivalent time and distance units, so the numerical value of the speed of light is equal to 1 and all velocities become relative velocities. If we now assume our particle is traveling at the speed of light – so it must be a photon, or a (theoretical) matter-particle with zero rest mass (which is something different than a photon) – then our Δs and Δt should respect the following condition: Δs/Δt = c = 1.
  • Now, when Δs = 1.6162×10⁻³⁵ m and Δt = 5.391×10⁻⁴⁴ s, we find that Δs/Δt = c, but ΔF = ħ/(Δs·Δt) = (1.0545718×10⁻³⁴ N·m·s)/[(1.6162×10⁻³⁵ m)·(5.391×10⁻⁴⁴ s)] ≈ 1.21×10⁴⁴ N. That force is monstrously huge. Think of it: because of gravitation, a mass of 1 kg in our hand, here on Earth, will exert a force of 9.8 N. Now note the exponent in that 1.21×10⁴⁴ number. [See the quick numerical check below this list.]
  • If we multiply that monstrous force with Δs – which is extremely tiny – we get the Planck energy: (1.6162×10⁻³⁵ m)·(1.21×10⁴⁴ N) ≈ 1.956×10⁹ joule. Despite the tininess of Δs, we still get a fairly big value for the Planck energy. Just to give you an idea: it’s the energy you’d get out of burning some 60 liters of gasoline—or the mileage you’d get out of 16 gallons of fuel! In fact, the equivalent mass of that energy, packed in such a tiny space, makes it a black hole.
  • In short, the conclusion is that our particle can’t move (or, thinking of it as a wave, that our wave can’t wave) because it’s caught in the black hole it creates by its own energy: so the energy can’t escape and, hence, it can’t flow. 🙂
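
The numerical check promised above is trivial to do:

    hbar = 1.0545718e-34   # N·m·s
    l_P  = 1.6162e-35      # Planck length (m)
    t_P  = 5.391e-44       # Planck time (s)

    F = hbar / (l_P * t_P)
    print(F)               # -> ~1.21e44 N
    print(F * l_P)         # -> ~1.96e9 J, i.e. the Planck energy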

Of course, you will now say that we could imagine half a cycle, or a quarter of that cycle. And you are right: we can surely imagine that, but we get the same thing: to respect the unity of ħ, we’ll then have to pack it into half a cycle, or a quarter of a cycle, which just means the energy of the whole cycle is 2·ħ, or 4·ħ. However, our conclusion still stands: we won’t be able to pack that half-cycle, or that quarter-cycle, into something smaller than the Planck-size spacetime box, because it would make it a black hole, and so our wave wouldn’t go anywhere, and the idea of our wave itself – or the particle – just doesn’t make sense anymore.

This brings me to the final point I’d like to make here. When Maxwell or Einstein, or the quantum vacuumists – or I 🙂 – say that the speed of light is just a property of the vacuum, then that’s correct and not correct at the same time. First, we should note that, if we say that, we might also say that ħ is a property of the vacuum. All physical constants are. Hence, it’s a pretty meaningless statement. Still, it’s a statement that helps us to understand the essence of reality. Second, and more importantly, we should dissect that statement. The speed of light combines two very different aspects:

  1. It’s a physical constant, i.e. some fixed number that we will find to be the same regardless of our reference frame. As such, it’s as essential as those immovable physical laws that we find to be the same in each and every reference frame.
  2. However, its physical dimension is the ratio of the distance and the time unit: m/s. We may choose other time and distance units, but we will still combine them in that ratio. These two units represent the two dimensions in our mind that – as Kant noted – structure our perception of reality: the temporal and spatial dimension.

Hence, we cannot just say that c is ‘just a property of the vacuum’. In our definition of c as a velocity, we mix reality – the ‘outside world’ – with our perception of it. It’s unavoidable. Frankly, while we should obviously try – and we should try very hard! – to separate what’s ‘out there’ versus ‘how we make sense of it’, it is and remains an impossible job because… Well… When everything is said and done, what we observe ‘out there’ is just that: it’s just what we – humans – observe. 🙂

So, when everything is said and done, the essence of reality consists of four things:

  1. Nothing
  2. Mass, i.e. something, or not nothing
  3. Movement (of something), from nowhere to somewhere.
  4. Us: our mind. Or God’s Mind. Whatever. Mind.

The first two are like yin and yang, or manicheism, or whatever dualistic religious system. As for Movement and Mind… Hmm… In some very weird way, I feel they must be part of one and the same thing as well. 🙂 In fact, we may also think of those four things as:

  1. 0 (zero)
  2. 1 (one), or as some sine or a cosine, which is anything in-between 0 and 1.
  3. Well… I am not sure! I can’t really separate point 3 and point 4, because they combine point 1 and point 2.

So we don’t have a quaternity, right? We do have a Trinity here, don’t we? […] Maybe. I won’t comment, because I think I just found Unity here. 🙂

The wavefunction and relativity

When reading about quantum theory, and wave mechanics, you will often encounter the rather enigmatic statement that the Schrödinger equation is not relativistically correct. What does that mean?

In my previous post on the wavefunction and relativity, I boldly claimed that relativity theory had been around for quite a while when the young Comte Louis de Broglie wrote his short groundbreaking PhD thesis, back in 1924. Moreover, it is more than likely that he suggested the θ = ω∙t – k∙x = (E∙t – p∙x)/ħ formula for the argument of the wavefunction exactly because relativity theory had already established the invariance of the four-vector product pμxμ = E∙t – p∙x = pμ′xμ′ = E′∙t′ – p′∙x′. [Note that Planck’s constant, as a physical constant, should obviously not depend on the reference frame either. Hence, if the E∙t – p∙x product is invariant, so is (E∙t – p∙x)/ħ.] However, I didn’t prove that, and I didn’t relate it to Schrödinger’s equation. Hence, let’s explore the matter somewhat further here.

I don’t want to do the academic thing, of course – and that is to prove the invariance of the four-vector dot product. If you want such proof, let me just give you a link to some course material that does just that. Here, I will just summarize the conclusions of such course material:

  1. Four-vector dot products – like xμxμ = xμ², pμpμ = pμ², the spacetime interval s² = (Δr)² – (Δt)², or our pμxμ product here – are invariant under a Lorentz transformation (aka a Lorentz boost). To be formally correct, I should write xμxμ, pμpμ, and pμxμ, because the product multiplies a row vector with a column vector, which is what the sub- and superscripts indicate.
  2. Four-vector dot products are referred to as Lorentz scalars.
  3. When derivatives are involved, we must use the so-called four-gradient, which is denoted by ∂ or ∂μ and defined as:

∂ = ∂μ = (∂/∂t, –∇) = (∂/∂t, –∂/∂x, –∂/∂y, –∂/∂z)

Applying the four-gradient vector operator to the wavefunction, we get:

∂μψ = (∂ψ/∂t, –∇ψ) = (∂ψ/∂t, –∂ψ/∂x, –∂ψ/∂y, –∂ψ/∂z)

We wrote about that in the context of electromagnetic theory (see, for instance, my post on the relativistic transformation of fields), so I won’t dwell on it here. Note, however, that that’s the weak spot in Schrödinger’s equation: it’s good, but not good enough. However, in the context in which it’s being used – i.e. to calculate electron orbitals – the approximation works just fine, so you shouldn’t worry about it. The point to remember is that the wavefunction itself is relativistically correct. 🙂
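
For what it’s worth, here is a little sympy sketch of the invariance claim for the E∙t – p∙x product (one space dimension, c = 1): boost both four-vectors and check that the product doesn’t change:

    from sympy import symbols, sqrt, simplify

    E, p, t, x, v = symbols('E p t x v')
    g = 1 / sqrt(1 - v**2)                  # Lorentz factor (c = 1)

    # Lorentz boost of (E, p) and (t, x)
    E2, p2 = g*(E - v*p), g*(p - v*E)
    t2, x2 = g*(t - v*x), g*(x - v*t)

    print(simplify(E2*t2 - p2*x2 - (E*t - p*x)))   # -> 0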

Of course, it is always good to work through a simple example, so let’s do that here. Let me first remind you of that transformation we presented a couple of times already, i.e. how to calculate the argument of the wavefunction in the reference frame of the particle itself, i.e. the inertial frame. It goes like this: when we measure all variables in Planck units, the physical constants ħ and c are numerically equal to one, and we can re-write the argument of the wavefunction as follows:

  1. ħ = 1 ⇒ θ = (E∙t – p∙x)/ħ = E∙t – p∙x = Ev∙t − (mv·v)∙x
  2. Ev = E0/√(1−v²) and mv = m0/√(1−v²) ⇒ θ = [E0/√(1−v²)]∙t – [m0·v/√(1−v²)]∙x
  3. c = 1 ⇒ m0 = E0 ⇒ θ = [E0/√(1−v²)]∙t – [E0·v/√(1−v²)]∙x = E0∙(t − v∙x)/√(1−v²)

⇔ θ = E0∙t’ = E’·t’, with t’ = (t − v∙x)/√(1−v²)

The t’ in the θ = E0∙t’ expression is, obviously, the proper time as measured in the inertial reference frame. Needless to say, v is the relative velocity, which is usually denoted by β. Note that this derivation uses the numerical m0 = E0 identity, which emerges when using natural time and distance units (c = 1). However, while mass and energy are equivalent, they are different physical concepts and, hence, they still have different physical dimensions. It is interesting to spell out what happens with the dimensions here:

  • The dimension of Ev∙t and/or E0∙t’ is (N∙m)∙s, i.e. the dimension of (physical) action.
  • The dimension of the (mv·v)∙x term must be the same, but how is that possible? Despite us using natural units – so the value of v is now some number between 0 and 1 – velocity is what it is: velocity. Hence, its dimension is m/s. Hence, the dimension of the mv·v∙x term is kg∙(m/s)∙m = (N∙s²/m)∙(m/s)∙m = N∙m∙s.
  • Hence, the dimension of the [E0·v/√(1−v²)]∙x term only makes sense if we remember the m²/s² dimension of the c² factor in the E = m∙c² equivalence relation: the substituted E0 effectively carries a 1/c² factor, whose dimension is s²/m². We write: [E0·v∙x] = [E0]∙[v]∙[x] = [(N∙m)∙(s²/m²)]∙(m/s)∙m = N∙m∙s. In short, when doing the mv = Ev and/or m0 = E0 substitution, we should not get rid of the physical 1/c² dimension.

That should be clear enough. Let’s now do the example. The rest energy of an electron, expressed in Planck units, is EeP = Ee/EP = (0.511×10⁶ eV)/(1.22×10²⁸ eV) = 4.181×10⁻²³. That is a very tiny fraction. However, the numerical value of the Planck time unit is even smaller: about 5.4×10⁻⁴⁴ seconds. Hence, as a frequency is expressed as the number of cycles (or, as an angular frequency, as the number of radians) per time unit, the natural frequency of the wavefunction of the electron is 4.181×10⁻²³ rad per Planck time unit, so that’s a frequency in the order of [4.181×10⁻²³/(2π)]/(5.4×10⁻⁴⁴ s) ≈ 1×10²⁰ cycles per second (or hertz). The relevant calculations are given hereunder.

For the electron:

  • Rest energy (in joule): 8.1871×10⁻¹⁴
  • Planck energy (in joule): 1.9562×10⁹
  • Rest energy in Planck units: 4.1853×10⁻²³
  • Frequency in cycles per second: 1.2356×10²⁰
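
Instead of Excel, a few lines of Python reproduce the table:

    import math

    E_e = 8.1871e-14    # electron rest energy (J)
    E_P = 1.9562e9      # Planck energy (J)
    t_P = 5.391e-44     # Planck time (s)

    ratio = E_e / E_P                   # rest energy in Planck units
    f = (ratio / (2 * math.pi)) / t_P   # cycles per second
    print(ratio, f)                     # -> ~4.1853e-23, ~1.2356e20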

Because of these rather incredible numbers (like 10⁻³¹ or 10²⁰), the calculations are not always very obvious, but the logic is clear enough: a higher rest mass increases the (angular) frequency of the real and imaginary part of the wavefunction, and gives them a much higher density in spacetime. How does a frequency like 1.2356×10²⁰ Hz compare to, say, the frequency of gamma rays? The answer may surprise you: they are of the same order, as is their energy! 🙂 However, their nature, as a wave, is obviously very different: gamma rays are an electromagnetic wave, so they involve an E and B vector, rather than the two components of the matter-wave. As an energy propagation mechanism, they are structurally similar, though, as I showed in my previous post.

Now, the typical speed of an electron is given by the fine-structure constant (α), which is (also) equal to the (relative) speed of an electron (for the many interpretations of the fine-structure constant, see my post on it). So we write:

α = β = v/c

More importantly, we can use this formula to calculate that speed, which is done hereunder. As you can see, while the typical electron speed is quite impressive (about 2,188 km per second), it is only a fraction of the speed of light and, therefore, the Lorentz factor is still equal to one for all practical purposes. Therefore, its speed adds hardly anything to its energy.

 

  • Fine-structure constant (α): 0.007297353
  • Typical speed of the electron: 2.1877×10⁶ m/s ≈ 2,188 km/s
  • Lorentz factor (γ): 1.0000266267

But I admit it does have momentum now and, hence, the p∙x term in the θ = E∙t – p∙x expression comes into play. What is its momentum? That’s calculated below. Remember we calculate everything in Planck units here!

  • Energy of the electron moving at α (in Planck units): 4.1854×10⁻²³
  • Mass of the electron moving at α (in Planck units): 4.1854×10⁻²³
  • Momentum p = m·v = m·α (in Planck units): 3.0542×10⁻²⁵

The momentum is tiny, but it’s real. Also note the increase in its energy. Now, when substituting x for x = v·t, we get the following formula for the argument of our wavefunction:

θ = E·t – p·x = E·t − p·v·t = mv·t − mv·v·v·t = mv·(1 − v²)·t

Now, how does that compare to our θ = E0∙t’ = E’·t’ expression? Well… The value of the two coefficients is calculated below. You can, effectively, see it hardly matters.

  • mv·(1 − v²): 4.1852×10⁻²³
  • Rest energy in Planck units: 4.1853×10⁻²³
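
Again, the Excel numbers are easily reproduced:

    alpha = 0.007297353          # fine-structure constant = beta
    E0 = 4.1853e-23              # rest energy in Planck units

    gamma = 1 / (1 - alpha**2)**0.5
    Ev = E0 * gamma              # energy of the moving electron
    print(Ev)                    # -> ~4.1854e-23
    print(Ev * alpha)            # momentum p = m·v -> ~3.0542e-25
    print(Ev * (1 - alpha**2))   # the mv·(1 − v²) coefficient -> ~4.1852e-23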

With that, we are finally ready to use the non-relativistic Schrödinger equation in a non-relativistic way, i.e. we can start calculating electron orbitals with it now, which is what we did in one of my previous posts, but I will re-visit that post soon – and provide some extra commentary! 🙂

The Poynting vector for the matter-wave

In my various posts on the wavefunction – which I summarized in my e-book – I wrote at length on the structural similarities between the matter-wave and the electromagnetic wave. Look at the following images once more:

[Images: the traveling electric field vector of an electromagnetic wave (left) and a quantum-mechanical wavefunction (right)]

Both are the same, and then they are not. The illustration on the right-hand side is a regular quantum-mechanical wavefunction, i.e. an amplitude wavefunction: the x-axis represents time, so we are looking at the wavefunction at some particular point in space. [Of course, we could just switch the dimensions and it would all look the same.] The illustration on the left-hand side looks similar, but it is not an amplitude wavefunction. The animation shows how the electric field vector (E) of an electromagnetic wave travels through space. Its shape is the same. So it is the same function. Is it also the same reality?

Yes and no. The two energy propagation mechanisms are structurally similar. The key difference is that, in electromagnetics, we get two waves for the price of one. Indeed, the animation above does not show the accompanying magnetic field vector (B), which is equally essential. But, for the rest, Schrödinger’s equation and Maxwell’s equation model a similar energy propagation mechanism, as shown below.

[Image: the energy propagation mechanism of the matter-wave compared to that of the electromagnetic wave]

They have to, as the force laws are similar too:

F = (1/(4πε0))·(q1·q2/r²)

F = −G·(m1·m2/r²)

The only difference is that mass comes in one color only, so to speak: it’s always positive. In contrast, electric charge comes in two colors: positive and negative. You can now guess what comes next: quantum chromodynamics. But I won’t write about that here, because I haven’t studied it yet. I won’t repeat what I wrote elsewhere, but I want to make good on one promise, and that is to develop the idea of the Poynting vector for the matter-wave. So let’s do that now. Let me first remind you of the basic ideas, however.

Basics

The animation below shows the two components of the archetypal wavefunction, i.e. the sine and cosine:

[Image: animation of the sine and cosine as projections of a point moving around the unit circle]

Think of the two oscillations as (each) packing half of the total energy of a particle (like an electron or a photon, for example). Look at how the sine and cosine mutually feed into each other: the sine reaches zero as the cosine reaches plus or minus one, and vice versa. Look at how the moving dot accelerates as it goes to the center point of the axis, and how it decelerates when reaching the end points, so as to switch direction. The two functions are exactly the same function, but for a phase difference of 90 degrees, i.e. a right angle. Now, I love engines, and so it makes me think of a V-2 engine with the pistons at a 90-degree angle. Look at the illustration below. If there is no friction, we have a perpetual motion machine: it would store energy in its moving parts, while not requiring any external energy to keep it going.

[Image: photo of a two-cylinder engine with the pistons at a 90-degree angle]

If it is easier for you, you can replace each piston by a physical spring, as I did below. However, I should learn how to make animations myself, because the image below does not capture the phase difference. Hence, it does not show how the real and imaginary part of the wavefunction mutually feed into each other, which is (one of the reasons) why I like the V-2 image much better. 🙂

[Image: two oscillating springs as a stand-in for the two pistons]

The point to note is: all of the illustrations above are true representations – whatever that means – of (idealized) stationary particles, and both for matter (fermions) as well as for force-carrying particles (bosons). Let me give you an example. The (rest) energy of an electron is tiny: about 8.2×10⁻¹⁴ joule. Note the minus 14 exponent: that’s an unimaginably small amount. It sounds better when using the more commonly used electronvolt scale for the energy of elementary particles: 0.511 MeV. Despite its tiny mass (or energy, I should say, but then mass and energy are directly proportional to each other: the proportionality coefficient is given by the E = m·c² formula), the frequency of the matter-wave of the electron is of the order of 1×10²⁰ = 100,000,000,000,000,000,000 cycles per second. That’s an unimaginably large number and – as I will show when we get there – it’s not because the second is a huge unit at the atomic or sub-atomic scale.

We may refer to this as the natural frequency of the electron. Higher rest masses increase the frequency and, hence, give the wavefunction an even higher density in spacetime. Let me summarize things in a very simple way:

  • The (total) energy that is stored in an oscillating spring is the sum of the kinetic and potential energy (T and U) and is given by the following formula: E = T + U = a0²·m·ω0²/2. The a0 factor is the maximum amplitude – which depends on the initial conditions, i.e. the initial pull or push. The ω0 in the formula is the natural frequency of our spring, which is a function of the stiffness of the spring (k) and the mass on the spring (m): ω0² = k/m. [See the little check below this list.]
  • Hence, the total energy that’s stored in two springs is equal to a0²·m·ω0².
  • The similarity between the E = a0²·m·ω0² and the E = m·c² formula is much more than just striking. It is fundamental: the two oscillating components of the wavefunction each store half of the total energy of our particle.
  • To emphasize the point: ω0 = √(k/m) is, obviously, a characteristic of the system. Likewise, c = √(E/m) is just the same: a property of spacetime.
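
The little check announced in the list above – the total energy of one oscillating component is constant in time and equal to a0²·m·ω0²/2:

    from sympy import symbols, cos, diff, simplify

    t, a0, m, w0 = symbols('t a_0 m omega_0', positive=True)
    x = a0 * cos(w0 * t)            # one oscillating component
    T = m * diff(x, t)**2 / 2       # kinetic energy
    U = m * w0**2 * x**2 / 2        # potential energy
    print(simplify(T + U))          # -> a_0**2*m*omega_0**2/2, independent of t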

Of course, the key question is: what is it that is oscillating here? In our V-2 engine, we have the moving parts. Now what exactly is moving when it comes to the wavefunction? The easy answer is: it’s the same thing. The V-2 engine, or our springs, store energy because of the moving parts. Hence, energy is equivalent only to mass that moves, and the frequency of the oscillation obviously matters, as evidenced by the E = a0²·m·ω0²/2 formula for the energy in an oscillating spring. Mass. Energy is moving mass. To be precise, it’s oscillating mass. Think of it: mass and energy are equivalent, but they are not the same. That’s why the dimension of the c² factor in Einstein’s famous E = m·c² formula matters. The equivalent energy of a 1 kg object is approximately 9×10¹⁶ joule. To be precise, it is the following monstrous number:

89,875,517,873,681,764 kg·m2/s2

Note its dimension: the joule is the product of the mass unit and the square of the velocity unit. So that, then, is, perhaps, the true meaning of Einstein’s famous formula: energy is not just equivalent to mass. It’s equivalent to mass that’s moving. In this case, an oscillating mass. But we should explore the question much more rigorously, which is what I do in the next section. Let me warn you: it is not an easy matter and, even if you are able to work your way through all of the other material below in order to understand the answer, I cannot promise you that the answer will satisfy you entirely. However, it will surely help you to phrase the question.

The Poynting vector for the matter-wave

For the photon, we have the electric and magnetic field vectors E and B. The boldface highlights the fact that these are vectors indeed: they have a direction as well as a magnitude. Their magnitude has a physical dimension. The dimension of E is straightforward: the electric field strength (E) is a quantity expressed in newton per coulomb (N/C), i.e. force per unit charge. This follows straight from the F = q·E force relation.

The dimension of B is much less obvious: the magnetic field strength (B) is measured in (N/C)/(m/s) = (N/C)·(s/m). That’s what comes out of the F = q·v×B force relation. Just to make sure you understand: v×B is a vector cross product, and yields another vector, which is given by the following formula:

a×b = |a×b|·n = |a|·|b|·sinφ·n

The φ in this formula is the angle between a and b (in the plane containing them) and, hence, is always some angle between 0 and π. The n is the unit vector that is perpendicular to the plane containing a and b in the direction given by the right-hand rule. The animation below shows how it works for some rather special angles:

Cross_product

We may also need the vector dot product, so let me quickly give you that formula too. The vector dot product yields a scalar given by the following formula:

a•b = |a|·|b|·cosφ
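If you want to play with these two formulas, here is a small numpy sketch – the two vectors are arbitrary picks – verifying that |a×b| = |a|·|b|·sinφ and a•b = |a|·|b|·cosφ:

```python
import numpy as np

# Verify the cross and dot product formulas for two arbitrary 3D vectors.
a = np.array([1.0, 2.0, 0.5])
b = np.array([-0.5, 1.0, 2.0])

na, nb = np.linalg.norm(a), np.linalg.norm(b)
phi = np.arccos(a.dot(b) / (na * nb))  # angle between a and b, in [0, π]

print(np.linalg.norm(np.cross(a, b)), na * nb * np.sin(phi))  # |a×b| twice
print(a.dot(b), na * nb * np.cos(phi))                        # a•b twice
```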

Let’s get back to the F = q·v×B relation. A dimensional analysis shows that the dimension of B must involve the reciprocal of the velocity dimension in order to ensure the dimensions come out alright:

[F]= [q·v×B] = [q]·[v]·[B] = C·(m/s)·(N/C)·(s/m) = N

We can derive the same result in a different way. First, note that the magnitude of B will always be equal to E/c (except when none of the charges is moving, so B is zero), which implies the same:

[B] = [E/c] = [E]/[c] = (N/C)/(m/s) = (N/C)·(s/m)

Finally, the Maxwell equation we used to derive the wavefunction of the photon was ∂E/∂t = c2∇×B, which also tells us the physical dimension of B must involve that s/m factor. Otherwise, the dimensional analysis would not work out:

  1. [∂E/∂t] = (N/C)/s = N/(C·s)
  2. [c2∇×B] = [c2]·[∇×B] = (m2/s2)·[(N/C)·(s/m)]/m = N/(C·s)

This analysis involves the curl operator ∇×, which is a rather special vector operator. It gives us the (infinitesimal) rotation of a three-dimensional vector field. You should look it up so you understand what we’re doing here.

Now, when deriving the wavefunction for the photon, we gave you a purely geometric formula for B:

B = ex×E = i·E

Now I am going to ask you to be extremely flexible: wouldn't you agree that the B = E/c and the B = ex×E = i·E formulas, jointly, only make sense if we'd assign the s/m dimension to ex and/or to i? I know you'll think that's nonsense because you've learned to think of the ex× and/or i· operation as a rotation only. What I am saying here is that it also transforms the physical dimension of the vector on which we do the operation: it multiplies it with the reciprocal of the velocity dimension. Don't think too much about it, because I'll do yet another hat trick. We can think of the real and imaginary part of the wavefunction as being geometrically equivalent to the E and B vector. Just compare the illustrations below:

e-and-b Rising_circular

Of course, you are smart, and you’ll note the phase difference between the sine and the cosine (illustrated below). So what should we do with that? Not sure. Let’s hold our breath for the moment.

circle_cos_sin

Let’s first think about what dimension we could possible assign to the real part of the wavefunction. We said this oscillation stores half of the energy of the elementary particle that is being described by the wavefunction. How does that storage work for the E vector? As I explained in my post on the topic, the Poynting vector describes the energy flow in a varying electromagnetic field. It’s a bit of a convoluted story (which I won’t repeat here), but the upshot is that the energy density is given by the following formula:

u = (ε0/2)·E•E + (ε0·c2/2)·B•B

Its shape should not surprise you. The formula is quite intuitive really, even if its derivation is not. The formula represents the one thing that everyone knows about a wave, electromagnetic or not: the energy in it is proportional to the square of its amplitude, and so that's E•E = E2 and B•B = B2. You should also note the c2 factor that comes with the B•B product. It does two things here:

  1. As a physical constant, with some dimension of its own, it ensures that the dimensions on both sides of the equation come out alright.
  2. The magnitude of B is 1/c of that of E, so cB = E, and so that explains the extra c2 factor in the second term: we do get two waves for the price of one here and, therefore, twice the energy.

Speaking of dimensions, let’s quickly do the dimensional analysis:

  1. E is measured in newton per coulomb, so [E•E] = [E2] = N2/C2.
  2. B is measured in (N/C)/(m/s), so we get [B•B] = [B2] = (N2/C2)·(s2/m2). However, the dimension of our c2 factor is (m2/s2) and so we’re left with N2/C2. That’s nice, because we need to add stuff that’s expressed in the same units.
  3. The ε0 is that ubiquitous physical constant in electromagnetic theory: the electric constant, a.k.a. the vacuum permittivity. Besides ensuring proportionality, it also ‘fixes’ our units, and so we should trust it to do the same thing here, and it does: [ε0] = C2/(N·m2), so if we multiply that with N2/C2, we find that u is expressed in N/m2.

Why is N/m2 an energy density? The correct answer to that question involves a rather complicated analysis, but there is an easier way to think about it: just multiply N/m2 with m/m, and then its dimension becomes N·m/m3 = J/m3, so that's joule per cubic meter. That looks more like an energy density dimension, doesn't it? But it's actually the same thing. In any case, I need to move on.
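Before moving on, one practical note: you can automate this kind of dimensional bookkeeping. The toy Python checker below – nothing more than exponent maps over the base units N, C, m and s – re-does the three steps above:

```python
from collections import Counter

# Toy dimensional bookkeeping: a dimension is an exponent map over the base
# units N, C, m and s; multiplying quantities just adds the exponents.
def dim(**exps):
    return Counter(exps)

def mul(*dims):
    total = Counter()
    for d in dims:
        total.update(d)
    return {unit: exp for unit, exp in total.items() if exp != 0}

E2 = dim(N=2, C=-2)               # [E•E] = N²/C²
B2 = dim(N=2, C=-2, s=2, m=-2)    # [B•B] = (N²/C²)·(s²/m²)
c2 = dim(m=2, s=-2)               # [c²] = m²/s²
eps0 = dim(C=2, N=-1, m=-2)       # [ε0] = C²/(N·m²)

print(mul(eps0, E2))      # {'N': 1, 'm': -2} → N/m²
print(mul(eps0, c2, B2))  # {'N': 1, 'm': -2} → N/m² as well
```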

We talked about the Poynting vector, and said it represents an energy flow. So how does that work? It is also quite intuitive, as its formula really speaks for itself. Let me write it down:

∂u/∂t = −∇•S

Just look at it: u is the energy density, so that's the amount of energy per unit volume at a given point, and so whatever flows out of that point must represent its time rate of change. As for the −∇•S expression… Well… The ∇• operator is the divergence, and so it gives us the magnitude of a (vector) field's source or sink at a given point. If C is a vector field (any vector field, really), then ∇•C is a scalar, and if it's positive in a region, then that region is a source. Conversely, if it's negative, then it's a sink. To be precise, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. So, in this case, it gives us the volume density of the flux of S. If you're somewhat familiar with electromagnetic theory, then you will immediately note that the formula has exactly the same shape as the ∇•j = −∂ρ/∂t formula, which represents a flow of electric charge.

But I need to get on with my own story here. In order to not create confusion, I will denote the total energy by U, rather than E, because we will continue to use E for the magnitude of the electric field. We said the real and the imaginary component of the wavefunction were like the E and B vector, but what’s their dimension? It must involve force, but it should obviously not involve any electric charge. So what are our options here? You know the electric force law (i.e. Coulomb’s Law) and the gravitational force law are structurally similar:

Coulomb Law

gravitation law

So what if we would just guess that the dimension of the real and imaginary component of our wavefunction should involve a newton per kg factor (N/kg), so that’s force per mass unit rather than force per unit charge? But… Hey! Wait a minute! Newton’s force law defines the newton in terms of mass and acceleration, so we can do a substitution here: 1 N = 1 kg·m/s2 ⇔ 1 kg = 1 N·s2/m. Hence, our N/kg dimension becomes:

N/kg = N/(N·s2/m)= m/s2

What is this: m/s2? Is that the dimension of the a·cosθ term in the a·ei·θ = a·cosθ − i·a·sinθ wavefunction? I hear you. This is getting quite crazy, but let’s see where it leads us. To calculate the equivalent energy density, we’d then need an equivalent for the ε0 factor, which – replacing the C by kg in the [ε0] = C2/(N·m2) expression – would be equal to kg2/(N·m2). Because we know what we want (energy is defined using the force unit, not the mass unit), we’ll want to substitute the kg unit once again, so – temporarily using the μ0 symbol for the equivalent of that ε0 constant – we get:

[μ0] = (N·s2/m)2/(N·m2) = N·s4/m4

Hence, the dimension of the equivalent of that ε0·E2 term becomes:

[(μ0/2)]·[cosθ]2 = (N·s4/m4)·(m2/s4) = N/m2

Bingo! How does it work for the other component? The other component has the imaginary unit (i) in front. If we continue to pursue our comparison with the E and B vectors, we should assign an extra s/m dimension because of the ex and/or i factor, so the physical dimension of the i·sinθ term would be (m/s2)·(s/m) = 1/s. What? Just a reciprocal second? Relax. That second term in the energy density formula has the c2 factor, so it all works out:

[(μ0/2)]·[c2]·[i·sinθ]2 = [(μ0/2)]·[c2]·[i]2·[sinθ]2 = (N·s4/m4)·(m2/s2)·(s2/m2)·(m2/s4) = N/m2

As weird as it is, it all works out. We can calculate the energy density u and, hence, we can now also calculate the equivalent Poynting vector (S). However, I will let you think about that as an exercise. 🙂 Just note the grand conclusions:

  1. The physical dimension of the argument of the wavefunction is physical action (newton·meter·second) and Planck’s quantum of action is the scaling factor.
  2. The physical dimension of both the real and imaginary component of the elementary wavefunction is newton per kg (N/kg). This allows us to analyze the wavefunction as an energy propagation mechanism that is structurally similar to Maxwell’s equations, which represent the energy propagation mechanism when electromagnetic energy is involved.

As such, all we presented so far was a deep exploration of the mathematical equivalence between the gravitational and electromagnetic force laws:

Coulomb Law

gravitation law

The only difference is that mass comes in one color only, so to speak: it’s always positive. In contrast, electric charge comes in two colors: positive and negative. You can now guess what comes next. 🙂

Despite our grand conclusions, you should note we have not answered the most fundamental question of all. What is mass? What is electric charge? We have all these relations and equations, but are we any wiser, really? The answer to that question probably lies in general relativity: mass is that which curves spacetime. Likewise, we may look at electric charge as causing a very special type of spacetime curvature. However, even such an answer – which would involve a much more complicated mathematical analysis – may not satisfy you. In any case, I will let you digest this post. I hope you enjoyed it as much as I enjoyed writing it. 🙂

Post scriptum: Of all of the weird stuff I presented here, I think the dimensional analyses were the most interesting. Think of the N/kg = N/(N·s2/m) = m/s2 identity, for example. The m/s2 dimension is the dimension of physical acceleration (or deceleration): the rate of change of the velocity of an object. The identity comes straight out of Newton's force law:

F = m·a ⇔ F/m = a

Now look, once again, at the animation, and remember the formula for the argument of the wavefunction: θ = E0∙t’. The energy of the particle that is being described is the (angular) frequency of the real and imaginary components of the wavefunction.

circle_cos_sin

The relation between (1) the (angular) frequency of a harmonic oscillator (which is what the sine and cosine represent here) and (2) the acceleration along the axis is given by the following equation:

a(x) = −ω02·x

I’ll let you think about what that means. I know you will struggle with it – because I did – and, hence, let me give you the following hint:

  1. The energy of an ordinary string wave, like a guitar string oscillating in one dimension only, will be proportional to the square of the frequency.
  2. However, for two-dimensional waves – such as an electromagnetic wave – we find that the energy is directly proportional to the frequency. Think of Einstein’s E = h·f = ħ·ω relation, for example. There is no squaring here!

It is a strange observation. Those two-dimensional waves – the matter-wave, or the electromagnetic wave – give us two waves for the price of one, each carrying half of the total energy but, as a result, we no longer have that square function. Think about it. Solving the mystery will make you feel like you’ve squared the circle, which – as you know – is impossible. 🙂

Quantum Mechanics: The Other Introduction

About three weeks ago, I brought my most substantial posts together in one document: it’s the Deep Blue page of this site. I also published it on Amazon/Kindle. It’s nice. It crowns many years of self-study, and many nights of short and bad sleep – as I was mulling over yet another paradox haunting me in my dreams. It’s been an extraordinary climb but, frankly, the view from the top is magnificent. 🙂 

The offer is there: anyone who is willing to go through it and offer constructive and/or substantial comments will be included in the book’s acknowledgements section when I go for a second edition (which it needs, I think). First person to be acknowledged here is my wife though, Maria Elena Barron, as she has given me the spacetime:-) and, more importantly, the freedom to take this bull by its horns.

Below I just copy the foreword, just to give you a taste of it. 🙂

Foreword

Another introduction to quantum mechanics? Yep. I am not hoping to sell many copies, but I do hope my unusual background—I graduated as an economist, not as a physicist—will encourage you to take on the challenge and grind through this.

I’ve always wanted to thoroughly understand, rather than just vaguely know, those quintessential equations: the Lorentz transformations, the wavefunction and, above all, Schrödinger’s wave equation. In my bookcase, I’ve always had what is probably the most famous physics course in the history of physics: Richard Feynman’s Lectures on Physics, which have been used for decades, not only at Caltech but at many of the best universities in the world. Plus a few dozen other books. Popular books—which I now regret I ever read, because they were an utter waste of time: the language of physics is math and, hence, one should read physics in math—not in any other language.

But Feynman’s Lectures on Physics—three volumes of about fifty chapters each—are not easy to read. However, the experimental verification of the existence of the Higgs particle in CERN’s LHC accelerator a couple of years ago, and the award of the Nobel prize to the scientists who had predicted its existence (including Peter Higgs and François Englert), convinced me it was about time I took the bull by its horns. While I consider myself to be of average intelligence only, I do feel there’s value in the ideal of the ‘Renaissance man’ and, hence, I think stuff like this is something we all should try to understand—somehow. So I started to read, and I also started a blog (www.readingfeynman.org) to externalize my frustration as I tried to cope with the difficulties involved. The site attracted hundreds of visitors every week and, hence, it encouraged me to publish this booklet.

So what is it about? What makes it special? In essence, it is a common-sense introduction to the key concepts in quantum physics. However, while common-sense, it does not shy away from the math, which is complicated, but not impossible. So this little book is surely not a Guide to the Universe for Dummies. I do hope it will guide some Not-So-Dummies. It basically recycles what I consider to be my more interesting posts, but combines them in a comprehensive structure.

It is a bit of a philosophical analysis of quantum mechanics as well, as I will – hopefully – do a better job than others in distinguishing the mathematical concepts from what they are supposed to describe, i.e. physical reality.

Last but not least, it does offer some new didactic perspectives. For those who know the subject already, let me briefly point these out:

I. Few, if any, of the popular writers seem to have noted that the argument of the wavefunction (θ = E·t – p·x) – using natural units (hence, the numerical value of ħ and c is one), and for an object moving at constant velocity (hence, x = v·t) – can be written as the product of the proper time of the object and its rest mass:

θ = E·t – p·x = mv·t − mv·v·x = mv·(t − v·x)

⇔ θ = m0·(t − v·x)/√(1 – v2) = m0·t’

Hence, the argument of the wavefunction is just the proper time of the object with the rest mass acting as a scaling factor for the time: the internal clock of the object ticks much faster if it’s heavier. This symmetry between the argument of the wavefunction of the object as measured in its own (inertial) reference frame, and its argument as measured by us, in our own reference frame, is remarkable, and allows us to understand the nature of the wavefunction in a more intuitive way.

While this approach reflects Feynman’s idea of the photon stopwatch, the presentation in this booklet generalizes the concept for all wavefunctions, first and foremost the wavefunction of the matter-particles that we’re used to (e.g. electrons).

II. Few, if any, have thought of looking at Schrödinger’s wave equation as an energy propagation mechanism. In fact, when helping my daughter out as she was trying to understand non-linear regression (logit and Poisson regressions), I suddenly realized we can analyze the wavefunction as a link function that connects two physical spaces: the physical space of our moving object, and a physical energy space.

Re-inserting Planck’s quantum of action in the argument of the wavefunction – so we write θ as θ = (E/ħ)·t – (p/ħ)·x = [E·t – p·x]/ħ – we may assign a physical dimension to it: when interpreting ħ as a scaling factor only (and, hence, when we only consider its numerical value, not its physical dimension), θ becomes a quantity expressed in newton·meter·second, i.e. the (physical) dimension of action. It is only natural, then, that we would associate the real and imaginary part of the wavefunction with some physical dimension too, and a dimensional analysis of Schrödinger’s equation tells us this dimension must be energy.

This perspective allows us to look at the wavefunction as an energy propagation mechanism, with the real and imaginary part of the probability amplitude interacting in very much the same way as the electric and magnetic field vectors E and B. This leads me to the next point, which I make rather emphatically in this booklet:  the propagation mechanism for electromagnetic energy – as described by Maxwell’s equations – is mathematically equivalent to the propagation mechanism that’s implicit in the Schrödinger equation.

I am, therefore, able to present the Schrödinger equation in a much more coherent way, describing not only how this famous equation works for electrons, or matter-particles in general (i.e. fermions or spin-1/2 particles), which is probably the only use of the Schrödinger equation you are familiar with, but also how it works for bosons, including the photon, of course, but also the theoretical zero-spin boson!

In fact, I am personally rather proud of this. Not because I am doing something that hasn’t been done before (I am sure many have come to the same conclusions before me), but because one always has to trust one’s intuition. So let me say something about that third innovation: the photon wavefunction.

III. Let me tell you the little story behind my photon wavefunction. One of my acquaintances is a retired nuclear scientist. While he knew I was delving into it all, I knew he had little time to answer any of my queries. However, when I asked him about the wavefunction for photons, he bluntly told me photons didn’t have a wavefunction. I should just study Maxwell’s equations and that’s it: there’s no wavefunction for photons, just these traveling electric and magnetic field vectors. Look at Feynman’s Lectures, or any textbook, he said. None of them talk about photon wavefunctions. That’s true, but I knew he had to be wrong. I mulled over it for several months, and then just sat down and started to fiddle with Maxwell’s equations, assuming the oscillations of the E and B vector could be described by regular sinusoids. And – lo and behold! – I derived a wavefunction for the photon. It’s fully equivalent to the classical description, but the new expression solves the Schrödinger equation, if we modify it in a rather logical way: we have to double the diffusion constant, which makes sense, because E and B give you two waves for the price of one!

[…]

In any case, I am getting ahead of myself here, and so I should wrap up this rather long introduction. Let me just say that, through my rather long journey in search of understanding – rather than knowledge alone – I have learned there are so many wrong answers out there: wrong answers that hamper rather than promote a better understanding. Moreover, I was most shocked to find out that such wrong answers are not the preserve of amateurs alone! This emboldened me to write what I write here, and to publish it. Quantum mechanics is a logical and coherent framework, and it is not all that difficult to understand. One just needs good pointers, and that’s what I want to provide here.

As of now, it focuses on the mechanics in particular, i.e. the concept of the wavefunction and wave equation (better known as Schrödinger’s equation). The other aspect of quantum mechanics – i.e. the idea of uncertainty as implied by the quantum idea – will receive more attention in a later version of this document. I should also say I will limit myself to quantum electrodynamics (QED) only, so I won’t discuss quarks (i.e. quantum chromodynamics, which is an entirely different realm), nor will I delve into any of the other more recent advances of physics.

In the end, you’ll still be left with lots of unanswered questions. However, that’s quite OK, as Richard Feynman himself was of the opinion that he himself did not understand the topic the way he would like to understand it. But then that’s exactly what draws all of us to quantum physics: a common search for a deep and full understanding of reality, rather than just some superficial description of it, i.e. knowledge alone.

So let’s get on with it. I am not saying this is going to be easy reading. In fact, I blogged about much easier stuff than this in my blog—treating only aspects of the whole theory. This is the whole thing, and it’s not easy to swallow. In fact, it may well be too big to swallow as a whole. But please do give it a try. I wanted this to be an intuitive but formally correct introduction to quantum math. However, when everything is said and done, you are the only one who can judge if I reached that goal.

Of course, I should not forget the acknowledgements but… Well… It was a rather lonely venture, so I am only going to acknowledge my wife here, Maria, who gave me all of the spacetime and all of the freedom I needed, as I would get up early, or work late after coming home from my regular job. I sacrificed weekends, which we could have spent together, and – when mulling over yet another paradox – the nights were often short and bad. Frankly, it’s been an extraordinary climb, but the view from the top is magnificent.

I just need to insert one caution: my site (www.readingfeynman.org) includes animations, which make it much easier to grasp some of the mathematical concepts that I will be explaining. Hence, I warmly recommend you also have a look at that site, and its Deep Blue page in particular – as that page has the same contents, more or less, but the animations make it a much easier read.

Have fun with it!

Jean Louis Van Belle, BA, MA, BPhil, Drs.

Wave functions and equations: a summary

Post scriptum note added on 11 July 2016: This is one of the more speculative posts which led to my e-publication analyzing the wavefunction as an energy propagation. With the benefit of hindsight, I would recommend you immediately read the more recent exposé on the matter that is being presented here, which you can find by clicking on the provided link. In fact, I actually made some (small) mistakes when writing the post below.

Original post:

Schrödinger’s wave equation for spin-zero, spin-1/2, and spin-one particles in free space differ from each other by a factor two:

  1. For particles with zero spin, we write: ∂ψ/∂t = i·(ħ/m)·∇2ψ. We get this by multiplying the ħ/(2m) factor in Schrödinger’s original wave equation – which applies to spin-1/2 particles (e.g. electrons) only – by two. Hence, the correction that needs to be made is very straightforward.
  2. For fermions (spin-1/2 particles), Schrödinger’s equation is what it is: ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ.
  3. For spin-1 particles (photons), we have ∂ψ/∂t = i·(2ħ/m)·∇2ψ, so here we multiply the ħ/m factor in Schrödinger’s wave equation for spin-zero particles by two, which amounts to multiplying Schrödinger’s original coefficient by four.

Look at the coefficients carefully. It’s a strange succession:

  1. The ħ/m factor (which is just the reciprocal of the mass measured in units of ħ) works for spin-0 particles.
  2. For spin-1/2 particles, we take only half that factor: ħ/(2m) = (1/2)·(ħ/m).
  3. For spin-1 particles, we double that factor: 2ħ/m = 2·(ħ/m).

I describe the detail on my Deep Blue page, so please go there for the full development. What I did there can be summarized as follows:

  • The spin-one particle is the photon, and we derived the photon wavefunction from Maxwell’s equations in free space, and found that it solves the ∂ψ/∂t = i·(2ħ/m)·∇2ψ equation, not the ∂ψ/∂t = i·(ħ/m)·∇2ψ or ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ equations.
  • As for the spin-zero particles, we simplified the analysis by assuming our particle had zero rest mass, and we found that we were basically modeling an energy flow.
  • The analysis for spin-1/2 particles is just the standard analysis you’ll find in textbooks.

We can speculate about what things would look like for spin-3/2 particles, or for spin-2 particles, but let’s not do that here. In any case, we will come back to this. Let’s first focus on the more familiar terrain, i.e. the wave equation for spin-1/2 particles, such as protons or electrons. [A proton is not elementary – as it consists of quarks – but it is a spin-1/2 particle, i.e. a fermion.]

The phase and group velocity of the wavefunction for spin-1/2 particles (fermions)

We’ll start with the very beginning of it all, i.e. the two equations that the young Comte Louis de Broglie presented in his 1924 PhD thesis, which give us the temporal and spatial frequency of the wavefunction, i.e. the ω and k in the θ = ω·t − k·x argument of the a·ei·θ wavefunction:

  1. ω = E/ħ
  2. k = p/ħ

This allows us to calculate the phase velocity of the wavefunction:

vp = ω/k = (E/ħ)/(p/ħ) = E/p

This is the phase velocity of an elementary wavefunction. To model an actual particle, we would add several such waves with appropriate coefficients, with the uncertainty in the energy and momentum ensuring the component waves have different frequencies. For a single elementary wavefunction, the concept of a group velocity does not apply: the a·ei·θ wavefunction does not describe a localized particle, as the probability to find it somewhere is the same everywhere. We may want to think of our wavefunction as being confined to some narrow band in space, with us having no prior information about the probability density function, and, therefore, we assume a uniform distribution. Assuming our box in space is defined by Δx = x2 − x1, and imposing the normalization condition (all probabilities have to add up to one), we find that the following logic should hold:

(Δx)·a2 = (x2 − x1)·a2 = 1 ⇔ Δx = 1/a2

Capture
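A quick numeric sanity check of that normalization logic – the box edges and the wave number are arbitrary picks:

```python
import numpy as np

# A wavefunction with constant amplitude a over a box Δx has |ψ|² = a²
# everywhere in the box, so Δx·a² = 1 forces a² = 1/Δx.
x1, x2 = 0.0, 4.0
dx = x2 - x1
a = 1.0 / np.sqrt(dx)

x = np.linspace(x1, x2, 100000, endpoint=False)
psi = a * np.exp(1j * 5.0 * x)                   # elementary wave with k = 5
total = np.sum(np.abs(psi)**2) * (x[1] - x[0])   # ∫|ψ|²dx ≈ Δx·a²
print(a**2, 1.0 / dx, total)                     # 0.25 0.25 1.0
```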

However, we are, of course, interested in the group velocity, as the group velocity should correspond to the classical velocity of the particle. The group velocity of a composite wave is given by the vg = ∂ω/∂k formula. Of course, that formula assumes an unambiguous relation between the temporal and spatial frequency of the component waves, which we may want to denote as ωn and kn, with n = 1, 2, 3,… However, we will not use the index as the context makes it quite clear what we are talking about.

The relation between ωn and kn is known as the dispersion relation, and one particularly nice way to calculate ω as a function of k is to distinguish the real and imaginary parts of the ∂ψ/∂t =i·[ħ/(2m)]·∇2ψ wave equation and, hence, re-write it as a pair of two equations:

  1. Re(∂ψ/∂t) = −[ħ/(2m)]·Im(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2m)]·cos(kx − ωt)
  2. Im(∂ψ/∂t) = [ħ/(2m)]·Re(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2m)]·sin(kx − ωt)

Both equations imply the following dispersion relation:

ω = ħ·k2/(2m)

We can now calculate vg = ∂ω/∂k as:

vg = ∂ω/∂k = ∂[ħ·k2/(2m)]/∂k = 2ħk/(2m) = ħ·(p/ħ)/m = p/m = m·v/m = v

That’s nice, because it’s what we wanted to find. If the group velocity would not equal the classical velocity of our particle, then our model would not make sense.
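If you have sympy at hand, the whole derivation is a one-liner:

```python
import sympy as sp

# From the dispersion relation ω = ħ·k²/(2m), the group velocity ∂ω/∂k
# should come out as ħ·k/m = p/m = v.
hbar, k, m = sp.symbols('hbar k m', positive=True)
omega = hbar * k**2 / (2 * m)
print(sp.diff(omega, k))   # hbar*k/m, i.e. p/m = v
```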

We used the classical momentum formula in our calculation above: p = m·v. To calculate the phase velocity of our wavefunction, we need to calculate that E/p ratio and, hence, we need an energy formula. Here we have a lot of choice, as energy can be defined in many ways: is it rest energy, potential energy, or kinetic energy? At this point, I need to remind you of the basic concepts.

The argument of the wavefunction as the proper time

It is obvious that the energy concept that is to be used in the ω = E/ħ relation is the total energy. Louis de Broglie himself noted that the energy of a particle consisted of three parts:

  1. The particle’s rest energy m0c2, which de Broglie referred to as internal energy (Eint): it includes the rest mass of the ‘internal pieces’, as de Broglie put it (now we call those ‘internal pieces’ quarks), as well as their binding energy (i.e. the quarks’ interaction energy);
  2. Any potential energy (V) it may have because of some field (so de Broglie was not assuming the particle was traveling in free space): the field(s) can be anything—gravitational, electromagnetic—you name it: whatever changes the energy because of the position of the particle;
  3. The particle’s kinetic energy, which he wrote in terms of its momentum p: K.E. = m·v2/2 = m2·v2/(2m) = (m·v)2/(2m) = p2/(2m).

So the wavefunction, as de Broglie wrote it, can be written as follows:

ψ(θ) = ψ(x, t) = a·eiθ = a·e−i[(Eint + p2/(2m) + V)·t − p∙x]/ħ 

This formula allows us to analyze interesting phenomena such as the tunneling effect and, hence, you may want to stop here and start playing with it. However, you should note that the kinetic energy formula that is used here is non-relativistic. The relativistically correct energy formula is E = mv·c2, and the relativistically correct formula for the kinetic energy is the difference between the total energy and the rest energy:

K.E. = E − E0 = mv·c2 − m0·c2 = m0·γ·c2 − m0·c2 = m0·c2·(γ − 1), with γ the Lorentz factor.

At this point, we should simplify our calculations by adopting natural units, so as to ensure the numerical value of c = 1, and likewise for ħ. Hence, we assume all is described in Planck units, but please note that the physical dimensions of our variables do not change when adopting natural units: time is time, energy is energy, etcetera. But when using natural units, the E = mv·c2 formula reduces to E = mv. As for our formula for the momentum, this formula remains p = mv·v, but v is now some relative velocity, i.e. a fraction between 0 and 1. We can now re-write θ = (E/ħ)·t – (p/ħ)·x as:

θ = E·t – p·x = E·t − p·v·t = mv·t − mv·v·v·t = mv·(1 − v2)·t

We can also write this as:

ψ(x, t) = a·ei·(mv·t − p∙x) = a·ei·[(m0/√(1−v2))·t − (m0·v/√(1−v2))∙x] = a·ei·m0·(t − v∙x)/√(1−v2)

The (t − v∙x)/√(1−v2) factor in the argument is the proper time of the particle as given by  the formulas for the Lorentz transformation of spacetime:

relativity

However, both the θ = mv·(1 − v2)·t and the θ = m0·t’ = m0·(t − v∙x)/√(1−v2) expressions are relativistically correct (the quick numeric check below confirms they are the same thing). Note that the rest mass of the particle (m0) acts as a scaling factor as we multiply it with the proper time: a higher m0 gives the wavefunction a higher density, in time as well as in space.
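Here is that check – a small Python sketch with arbitrary picks for m0, v and t, evaluated along the path x = v·t (natural units):

```python
import numpy as np

# Check that θ = mv·(1 − v²)·t equals θ = m0·(t − v·x)/√(1 − v²) when x = v·t.
m0, v, t = 2.0, 0.6, 3.0
x = v * t
m_v = m0 / np.sqrt(1.0 - v**2)                    # relativistic mass

theta_1 = m_v * (1.0 - v**2) * t                  # θ = mv·(1 − v²)·t
theta_2 = m0 * (t - v * x) / np.sqrt(1.0 - v**2)  # θ = m0·t'
print(theta_1, theta_2)                           # both 4.8
```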

Let’s go back to our vp = E/p formula. Using natural units, it becomes:

vp = E/p = mv/(mv·v) = 1/v

Interesting! The phase velocity is the reciprocal of the classical velocity! This implies it is always superluminal, ranging from vp = ∞ to vp = 1 as v goes from 0 to 1 (= c), as illustrated in the simple graph below.

phase velocity

Let me note something here, as you may also want to use the dispersion relation, i.e. ω = ħ·k2/(2m), to calculate the phase velocity. You’d write:

vp = ω/k = [ħ·k2/(2m)]/k = ħ·k/(2m) = ħ·(p/ħ)/(2m) = m·v/(2m) = v/2

That’s a nonsensical result. Why do we get it? Because we are mixing two different mass concepts here: the mass that’s associated with the component wave, and the mass that’s associated with the composite wave. Think of it. That’s where Schrödinger’s equation is different from all of the other diffusion equations you’ve seen: the mass factor in the ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ equation is the mass of the particle that’s being represented by the wavefunction that solves the equation. Hence, the diffusion constant ħ/(2m) is not a property of the medium. In that sense, it’s different from the κ/k factor in the ∂T/∂t = (κ/k)·∇2T heat diffusion equation, for example. We don’t have a medium here and, therefore, Schrödinger’s equation and the associated wavefunction are intimately connected.
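The contrast is easy to reproduce numerically. The sketch below computes the correct vp = E/p = 1/v from the relativistic formulas (natural units, c = ħ = 1), and then the spurious v/2 you get when you plug the particle’s mass into the single-wave dispersion relation:

```python
import numpy as np

# Correct phase velocity from E/p, versus the nonsensical v/2 obtained by
# using ω = k²/(2m) with the particle's (relativistic) mass.
m0 = 1.0
for v in (0.1, 0.5, 0.9):
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    E, p = gamma * m0, gamma * m0 * v
    vp_correct = E / p                        # = 1/v, always superluminal
    vp_naive = (p**2 / (2 * gamma * m0)) / p  # = v/2
    print(v, vp_correct, 1.0 / v, vp_naive)
```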

It’s an interesting point, because if we’re going to be measuring the mass as multiples of ħ/2 (as suggested by the ħ/(2m) = 1/[m/(ħ/2)] factor itself), then its possible values (for ħ = 1) will be 1/2, 1, 3/2, 2, 5/2,… Now that should remind you of a few things—things like harmonics, or allowable spin values, or… Well… So many things. 🙂

Let’s do the exercise for bosons now.

The phase and group velocity of the wavefunction for spin-0 particles

My Deep Blue page explains why we need to drop the 1/2 factor in Schrödinger’s equation to make it fit the wavefunction for bosons. We distinguished two bosons: (1) the (theoretical) zero-mass particle (which has spin zero), and (2) the (actual) photon (which has spin one). Let’s first do the analysis for the spin-zero particle.

  • A zero-mass particle (i.e. a particle with zero rest mass) should be traveling at the speed of light: both its phase as well as its group velocity should be equal to c = 1. In fact, we’re not talking composite wavefunctions here, so there’s no such thing as a group velocity. We’re not adding waves: there is only one wavefunction. [Note that we don’t need to add waves with different frequencies in order to localize our particle, because quantum mechanics and relativity theory come together here in what might well be the most logical and absurd conclusion ever: as an outside observer, we’re going to see all those zero-mass particles as point objects whizzing by because of the relativistic length contraction. So their wavefunction is only all over spacetime in their proper space and time, but not in ours!]
  • Now, it’s easy to show that, if we choose our time and distance units such that c = 1, then the energy formula reduces to E = m∙c2 = m. Likewise, we find that p = m∙c = m. So we have this strange condition: E = p = m.
  • Now, this is not consistent with the ω = ħ·k2/(2m) we get out of the ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ equation, because E/ħ = ħ·(p/ħ)2/(2m) ⇔ E = m2/(2m) = m/2. That does not fit the E = p = m condition. The only way out is to drop the 1/2 factor, i.e. to multiply Schrödinger’s coefficient with 2.

Let’s quickly check if it does the trick. We assume E, p and m will be multiples of ħ/2 (E = p = m = n·(ħ/2)), so the wavefunction is ei∙[x − t]·n/2, Schrödinger’s constant becomes 2/n, and the derivatives for ∂ψ/∂t = i·(ħ/m)·∇2ψ are:

  • ∂ψ/∂t = −i·(n/2)·ei∙[x − t]·n/2
  • ∇2ψ = ∂2[ei∙[x − t]·n/2]/∂x2 = i·(n/2)·∂[ei∙[x − t]·n/2]/∂x = −(n2/4)·ei∙[x − t]·n/2

So the Schrödinger equation becomes:

−i·(n/2)·ei∙[x − t]·n/2 = −i·(2/n)·(n2/4)·ei∙[x − t]·n/2 ⇔ n/2 = n/2 ⇔ 1 = 1

As Feynman would say, it works like a charm, and note that n does not have to be some integer to make this work.
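You can let sympy do the charm-checking for you:

```python
import sympy as sp

# Check that ψ = exp(i·n·(x − t)/2) solves ∂ψ/∂t = i·(2/n)·∇²ψ, i.e. the
# spin-zero equation with ħ/m = 2/n (one spatial dimension, natural units).
x, t = sp.symbols('x t', real=True)
n = sp.symbols('n', positive=True)
psi = sp.exp(sp.I * n * (x - t) / 2)

lhs = sp.diff(psi, t)
rhs = sp.I * (2 / n) * sp.diff(psi, x, 2)
print(sp.simplify(lhs - rhs))  # 0
```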

So what makes spin-1/2 particles different? The answer is: they have both linear as well as angular momentum, and the equipartition theorem tells us the energy will be shared equally among both. Hence, the associated condition is not E = p = m, but E = p = 2m. We’ll come back to this.

Let’s now summarize how it works for spin-one particles.

The phase and group velocity of the wavefunction for spin-1 particles (photons)

Because of the particularities that characterize an electromagnetic wave, the wavefunction packs two waves, capturing both the electric as well as the magnetic field vector (i.e. E and B). For the detail, I’ll refer you to the mentioned page, because the proof is rather lengthy (but easy to follow, so please do check it out). I will just briefly summarize the logic here.

1. For the spin-zero particle, we measured E, m and p in units of – or as multiples of – the ħ/2 factor. Hence, the elementary wavefunction (i.e. the wavefunction for E = p = m = 1) for the zero-mass particle is ei(x/2 − t/2).

2. For the spin-1 particle (the photon), one can show that we get two of these elementary wavefunctions (ψE and ψB), and one can then prove that we can write the sum of the electric and magnetic field vector as:

E + B = ψE + ψB = E + i·E = √2·ei(x/2 − t/2 + π/4) = √2·ei(π/4)·ei(x/2 − t/2)

Hence, the photon has a special wavefunction. Does it solve the Schrödinger equation? It does when we use the 2ħ/m diffusion constant, rather than the ħ/m or ħ/(2m) coefficient. Let us quickly check it. The derivatives are:

  • ∂ψ/∂t = −√2·ei(π/4)·ei∙[x − t]/2·(i/2)
  • ∇2ψ = ∂2[√2·ei(π/4)·ei∙[x − t]/2]/∂x2 = √2·ei(π/4)·∂[ei∙[x − t]/2·(i/2)]/∂x = −√2·ei(π/4)·ei∙[x − t]/2·(1/4)

Note, however, that we have two mass, energy and momentum concepts here: EE, pE, mE and EB, pB, mB respectively. Hence, if EE = pE = mE = EB = pB = mB = 1/2, then E = EE + EB, p = pE + pB and m = mE + mB are all equal to 1. Hence, because E = p = m = 1 and we measure in units of ħ, the 2ħ/m factor is equal to 2 and, therefore, the modified Schrödinger equation ∂ψ/∂t = i·(2ħ/m)·∇2ψ becomes:

−i·√2·ei(π/4)·ei∙[x − t]/2·(1/2) = −i·2·√2·ei(π/4)·ei∙[x − t]/2·(1/4) ⇔ 1/2 = 2/4 = 1/2

It all works out. Let’s quickly check it for E, m and p being multiples of ħ, so we write: E = p = m = n·ħ = n. The wavefunction is then √2·ei(π/4)·ei∙[x − t]·n/2, Schrödinger’s 2ħ/m constant becomes 2ħ/m = 2/n, and the derivatives for ∂ψ/∂t = i·(2ħ/m)·∇2ψ are:

  • ∂ψ/∂t = −i·(n/2)·√2·ei(π/4)·ei∙[x − t]·n/2
  • ∇2ψ = ∂2[√2·ei(π/4)·ei∙[x − t]·n/2]/∂x2 = √2·ei(π/4)·i·(n/2)·∂[ei∙[x − t]·n/2]/∂x = −√2·ei(π/4)·(n2/4)·ei∙[x − t]·n/2

So the Schrödinger equation becomes:

−i·√2·ei(π/4)·(n/2)·ei∙[x − t]·n/2 = −i·√2·ei(π/4)·(2/n)·(n2/4)·ei∙[x − t]·n/2 ⇔ n/2 = n/2 ⇔ 1 = 1
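Again, sympy confirms it in a few lines:

```python
import sympy as sp

# Check that ψ = √2·exp(iπ/4)·exp(i·n·(x − t)/2) solves ∂ψ/∂t = i·(2/n)·∇²ψ,
# with 2ħ/m = 2/n because m = n·ħ here (natural units, one spatial dimension).
x, t = sp.symbols('x t', real=True)
n = sp.symbols('n', positive=True)
psi = sp.sqrt(2) * sp.exp(sp.I * sp.pi / 4) * sp.exp(sp.I * n * (x - t) / 2)

lhs = sp.diff(psi, t)
rhs = sp.I * (2 / n) * sp.diff(psi, x, 2)
print(sp.simplify(lhs - rhs))  # 0
```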

It works like a charm again. Note the subtlety of the difference between the ħ/(2m) and 2ħ/m factor: it depends on us measuring the mass (and, hence, the energy and momentum as well) in units of ħ/2 (for spin-0 particles) or, alternatively (for spin-1 particles), in units of ħ. This is very deep—but it does make sense in light of the En = n·ħ·ω = n·h·f formula that solves the black-body radiation problem, as illustrated below. [The formula next to the energy levels is the probability of an atomic oscillator occupying that energy level, which is given by Boltzmann’s Law. You can check things in my post on it.]

energy levels

It is now time to look at something else.

Schrödinger’s equation as an energy propagation mechanism

The Schrödinger equations above are not complete. The complete equation includes force fields, i.e. potential energy:

schrodinger 5

To write the equation like this, we need to move the i and the ħ of our ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ equation to the left-hand side, and multiply both sides with −1. [Remember: 1/i = −i.] Now, it is very interesting to do a dimensional analysis of this equation. Let’s do the right-hand side first. The ħ2 factor in the ħ2/(2m) coefficient is expressed in J2·s2. Now that doesn’t make much sense, but then that mass factor in the denominator makes everything come out alright. Indeed, we can use the mass-energy equivalence relation to express m in J/(m/s)2 units. So we get: (J2·s2)·[(m/s)2/J] = J·m2. But so we multiply that with some quantity (the Laplacian) that’s expressed per m2. So −(ħ2/2m)·∇2ψ is something expressed in joule, so it’s some amount of energy! Interesting, isn’t it? [Note that it works out fine with the additional V·ψ term, which is also expressed in joule.] On the left-hand side, we have ħ, and its dimension is the action dimension: J·s, i.e. force times distance times time (N·m·s). So we multiply that with a time derivative and we get J once again, the unit of energy. So it works out: we have joule units both left and right. But what does it mean?

Well… The Laplacian on the right-hand side works just the same as for our heat diffusion equation: it gives us a flux density, i.e. something expressed per square meter (1/m2). Likewise, the time derivative on the left-hand side gives us a flow per second. But so what is it that is flowing here? Well… My interpretation is that it is energy, and it’s flowing between a real and an imaginary space—but don’t be fooled by the terms here: both spaces are equally real, as both have an actual physical dimension. Let me explain.

Things become somewhat more comprehensible when we remind ourselves that the Schrödinger equation is equivalent to the following pair of equations:

  1. Re(∂ψ/∂t) =   −(ħ/2m)·Im(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·(ħ/2m)·cos(kx − ωt)
  2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·(ħ/2m)·sin(kx − ωt)

So what? Let me insert an illustration here. See what happens. The wavefunction acts as a link function between our physical spacetime and some other space whose dimensions – in my humble opinion – are also physical. We have those sines and cosines, which mirror the energy of the system at any point in time, as measured by the proper time of the system.

summary

Let me be more precise. The wavefunction, as a link function between two spaces here, associates every point in spacetime with some real as well as some imaginary energy here—but, as mentioned above, that imaginary energy is as real as the real energy. What it embodies really is the energy conservation law: at any point in time (as measured by the proper time) the sum of kinetic and potential energy must be equal to some constant, and so that’s what’s shown here. Indeed, you should note the phase shift between the sine and the cosine function: if one reaches the +1 or −1 value, then the other function reaches the zero point—and vice versa. It’s a beautiful structure.

Of course, the million-dollar question is: is it a physical structure, or a mathematical structure? The answer is: it’s a bit of both. It’s a mathematical structure but, at the same time, its dimension is physical: it’s an energy space. It’s that energy that explains why amplitudes interfere—which, as you know, is what they do. So these amplitudes are something real and, as the dimensional analysis of Schrödinger’s equation reveals, their dimension is expressed in joule. Then… Well… Then these physical equations say what they say, don’t they? And what they say is something like the diagram below.

summary 2

Note that the diagram above does not show the phase difference between the two springs. The animation below does a better job here, although you need to realize the hand of the clock will move faster or slower as our object travels through force fields and accelerates or decelerates accordingly.

Circle_cos_sin

We may relate that picture above to the principle of least action, which ensures that the difference between the kinetic energy (KE) and potential energy (PE) in the integrand of the action integral, i.e.

action

is minimized along the path of travel.

The spring metaphor should also make us think of the energy formula for a harmonic oscillator, which tells us that the total energy – kinetic (i.e. the energy related to its momentum) plus potential (i.e. the energy stored in the spring) – is equal to T + U = a02·m·ω02/2. The ω0 here is the angular velocity. We have two springs here so – taking the maximum amplitude a0 equal to one – the total energy would be the sum of both, i.e. m·ω02, without the 1/2 factor. Does that make sense? It’s like an E = m·v2 equation, so that’s twice the (non-relativistic) kinetic energy. Does that formula make any sense?

In the context of what we’re discussing here, it does. Think about the limit situation by trying to imagine a zero-mass particle here (I am talking a zero-mass spin-1/2 particle this time). It would have no rest energy, so its only energy is kinetic, which is equal to:

K.E. = E − E0 = mv·c2 − m0·c2 = mc·c2

Why is mv equal to mc? Zero-mass particles must travel at the speed of light, as the slightest force on them gives them an infinite acceleration. So there we are: the m·ω02 equation makes sense! But what if we have a non-zero rest mass? In that case, look at that pair of equations again: they give us a dispersion relation, i.e. a relation between ω and k. Indeed, using natural units again, so the numerical value of ħ = 1, we can write:

ω = k2/(2m) = p2/(2m) = (m·v)2/(2m) = m·v2/2

This equation seems to represent the kinetic energy but m is not the rest mass here: it’s the relativistic mass, so that makes it different from the classical kinetic energy formula (K.E. = m0·v2/2). [It may be useful here to remind you of how we get that classical formula. We basically integrate force over distance, from some start to some final point of a path in spacetime. So we write: ∫ F·ds = ∫ (m·a)·ds = ∫ [m·(dv/dt)]·ds = ∫ [m·(ds/dt)]·dv = ∫ m·v·dv. So we can solve that using the m·v2/2 primitive, but only if m does not vary, i.e. if m = m0. If velocities are high, we need the relativistic mass concept.]
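For completeness, the bracketed integral is easily verified symbolically:

```python
import sympy as sp

# With a constant mass, ∫ m·v dv from 0 to v gives the classical m·v²/2.
m, v, u = sp.symbols('m v u', positive=True)
print(sp.integrate(m * u, (u, 0, v)))  # m*v**2/2
```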

So we have a new energy concept here: m·v2, and it’s split over those two springs. Hmm… The interpretation of all of this is not so easy, so I will need to re-visit this. As for now, however, it looks like the Universe can be represented by a V-twin engine! 🙂

V-Twin engine

 

Is it real?

You may still doubt whether that new ‘space’ has an actual energy dimension. It’s a figment of our mind, right? Well… Yes and no. Again, it’s a bit of a mixture between a mathematical and a physical space: it’s definitely not our physical space, as it’s not the spacetime we’re living in. But, having said that, I don’t think this energy space is just a figment of our mind. Let me give you some additional reasons, besides the dimensional analysis we did above.

For example, there is the fact that we need to take the absolute square of the wavefunction to get the probability that our elementary particle is actually right there! Now that’s something real! Hence, let me say a few more things about that. The absolute square gets rid of the time factor. Just write it out to see what happens:

|reiθ|2 = |r|2|eiθ|2 = r2[√(cos2θ + sin2θ)]2 = r2(√1)2 = r2

Now, the r gives us the maximum amplitude (sorry for the mix of terminology here: I am just talking the wave amplitude here – i.e. the classical concept of an amplitude – not the quantum-mechanical concept of a probability amplitude). Now, we know that the energy of a wave – any wave, really – is proportional to the square of its amplitude. It would also be logical to expect that the probability of finding our particle at some point x is proportional to the energy density there, isn’t it? [I know what you’ll say now: you’re squaring the amplitude, so if the dimension of its square is energy, then its own dimension must be the square root, right? No. Wrong. That’s why this confusion between amplitude and probability amplitude is so bad. Look at the formula: we’re squaring the sine and cosine, to then take the square root again, so the dimension doesn’t change: it’s √J2 = J.]

The third reason why I think the probability amplitude represents some energy is that its real and imaginary part also interfere with each other, as is evident when you take the ordinary square (i.e. not the absolute square). Then the i2 = −1 rule comes into play and, therefore, the square of the imaginary part starts messing with the square of the real part. Just write it out:

(r·eiθ)2 = r2·(cosθ + i·sinθ)2 = r2·(cos2θ – sin2θ + 2i·cosθ·sinθ) = r2·(1 – 2sin2θ + 2i·cosθ·sinθ)
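A two-line numeric illustration of the difference between both squares (the values are arbitrary):

```python
import cmath

# The absolute square kills the phase: |r·e^(iθ)|² = r², whatever θ is.
# The ordinary square does not: (r·e^(iθ))² = r²·e^(2iθ) still mixes the
# real and imaginary parts.
r, theta = 2.0, 0.7
z = r * cmath.exp(1j * theta)

print(abs(z)**2)  # 4.0, independent of theta
print(z**2)       # r²·(cos2θ + i·sin2θ): depends on theta
```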

As mentioned above, if there’s interference, then something is happening, and so then we’re talking something real. Hence, the real and imaginary part of the wavefunction must have some dimension, and not just any dimension: it must be energy, as that’s the currency of the Universe, so to speak.

Let me add a philosophical note here—or an ontological note, I should say. When you think we should only have one physical space, you’re right. This new physical space, in which we relate energy to time, is not our physical space. It’s not reality—as we know it, as we experience it. So, in that sense, you’re right. It’s not physical space. But then… Well… It’s a definitional matter. Any space whose dimensions are physical, is a physical space for me. But then I should probably be more careful. What we have here is some kind of projection of our physical space to a space that lacks… Well… It lacks the spatial dimension. It’s just time – but a special kind of time: relativistic proper time – and energy—albeit energy in two dimensions, so to speak. So… What can I say? Just what I said a couple of times already: it’s some kind of mixture between a physical and mathematical space. But then… Well… Our own physical space – including the spatial dimension – is something like a mixture as well, isn’t it? We can try to disentangle them – which is what I am trying to do – but we’ll never fully succeed.

The Imaginary Energy Space

Post scriptum note added on 11 July 2016: This is one of the more speculative posts which led to my e-publication analyzing the wavefunction as an energy propagation. With the benefit of hindsight, I would recommend you immediately read the more recent exposé on the matter that is being presented here, which you can find by clicking on the provided link.

Original post:

Intriguing title, isn’t it? You’ll think this is going to be highly speculative and you’re right. In fact, I could also have written: the imaginary action space, or the imaginary momentum space. Whatever. It all works! It’s an imaginary space – but a very real one, because it holds energy, or momentum, or a combination of both, i.e. action. 🙂

So the title is either going to deter you or, else, encourage you to read on. I hope it’s the latter. 🙂

In my post on Richard Feynman’s exposé on how Schrödinger got his famous wave equation, I noted an ambiguity in how he deals with the energy concept. I wrote that piece in February, and we are now May. In-between, I looked at Schrödinger’s equation from various perspectives, as evidenced from the many posts that followed that February post, which I summarized on my Deep Blue page, where I note the following:

  1. The argument of the wavefunction (i.e. θ = ωt – kx = [E·t – p·x]/ħ) is just the proper time of the object that’s being represented by the wavefunction (which, in most cases, is an elementary particle—an electron, for example).
  2. The 1/2 factor in Schrödinger’s equation (∂ψ/∂t = i·(ħ/2m)·∇2ψ) doesn’t make all that much sense, so we should just drop it. Writing ∂ψ/∂t = i·(ħ/m)·∇2ψ (i.e. Schrödinger’s equation without the 1/2 factor) does away with the mentioned ambiguities and, more importantly, avoids obvious contradictions.

Both remarks are rather unusual—especially the second one. In fact, if you’re not shocked by what I wrote above (Schrödinger got something wrong!), then stop reading—because then you’re likely not to understand a thing of what follows. 🙂 In any case, I thought it would be good to follow up by devoting a separate post to this matter.

The argument of the wavefunction as the proper time

Frankly, it took me quite a while to see that the argument of the wavefunction is nothing but the t’ = (t − v∙x)/√(1−v2) formula that we know from the Lorentz transformation of spacetime. Let me quickly give you the formulas (just remember we use natural units, so c = 1):

relativity

In fact, let me be precise: the argument of the wavefunction also has the particle’s rest mass m0 in it. That mass factor (m0) appears in it as a general scaling factor, so it determines the density of the wavefunction both in time as well as in space. Let me jot it down:

ψ(x, t) = a·ei·(mv·t − p∙x) = a·ei·[(m0/√(1−v2))·t − (m0·v/√(1−v2))∙x] = a·ei·m0·(t − v∙x)/√(1−v2)

Huh? Yes. Let me show you how we get from θ = ωt – kx = [E·t – p·x]/ħ to θ = mv·t − p∙x. It’s really easy. We first need to choose our units such that the speed of light and Planck’s constant are numerically equal to one, so we write: c = 1 and ħ = 1. So now the 1/ħ factor no longer appears.

[Let me note something here: using natural units does not do away with the dimensions: the dimensions of whatever is there remain what they are. For example, energy remains what it is, and so that’s force over distance: 1 joule = 1 newton·meter (1 J = 1 N·m). Likewise, momentum remains what it is: force times time (or mass times velocity). Finally, the dimension of the quantum of action doesn’t disappear either: it remains the product of force, distance and time (N·m·s). So you should distinguish between the numerical value of our variables and their dimension. Always! That’s where physics is different from algebra: the equations actually mean something!]

Now, because we’re working in natural units, the numerical value of both c and c2 will be equal to 1. It’s obvious, then, that Einstein’s mass-energy equivalence relation reduces from E = mv·c2 to E = mv. You can work out the rest yourself – noting that p = mv·v and mv = m0/√(1−v2). Done! For a more intuitive explanation, I refer you to the above-mentioned page.

So that’s for the wavefunction. Let’s now look at Schrödinger’s wave equation, i.e. that differential equation of which our wavefunction is a solution. In my introduction, I bluntly said there was something wrong with it: that 1/2 factor shouldn’t be there. Why not?

What’s wrong with Schrödinger’s equation?

When deriving his famous equation, Schrödinger uses the mass concept as it appears in the classical kinetic energy formula: K.E. = m·v2/2, and that’s why – after all the complicated turns – that 1/2 factor is there. There are many reasons why that factor doesn’t make sense. Let me sum up a few.

[I] The most important reason is that de Broglie made it quite clear that the energy concept in his equations for the temporal and spatial frequency of the wavefunction – i.e. the ω = E/ħ and k = p/ħ relations – is the total energy, including rest energy (m0), kinetic energy (m·v2/2) and any potential energy (V). In fact, if we just multiply the two de Broglie equations (a.k.a. the matter-wave equations) and use the old-fashioned v = f·λ relation (so we write E as E = ω·ħ = (2π·f)·(h/2π) = f·h, and p as p = k·ħ = (2π/λ)·(h/2π) = h/λ and, therefore, we have f = E/h and λ = h/p), we find that the energy concept that’s implicit in the two matter-wave equations is equal to E = m∙v2, as shown below:

  1. f·λ = (E/h)·(h/p) = E/p
  2. v = f·λ ⇒ f·λ = v = E/p ⇔ E = v·p = v·(m·v) ⇒ E = m·v2

Huh? E = m∙v2? Yes. Not E = m∙c2 or m·v2/2 or whatever else you might be thinking of. In fact, this E = m∙v2 formula makes a lot of sense in light of the two following points.
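If you want to double-check that little derivation, here is a quick sympy sketch:

```python
import sympy as sp

# Combining f = E/h, λ = h/p and v = f·λ (with p = m·v) forces E = m·v².
E, h, m, v = sp.symbols('E h m v', positive=True)
f, lam = E / h, h / (m * v)
print(sp.solve(sp.Eq(f * lam, v), E))  # [m*v**2]
```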

Skeptical note: You may – and actually should – wonder whether we can use that v = f·λ relation for a wave like this, i.e. a wave with both a real (cos(−θ)) as well as an imaginary component (i·sin(−θ)). It’s a deep question, and I’ll come back to it later. But… Yes. It’s the right question to ask. 😦

[II] Newton told us that force is mass times acceleration. Newton’s law is still valid in Einstein’s world. The only difference between Newton’s and Einstein’s world is that, since Einstein, we should treat the mass factor as a variable as well. We write: F = mv·a = [m0/√(1−v²)]·a. This formula gives us the definition of the newton as a force unit: 1 N = 1 kg·(m/s)/s = 1 kg·m/s². [Note that the 1/√(1−v²) factor – i.e. the Lorentz factor (γ) – has no dimension, because v is measured as a relative velocity here, i.e. as a fraction between 0 and 1.]
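To get a feel for how fast that mass factor grows with velocity, here’s a trivial numerical illustration (made-up numbers, with a unit rest mass):

    import math

    def m_v(m0, v):
        """Relativistic mass mv = m0/√(1−v²), with v a fraction of c."""
        return m0 / math.sqrt(1 - v**2)

    for v in (0.1, 0.5, 0.9, 0.99):
        print(f"v = {v}: m_v = {m_v(1.0, v):.3f}")   # inertia blows up as v -> 1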

Now, you’ll agree the definition of energy as a force over some distance is valid in Einstein’s world as well. Hence, if 1 joule is 1 N·m, then 1 J is also equal to 1 (kg·m/s²)·m = 1 kg·(m²/s²), so this also reflects the E = m∙v² concept. [I can hear you mutter: that kg factor refers to the rest mass, no? No. It doesn’t. The kg is just a measure of inertia: as a unit, it applies to both m0 as well as mv. Full stop.]

Very skeptical note: You will say this doesn’t prove anything – because this argument just shows the dimensional analysis for both equations (i.e. E = m∙v² and E = m∙c²) is OK. Hmm… Yes. You’re right. 🙂 But the next point will surely convince you! 🙂

[III] The third argument is the most intricate and the most beautiful at the same time—not because it’s simple (like the arguments above) but because it gives us an interpretation of what’s going on here. It’s fairly easy to verify that Schrödinger’s equation – i.e. the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation, including the 1/2 factor to which I object – is equivalent to the following set of two equations:

  1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ)

[In case you don’t see it immediately, note that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. However, here we have something like this: a + i·b = i·(c + i·d) = i·c + i²·d = −d + i·c (remember i² = −1).]

Now, before we proceed (i.e. before I show you what’s wrong here with that 1/2 factor), let us look at the dimensions first. For that, we’d better analyze the complete Schrödinger equation so as to make sure we’re not doing anything stupid here by looking at one aspect of the equation only. The complete equation, in its original form, is:

i·ħ·∂ψ/∂t = −(ħ²/2m)·∇²ψ + V·ψ

Notice that, to simplify the analysis above, I had moved the i and the ħ on the left-hand side to the right-hand side (note that 1/i = −i, so −(ħ²/2m)/(i·ħ) = i·ħ/2m). Now, the ħ² factor on the right-hand side is expressed in J²·s². Now that doesn’t make much sense, but then that mass factor in the denominator makes everything come out alright. Indeed, we can use the mass-energy equivalence relation to express m in J/(m/s)² units. So our ħ²/2m coefficient is expressed in (J²·s²)/[J/(m/s)²] = J·m². Now we multiply that by that Laplacian operating on some scalar, which yields some quantity per square meter. So the whole right-hand side becomes some amount expressed in joule, i.e. the unit of energy! Interesting, isn’t it?

On the left-hand side, we have i and ħ. We shouldn’t worry about the imaginary unit because we can treat that as just another number, albeit a very special number (because its square is minus 1). However, in this equation, it’s like a mathematical constant and you can think of it as something like π or e. [Think of the magical formula: e^(iπ) = i² = −1.] In contrast, ħ is a physical constant, and so that constant comes with some dimension and, therefore, we cannot just do what we want. [I’ll show, later, that even moving it to the other side of the equation comes with interpretation problems, so be careful with physical constants, as they really mean something!] In this case, its dimension is the action dimension: J·s = N·m·s, so that’s force times distance times time. So we multiply that with a time derivative and we get joule once again (N·m·s/s = N·m = J), so that’s the unit of energy. So it works out: we have joule units both left and right in Schrödinger’s equation. Nice! Yes. But what does it mean? 🙂
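In fact, you can get a machine to do this dimensional bookkeeping. The sketch below uses the third-party pint units library – my choice of tool, nothing the argument depends on – with the electron mass picked purely as an example:

    import pint

    u = pint.UnitRegistry()
    hbar = 1.054571817e-34 * u.joule * u.second   # ħ, in J·s
    m = 9.1093837015e-31 * u.kilogram             # an electron's mass, say

    # right-hand side: ħ²/2m times the 1/m² that the Laplacian brings in
    rhs = (hbar**2 / (2 * m)) / u.meter**2
    print(rhs.to(u.joule))                        # comes out in joule

    # left-hand side: ħ times the 1/s that the time derivative brings in
    lhs = hbar / u.second
    print(lhs.to(u.joule))                        # joule again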

Well… You know that we can – and should – think of Schrödinger’s equation as a diffusion equation – just like a heat diffusion equation, for example – but then one describing the diffusion of a probability amplitude. [In case you are not familiar with this interpretation, please do check my post on it, or my Deep Blue page.] But then we didn’t describe the mechanism in very much detail, so let me try to do that now and, in the process, finally explain the problem with the 1/2 factor.

The missing energy

There are various ways to explain the problem. One of them involves calculating group and phase velocities of the elementary wavefunction satisfying Schrödinger’s equation but that’s a more complicated approach and I’ve done that elsewhere, so just click the reference if you prefer the more complicated stuff. I find it easier to just use those two equations above:

  1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ)

The argument is the following: if our elementary wavefunction is equal to e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt), then it’s easy to prove that this pair of conditions is fulfilled if, and only if, ω = k²·(ħ/2m). [Note that I am omitting the normalization coefficient in front of the wavefunction: you can put it back in if you want. The argument here is valid, with or without normalization coefficients.] Easy? Yes. Check it out. The time derivative on the left-hand side is equal to:

∂ψ/∂t = −iω·e^(i(kx − ωt)) = −iω·[cos(kx − ωt) + i·sin(kx − ωt)] = ω·sin(kx − ωt) − i·ω·cos(kx − ωt)

And the second-order derivative on the right-hand side is equal to:

∇²ψ = ∂²ψ/∂x² = (i·k)²·e^(i(kx − ωt)) = −k²·cos(kx − ωt) − i·k²·sin(kx − ωt)

So the two equations above are equivalent to writing:

  1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ) ⇔ ω·sin(kx − ωt) = k²·(ħ/2m)·sin(kx − ωt)
  2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ) ⇔ −ω·cos(kx − ωt) = −k²·(ħ/2m)·cos(kx − ωt)

So both conditions are fulfilled if, and only if, ω = k²·(ħ/2m). You’ll say: so what? Well… We have a contradiction here—something that doesn’t make sense. Indeed, the second of the two de Broglie equations (always look at them as a pair) tells us that k = p/ħ, so we can re-write the ω = k²·(ħ/2m) condition as:

ω/k = vp = [k²·(ħ/2m)]/k = k·ħ/(2m) = (p/ħ)·(ħ/2m) = p/2m ⇔ p = 2m·vp

You’ll say: so what? Well… Stop reading, I’d say. That p = 2m·vp doesn’t make sense—at all! Nope! In fact, if you thought that the E = m·v² formula is weird—which, I hope, is no longer the case by now—then… Well… This p = 2m·vp equation is much weirder. In fact, it’s plain nonsense: this condition makes no sense whatsoever. The only way out is to remove the 1/2 factor, and to re-write the Schrödinger equation as I wrote it, i.e. with an ħ/m coefficient only, rather than a (1/2)·(ħ/m) coefficient.

Huh? Yes.
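In case you don’t feel like grinding through the sines and cosines yourself, here’s a small sympy sketch of the same argument: it splits Schrödinger’s equation (with the 1/2 factor) into its real and imaginary parts and shows both vanish only if ω = k²·(ħ/2m):

    import sympy as sp

    x, t = sp.symbols('x t', real=True)
    hbar, m, k, w = sp.symbols('hbar m k omega', positive=True)

    theta = k * x - w * t
    psi = sp.cos(theta) + sp.I * sp.sin(theta)   # e^(i(kx − ωt))

    dpsi_dt = sp.diff(psi, t)                    # ∂ψ/∂t
    lap = sp.diff(psi, x, 2)                     # ∇²ψ (one dimension)

    # the pair of real equations hiding in ∂ψ/∂t = i·(ħ/2m)·∇²ψ
    eq1 = sp.simplify(sp.re(dpsi_dt) + (hbar / (2 * m)) * sp.im(lap))
    eq2 = sp.simplify(sp.im(dpsi_dt) - (hbar / (2 * m)) * sp.re(lap))

    print(sp.factor(eq1))   # ∝ (ω − k²·ħ/2m)·sin(kx − ωt)
    print(sp.factor(eq2))   # ∝ (ω − k²·ħ/2m)·cos(kx − ωt)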

As mentioned above, I could do those group and phase velocity calculations to show you what rubbish that 1/2 factor leads to – and I’ll do that eventually – but let me first find yet another way to present the same paradox. Let’s simplify our life by choosing our units such that c = ħ = 1, so we’re using so-called natural units rather than our SI units. Our mass-energy equivalence then becomes: E = m·c² = m·1² = m. [Note that switching to natural units doesn’t do anything to the physical dimensions: a force remains a force, a distance remains a distance, and so on. So we’d still measure energy and mass in different but equivalent units. Hence, the equality sign should not make you think mass and energy are actually the same: energy is energy (i.e. force times distance), while mass is mass (i.e. a measure of inertia). I am saying this because it’s important, and because it took me a while to make these rather subtle distinctions.]

Let’s now go one step further and imagine a hypothetical particle with zero rest mass, so m0 = 0. Hence, all its energy is kinetic and so we write: K.E. = mv·v²/2. Now, because this particle has zero rest mass, the slightest acceleration will make it travel at the speed of light. In fact, we would expect it to travel at the speed of light, so mv = mc and, according to the mass-energy equivalence relation, its total energy is, effectively, E = mv = mc. However, we just said its total energy is kinetic energy only. Hence, its total energy must be equal to E = K.E. = mc·c²/2 = mc/2. So we’ve got only half the energy we need. Where’s the other half? Where’s the missing energy? Quid est veritas? (What is the truth?) Is its energy E = mc or E = mc/2?

It’s just a paradox, of course, but one we have to solve. Of course, we may just say we trust Einstein’s E = m·c² formula more than the kinetic energy formula, but that answer is not very scientific. 🙂 We’ve got a problem here and, in order to solve it, I’ve come to the following conclusion: just because of its sheer existence, our zero-mass particle must have some hidden energy, and that hidden energy is also equal to E = m·c²/2. Hence, the kinetic and the hidden energy add up to E = m·c² and all is alright.

Huh? Hidden energy? I must be joking, right?

Well… No. Let me explain. Oh, and just in case you wonder why I bother to imagine zero-mass particles, let me tell you: first, it’s a step towards finding a wavefunction for a photon and, second, you’ll see it just amounts to modeling the propagation mechanism of energy itself. 🙂

The hidden energy as imaginary energy

I am tempted to refer to the missing energy as imaginary energy, because it’s linked to the imaginary part of the wavefunction. However, it’s anything but imaginary: it’s as real as the imaginary part of the wavefunction. [I know that sounds a bit nonsensical, but… Well… Think about it. And read on!]

Back to that factor 1/2. As mentioned above, it also pops up when calculating the group and the phase velocity of the wavefunction. In fact, let me show you that calculation now. [Sorry. Just hang in there.] It goes like this.

The de Broglie relations tell us that the k and the ω in the e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction (i.e. the spatial and temporal frequency respectively) are equal to k = p/ħ, and ω = E/ħ. Let’s now think of that zero-mass particle once more, so we assume all of its energy is kinetic: no rest energy, no potential! So… If we now use the kinetic energy formula E = m·v²/2 – which we can also write as E = m·v·v/2 = p·v/2 = p·p/2m = p²/2m, with v = p/m the classical velocity of the elementary particle that Louis de Broglie was thinking of – then we can calculate the group velocity of our e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction as:

vg = ∂ω/∂k = ∂[E/ħ]/∂[p/ħ] = ∂E/∂p = ∂[p²/2m]/∂p = 2p/2m = p/m = v

[Don’t tell me I can’t treat m as a constant when calculating ∂ω/∂k: I can. Think about it.]

Fine. Now the phase velocity. For the phase velocity of our e^(i(kx − ωt)) wavefunction, we find:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = (p²/2m)/p = p/2m = v/2

So that’s only half of v: it’s the 1/2 factor once more! Strange, isn’t it? Why would we get a different value for the phase velocity here? It’s not like we have two different frequencies here, do we? Well… No. You may also note that the phase velocity turns out to be smaller than the group velocity (as mentioned, it’s only half of the group velocity), which is quite exceptional as well! So… Well… What’s the matter here? We’ve got a problem!
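You can make the factor-of-two gap tangible numerically too. A minimal sketch, with made-up unit values for ħ and m, and the dispersion relation ω(k) that the E = p²/2m formula implies:

    hbar, m = 1.0, 1.0          # made-up unit values

    def omega(k):
        return hbar * k**2 / (2 * m)   # ω(k) implied by E = p²/2m

    k = 2.0
    v_phase = omega(k) / k                                 # ω/k
    dk = 1e-6
    v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)   # numerical ∂ω/∂k

    print(v_phase, v_group, v_group / v_phase)   # the ratio comes out as 2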

What’s going on here? We have only one wave here—one frequency and, hence, only one k and ω. However, on the other hand, it’s also true that the e^(i(kx − ωt)) wavefunction gives us two functions for the price of one—one real and one imaginary: e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt). So the question here is: are we adding waves, or are we not? It’s a deep question. If we’re adding waves, we may get different group and phase velocities, but if we’re not, then… Well… Then the group and phase velocity of our wave should be the same, right? The answer is: we are and we aren’t. It all depends on what you mean by ‘adding’ waves. I know you don’t like that answer, but that’s the way it is, really. 🙂

Let me make a small digression here that will make you feel even more confused. You know – or you should know – that the sine and the cosine function are the same except for a phase difference of 90 degrees: sinθ = cos(θ − π/2). Now, at the same time, multiplying something with i amounts to a rotation by 90 degrees.

Hence, in order to sort of visualize what our e^(i(kx − ωt)) function really looks like, we may want to super-impose the two graphs and think of something like this:

[Figure: the real (cosine) and imaginary (sine) components of e^(i(kx − ωt)) super-imposed, with the imaginary component rotated 90 degrees out of the plane of the real one]

You’ll have to admit that, when you see this, our formulas for the group or phase velocity, or our v = f·λ relation, no longer make much sense, do they? 🙂

Having said that, that 1/2 factor is and remains puzzling, and there must be some logical reason for it. For example, it also pops up in the Uncertainty Relations:

Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2

So we have ħ/2 in both, not ħ. Why do we need to halve the quantum of action here? How do we solve all these paradoxes? It’s easy to see how: the apparent contradiction (i.e. the different group and phase velocity) gets solved if we’d use the E = m∙v² formula rather than the kinetic energy E = m∙v²/2. But then… What energy formula is the correct one: E = m∙v² or m∙c²? Einstein’s formula is always right, isn’t it? It must be, so let me postpone the discussion a bit by looking at a limit situation. If v = c, then we don’t need to make a choice, obviously. 🙂 So let’s look at that limit situation first. So we’re discussing our zero-mass particle once again, assuming it travels at the speed of light. What do we get?

Well… Measuring time and distance in natural units, so c = 1, we have:

E = m∙c² = m and p = m∙c = m, so we get: E = m = p

Wow! E = m = p! What a weird combination, isn’t it? Well… Yes. But it’s fully OK. [You tell me why it wouldn’t be OK. It’s true we’re glossing over the dimensions here, but natural units are natural units and, hence, the numerical value of c and c² is 1. Just figure it out for yourself.] The point to note is that the E = m = p equality yields extremely simple but also very sensible results. For the group velocity of our e^(i(kx − ωt)) wavefunction, we get:

vg = ∂ω/∂k = ∂[E/ħ]/∂[p/ħ] = ∂E/∂p = ∂p/∂p = 1

So that’s the velocity of our zero-mass particle (remember: the 1 stands for c here, i.e. the speed of light) expressed in natural units once more—just like what we found before. For the phase velocity, we get:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = p/p = 1

Same result! No factor 1/2 here! Isn’t that great? My ‘hidden energy theory’ makes a lot of sense. 🙂

However, if there’s hidden energy, we still need to show where it’s hidden. 🙂 Now that question is linked to the propagation mechanism that’s described by those two equations, which – leaving the 1/2 factor out – now simplify to:

  1. Re(∂ψ/∂t) = −(ħ/m)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (ħ/m)·Re(∇²ψ)

Propagation mechanism? Yes. That’s what we’re talking about here: the propagation mechanism of energy. Huh? Yes. Let me explain in another separate section, so as to improve readability. Before I do, however, let me add another note—for the skeptics among you. 🙂

Indeed, the skeptics among you may wonder whether our zero-mass particle wavefunction makes any sense at all, and they should do so for the following reason: if x = 0 at t = 0, and it’s traveling at the speed of light, then x(t) = t. Always. So if E = m = p, the argument of our wavefunction becomes E·t – p·x = E·t – E·t = 0! So what’s that? The proper time of our zero-mass particle is zero—always and everywhere!?

Well… Yes. That’s why our zero-mass particle – as a point-like object – does not really exist. What we’re talking about is energy itself, and its propagation mechanism. 🙂

While I am sure that, by now, you’re very tired of my rambling, I beg you to read on. Frankly, if you got as far as you have, then you should really be able to work yourself through the rest of this post. 🙂 And I am sure that – if anything – you’ll find it stimulating! 🙂

The imaginary energy space

Look at the propagation mechanism for the electromagnetic wave in free space, which (for c = 1) is represented by the following two equations:

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = ∇×B

[In case you wonder, these are Maxwell’s equations for free space, so we have no stationary or moving charges around.] See how similar this is to the two equations above? In fact, in my Deep Blue page, I use these two equations to derive the quantum-mechanical wavefunction for the photon (which is not the same as that hypothetical zero-mass particle I introduced above), but I won’t bother you with that here. Just note that the so-called curl operator in the two equations above (∇×) can be related to the Laplacian we’ve used so far (∇²). It’s not the same thing, though: for starters, the curl operator operates on a vector quantity, while the Laplacian operates on a scalar (including complex scalars). But don’t get distracted now. Let’s look at the revised Schrödinger’s equation, i.e. the one without the 1/2 factor:

∂ψ/∂t = i·(ħ/m)·∇²ψ

On the left-hand side, we have a time derivative, so that’s a flow per second. On the right-hand side we have the Laplacian and the i·ħ/m factor. Now, written like this, Schrödinger’s equation really looks exactly the same as the general diffusion equation, which is written as: ∂φ/∂t = D·∇²φ, except for the imaginary unit, which makes it clear we’re getting two equations for the price of one here, rather than one only! 🙂 The point is: we may now look at that ħ/m factor as a diffusion constant, because it does exactly the same thing as the diffusion constant D in the diffusion equation ∂φ/∂t = D·∇²φ, i.e.:

  1. As a constant of proportionality, it quantifies the relationship between both derivatives.
  2. As a physical constant, it ensures the dimensions on both sides of the equation are compatible.

So the diffusion constant for Schrödinger’s equation is ħ/m. What is its dimension? That’s easy: (N·m·s)/(N·s²/m) = m²/s. [Remember: 1 N = 1 kg·m/s², so 1 kg = 1 N·s²/m.] But then we multiply it with the Laplacian, so that’s something expressed per square meter, so we get something per second on both sides.
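To make the diffusion picture concrete, here’s a toy numerical sketch. It propagates a Gaussian wave packet under ∂ψ/∂t = i·D·∇²ψ, mode by mode in Fourier space, with D standing in for our ħ/m (swap in ħ/2m if you prefer the textbook coefficient). All numbers are illustration units, nothing more:

    import numpy as np

    N, L, D, T = 512, 60.0, 1.0, 2.0
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)        # the wave numbers on the grid

    psi0 = np.exp(-(x + 10)**2) * np.exp(1j * 3.0 * x)  # Gaussian packet, k ≈ 3

    # each mode e^(ikx) just picks up a phase factor e^(−i·D·k²·T)
    psiT = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * D * k**2 * T))

    # the packet's peak moves at the group velocity ∂(D·k²)/∂k = 2·D·k ≈ 6
    shift = x[np.argmax(np.abs(psiT))] - x[np.argmax(np.abs(psi0))]
    print(shift / T)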

Of course, you wonder: what per second? Not sure. That’s hard to say. Let’s continue with our analogy with the heat diffusion equation so as to try to get a better understanding of what’s being written here. Let me give you that heat diffusion equation here. Assuming the heat per unit volume (q) is proportional to the temperature (T) – which is the case when expressing T in degrees Kelvin (K), so we can write q as q = k·T  – we can write it as:

k·(∂T/∂t) = κ·∇²T

So that’s structurally similar to Schrödinger’s equation, and to the two equivalent equations we jotted down above. So we’ve got T (temperature) in the role of ψ here—or, to be precise, in the role of ψ’s real and imaginary part respectively. So what’s temperature? From the kinetic theory of gases, we know that temperature is not just some abstract number: temperature measures the mean (kinetic) energy of the molecules in the gas. That’s why we can confidently state that the heat diffusion equation models an energy flow, both in space as well as in time.

Let me make the point by doing the dimensional analysis for that heat diffusion equation. The time derivative on the left-hand side (∂T/∂t) is expressed in K/s (Kelvin per second). Weird, isn’t it? What’s a Kelvin per second? Well… Think of a Kelvin as some very small amount of energy in some equally small amount of space—think of the space that one molecule needs, and its (mean) energy—and then it all makes sense, doesn’t it?

However, in case you find that a bit difficult, just work out the dimensions of all the other constants and variables. The constant in front (k) makes sense of it. That coefficient (k) is the (volume) heat capacity of the substance, which is expressed in J/(m³·K). So the dimension of the whole thing on the left-hand side (k·∂T/∂t) is J/(m³·s), so that’s energy (J) per cubic meter (m³) and per second (s). Nice, isn’t it? What about the right-hand side? On the right-hand side we have the Laplacian operator – i.e. ∇² = ∇·∇, with ∇ = (∂/∂x, ∂/∂y, ∂/∂z) – operating on T. The Laplacian operator, when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it’s operating on T, so the dimension of ∇²T is K/m². Again, that doesn’t tell us very much (what’s the meaning of a Kelvin per square meter?) but we multiply it by the thermal conductivity (κ), whose dimension is W/(m·K) = J/(m·s·K). Hence, the dimension of the product is the same as the left-hand side: J/(m³·s). So that’s OK again, as energy (J) per cubic meter (m³) and per second (s) is definitely something we can associate with an energy flow.

In fact, we can play with this. We can bring k from the left- to the right-hand side of the equation, for example. The dimension of κ/k is m²/s (check it!), and multiplying that by K/m² (i.e. the dimension of ∇²T) gives us some quantity expressed in Kelvin per second, and so that’s the same dimension as that of ∂T/∂t. Done!
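If you don’t want to check it by hand, a units package will do it for you. The same pint sketch as before (the numerical values are arbitrary – only the dimensions matter here):

    import pint

    u = pint.UnitRegistry()
    kappa = 1.0 * u.watt / (u.meter * u.kelvin)        # thermal conductivity κ: W/(m·K)
    k_vol = 1.0 * u.joule / (u.meter**3 * u.kelvin)    # volume heat capacity k: J/(m³·K)

    print((kappa / k_vol).to(u.meter**2 / u.second))   # κ/k comes out in m²/s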

In fact, we’ve got two different ways of writing Schrödinger’s diffusion equation. We can write it as ∂ψ/∂t = i·(ħ/m)·∇²ψ or, else, we can write it as ħ·∂ψ/∂t = i·(ħ²/m)·∇²ψ. Does it matter? I don’t think it does. The dimensions come out OK in both cases. However, interestingly, if we do a dimensional analysis of the ħ·∂ψ/∂t = i·(ħ²/m)·∇²ψ equation, we get joule on both sides. Interesting, isn’t it? The key question, of course, is: what is it that is flowing here?

I don’t have a very convincing answer to that, but the answer I have is interesting—I think. 🙂 Think of the following: we can multiply Schrödinger’s equation with whatever we want, and then we get all kinds of flows. For example, if we multiply both sides with 1/(m²·s) or 1/(m³·s), we get an equation expressing the energy conservation law, indeed! [And you may want to think about the minus sign on the right-hand side of Schrödinger’s equation now, because it makes much more sense now!]

We could also multiply both sides with s, so then we get J·s on both sides, i.e. the dimension of physical action (J·s = N·m·s). So then the equation expresses the conservation of action! Huh? Yes. Let me re-phrase that: then it expresses the conservation of angular momentum—as you’ll surely remember that the dimension of action and angular momentum are the same. 🙂

And then we can divide both sides by m, so then we get N·s on both sides, so that’s momentum. So then Schrödinger’s equation embodies the momentum conservation law.

Isn’t it just wonderful? Schrödinger’s equation packs all of the conservation laws! 🙂 The only catch is that it flows back and forth from the real to the imaginary space, using that propagation mechanism as described in those two equations.

Now that is really interesting, because it does provide an explanation – as fuzzy as it may seem