# Wavefunctions, perspectives, reference frames, representations and symmetries

Ouff! This title is quite a mouthful, isn’t it? 🙂 So… What’s the topic of the day? Well… In our previous posts, we developed a few key ideas in regard to a possible physical interpretation of the (elementary) wavefunction. It’s been an interesting excursion, and I summarized it in another pre-publication paper on the open arXiv.org site.

In my humble view, one of the toughest issues to deal with when thinking about geometric (or physical) interpretations of the wavefunction is the fact that a wavefunction does not seem to obey the classical 360° symmetry in space. In this post, I want to muse a bit about this and show that… Well… It does and it doesn’t. It’s got to do with what happens when you change from one representational base (or representation, tout court) to another, which is… Well… Like changing the reference frame but, at the same time, also more than just a change of the reference frame, and so that explains the weird stuff (like that 720° symmetry of the amplitudes for spin-1/2 particles, for example).

I should warn you before you start reading: I’ll basically just pick up some statements from my paper (and previous posts) and develop some more thoughts on them. As a result, this post may not be very well structured. Hence, you may want to read the mentioned paper first.

### The reality of directions

Huh? The reality of directions? Yes. I warned you. This post may cause brain damage. 🙂 The whole argument revolves around a thought experiment, but one whose results have been verified in zillions of experiments in university student labs so… Well… We do not doubt the results and, therefore, we do not doubt the basic mathematical results: we just want to try to understand them better.

So what is the set-up? Well… In the illustration below (Feynman, III, 6-3), Feynman compares the physics of two situations involving rather special beam splitters. Feynman calls them modified or “improved” Stern-Gerlach apparatuses. The apparatus basically splits and then re-combines the two new beams along the z-axis. It is also possible to block one of the beams, so we filter out only particles with their spin up or, alternatively, with their spin down. Spin (or angular momentum or the magnetic moment) as measured along the z-axis, of course. I should immediately add: we’re talking the z-axis of the apparatus here.

The two situations involve a different relative orientation of the apparatuses: in (a), the angle is 0°, while in (b) we have a (right-handed) rotation of 90° about the z-axis. He then proves, using geometry and logic only, that the probabilities and, therefore, the magnitudes of the amplitudes (denoted by C+ and C− and C′+ and C′− in the S and T representation respectively) must be the same, but the amplitudes must have different phases, noting, in his typical style mixing academic and colloquial language, that “there must be some way for a particle to tell that it has turned a corner in (b).”

The various interpretations of what actually happens here may shed some light on the heated discussions on the reality of the wavefunction, and of quantum states. In fact, I should note that Feynman’s argument revolves around quantum states. To be precise, the analysis is focused on two-state systems only, and the wavefunction, which captures a continuum of possible states, so to speak, is introduced only later. However, we may look at the amplitude for a particle to be in the up or down state as a wavefunction and, therefore (but do note that’s my humble opinion once more), the analysis is actually not all that different.

We know, from theory and experiment, that the amplitudes are different. For example, for the given difference in the relative orientation of the two apparatuses (90°), we know that the amplitudes are given by C′+ = e^(i·φ/2)·C+ = e^(i·π/4)·C+ and C′− = e^(−i·φ/2)·C− = e^(−i·π/4)·C− respectively (the amplitude to go from the down to the up state, or vice versa, is zero). Hence, yes, we (not the particle, Mr. Feynman!) know that, in (b), the electron has, effectively, turned a corner.
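These phase factors are easy to check numerically. The snippet below is just my own sanity check (the amplitude values are arbitrary, made-up numbers): the 90° rotation shifts the phases of the up and down amplitudes in opposite directions, but leaves the probabilities untouched.

```python
import cmath

# Hedged illustration: a 90° rotation about the z-axis multiplies the up and
# down amplitudes by e^(i·phi/2) and e^(-i·phi/2) respectively (phi = pi/2).
phi = cmath.pi / 2
C_up, C_down = 0.6, 0.8   # arbitrary example amplitudes, |C+|^2 + |C-|^2 = 1

C_up_new = cmath.exp(1j * phi / 2) * C_up
C_down_new = cmath.exp(-1j * phi / 2) * C_down

# Different phases, same probabilities:
assert abs(abs(C_up_new) ** 2 - abs(C_up) ** 2) < 1e-12
assert abs(abs(C_down_new) ** 2 - abs(C_down) ** 2) < 1e-12
```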

The more subtle question here is the following: is the reality of the particle in the two set-ups the same? Feynman, of course, stays away from such philosophical questions. He just notes that, while “(a) and (b) are different”, “the probabilities are the same”. He refrains from making any statement on the particle itself: is it or is it not the same? The common sense answer is obvious: of course, it is! The particle is the same, right? In (b), it just took a turn, so it is just going in some other direction. That’s all.

However, common sense is seldom a good guide when thinking about quantum-mechanical realities. Also, from a more philosophical point of view, one may argue that the reality of the particle is not the same: something might (or must) have happened to the electron because, when everything is said and done, the particle did take a turn in (b). It did not in (a). [Note that the difference between “might” and “must” in the previous phrase may well sum up the difference between a deterministic and a non-deterministic world view but… Well… This discussion is going to be way too philosophical already, so let’s refrain from inserting new language here.]

Let us think this through. The (a) and (b) set-up are, obviously, different but… Wait a minute… Nothing is obvious in quantum mechanics, right? How can we experimentally confirm that they are different?

Huh? I must be joking, right? You can see they are different, right? No. I am not joking. In physics, two things are different if we get different measurement results. [That’s a bit of a simplified view of the ontological point of view of mainstream physicists, but you will have to admit I am not far off.] So… Well… We can’t see those amplitudes and so… Well… If we measure the same thing (same probabilities, remember?) why are they different? Think of this: if we look at the two beam splitters as one single tube (an ST tube, we might say), then all we did in (b) was bend the tube. Pursuing the logic that says our particle is still the same even when it takes a turn, we could say the tube is still the same, despite us having wrenched it over a 90° corner.

Now, I am sure you think I’ve just gone nuts, but just try to stick with me a little bit longer. Feynman actually acknowledges the same: we need to experimentally prove (a) and (b) are different. He does so by getting a third apparatus (U) in, as shown below, whose relative orientation to T is the same in both (a) and (b), so there is no difference there.

Now, the axis of U is not the z-axis: it is the x-axis in (a), and the y-axis in (b). So what? Well… I will quote Feynman here, not (only) because his words are more important than mine but also because every word matters here:

“The two apparatuses in (a) and (b) are, in fact, different, as we can see in the following way. Suppose that we put an apparatus in front of S which produces a pure +x state. Such particles would be split into +z and −z beams in S, but the two beams would be recombined to give a +x state again at P1, the exit of S. The same thing happens again in T. If we follow T by a third apparatus U, whose axis is in the +x direction, as shown in (a), all the particles would go into the + beam of U. Now imagine what happens if T and U are swung around together by 90° to the positions shown in (b). Again, the T apparatus puts out just what it takes in, so the particles that enter U are in a +x state with respect to S. But U now analyzes for the +y state with respect to S, which is different. By symmetry, we would now expect only one-half of the particles to get through.”

I should note that (b) shows the U apparatus wide open so… Well… I must assume that’s a mistake (and should alert the current editors of the Lectures to it): Feynman’s narrative tells us we should also imagine it with the minus channel shut. In that case, it should, effectively, filter approximately half of the particles out, while they all get through in (a). So that’s a measurement result which shows the direction, as we see it, makes a difference.
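That one-half prediction follows from the standard rule for spin-1/2 filters: the probability of passing an analyzer whose axis is tilted by an angle θ relative to the preparation axis is cos²(θ/2). Here is a minimal sketch of that rule (my own illustration, not taken from the Lectures; the spinors are the usual half-angle parametrization):

```python
import math

# Probability for a spin-1/2 particle prepared "up" along one axis to pass a
# filter that selects "up" along an axis tilted by theta (axes in one plane).
def pass_probability(theta):
    # spinor for "up" along z
    up_z = (1 + 0j, 0 + 0j)
    # spinor for "up" along an axis tilted by theta in the x-z plane
    up_tilted = (complex(math.cos(theta / 2)), complex(math.sin(theta / 2)))
    overlap = (up_tilted[0].conjugate() * up_z[0]
               + up_tilted[1].conjugate() * up_z[1])
    return abs(overlap) ** 2

assert abs(pass_probability(0.0) - 1.0) < 1e-12          # aligned: all pass
assert abs(pass_probability(math.pi / 2) - 0.5) < 1e-12  # 90°: half get through
```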

Now, Feynman would be very angry with me (because, as mentioned, he hates philosophers) but I’d say: this experiment proves that a direction is something real. Of course, the next philosophical question then is: what is a direction? I could answer this by pointing to the experiment above: a direction is something that alters the probabilities between the STU tube as set up in (a) versus the STU tube in (b). In fact (but, I admit, that would be pretty ridiculous) we could use the varying probabilities as we wrench this tube over varying angles to define an angle! But… Well… While that’s a perfectly logical argument, I agree it doesn’t sound very sensible.

OK. Next step. What follows may cause brain damage. 🙂 Please abandon all pre-conceived notions and definitions for a while and think through the following logic.

You know this stuff is about transformations of amplitudes (or wavefunctions), right? [And you also want to hear about that special 720° symmetry, right? No worries. We’ll get there.] So the questions all revolve around this: what happens to amplitudes (or the wavefunction) when we go from one reference frame, or representation, as it’s referred to in quantum mechanics, to another?

Well… I should immediately correct myself here: a reference frame and a representation are two different things. They are related but… Well… Different… Quite different. Not same-same but different. 🙂 I’ll explain why later. Let’s go for it.

Before talking representations, let us first think about what we really mean by changing the reference frame. To change it, we first need to answer the question: what is our reference frame? It is a mathematical notion, of course, but then it is also more than that: it is our reference frame. We use it to make measurements. That’s obvious, you’ll say, but let me make a more formal statement here:

The reference frame is given by (1) the geometry (or the shape, if that sounds easier to you) of the measurement apparatus (so that’s the experimental set-up here) and (2) our perspective of it.

If we would want to sound academic, we might refer to Kant and other philosophers here, who told us, 230 years ago, that the mathematical idea of a three-dimensional reference frame is grounded in our intuitive notions of up and down, and left and right. [If you doubt this, think about the necessity of the various right-hand rules and conventions that we cannot do without in math, and in physics.] But so we do not want to sound academic. Let us be practical. Just think about the following. The apparatus gives us two directions:

(1) The up direction, which we associate with the positive direction of the z-axis, and

(2) the direction of travel of our particle, which we associate with the positive direction of the y-axis.

Now, if we have two axes, then the third axis (the x-axis) will be given by the right-hand rule, right? So we may say the apparatus gives us the reference frame. Full stop. So… Well… Everything is relative? Is this reference frame relative? Are directions relative? That’s what you’ve been told, but think about this: relative to what? Here is where the object meets the subject. What’s relative? What’s absolute? Frankly, I’ve started to think that, in this particular situation, we should, perhaps, not use these two terms. I am not saying that our observation of what physically happens here gives these two directions any absolute character but… Well… You will have to admit they are more than just some mathematical construct: when everything is said and done, we will have to admit that these two directions are real, because… Well… They’re part of the reality that we are observing, right? And the third one… Well… That’s given by our perspective, by our right-hand rule, which is… Well… Our right-hand rule.

Of course, now you’ll say: if you think that “relative” and “absolute” are ambiguous terms and that we, therefore, may want to avoid them a bit more, then “real” and its opposite (unreal?) are ambiguous terms too, right? Well… Maybe. What language would you suggest? 🙂 Just stick to the story for a while. I am not done yet. So… Yes… What is their reality? Let’s think about that in the next section.

### Perspectives, reference frames and symmetries

You’ve done some mental exercises already as you’ve been working your way through the previous section, but you’ll need to do plenty more. In fact, they may become physical exercises too: when I first thought about these things (symmetries and, more importantly, asymmetries in space), I found myself walking around the table with some asymmetrical everyday objects and papers with arrows and clocks and other stuff on them, effectively analyzing what right-hand screw, thumb or grip rules actually mean. 🙂

So… Well… I want you to distinguish, just for a while, between the notion of a reference frame (think of the xyz reference frame that comes with the apparatus) and your perspective on it. What’s our perspective on it? Well… You may be looking from the top, or from the side and, if from the side, from the left-hand side or the right-hand side, which, if you think about it, you can only define in terms of the various positive and negative directions of the various axes. 🙂 If you think this is getting ridiculous… Well… Don’t. Feynman himself doesn’t think this is ridiculous, because he starts his own “long and abstract side tour” on transformations with a very simple explanation of how the top and side view of the apparatus are related to the axes (i.e. the reference frame) that come with it. You don’t believe me? This is the very first illustration of his Lecture on this:

He uses it to explain the apparatus (which we don’t do here because you’re supposed to already know how these (modified or improved) Stern-Gerlach apparatuses work). So let’s continue this story. Suppose that we are looking in the positive y-direction, so that’s the direction in which our particle is moving; then we might imagine what it would look like when we would make a 180° turn and look at the situation from the other side, so to speak. We do not change the reference frame (i.e. the orientation) of the apparatus here: we just change our perspective on it. Instead of seeing particles going away from us, into the apparatus, we now see particles coming towards us, out of the apparatus.

What happensâbut that’s not scientific language, of courseâis that left becomes right, and right becomes left. Top still is top, and bottom is bottom. We are looking now in theÂ negativeÂ y-direction, and the positive direction of the x-axisâwhich pointed right when we were looking in the positiveÂ y-directionânow points left. I see you nodding your head nowâbecause you’ve heard about parity inversions, mirror symmetries and what have youâand I hear you say: “That’s the mirror world, right?”

No. It is not. I wrote about this in another post: the world in the mirror is the world in the mirror. We don’t get a mirror image of an object by going around it and looking at its back side. I can’t dwell too much on this (just check that post, and another one that talks about the same), but so don’t try to connect it to the discussions on symmetry-breaking and what have you. Just stick to this story, which is about transformations of amplitudes (or wavefunctions). [If you really want to know (but I know this sounds counterintuitive) the mirror world doesn’t really switch left for right. Your reflection doesn’t do a 180 degree turn: it is just reversed front to back, with no rotation at all. It’s only your brain which mentally adds (or subtracts) the 180 degree turn that you assume must have happened from the observed front to back reversal. So the left to right reversal is only apparent. It’s a common misconception, and… Well… I’ll let you figure this out yourself. I need to move on.] Just note the following:

1. The xyz reference frame remains a valid right-handed reference frame. Of course it does: it comes with our beam splitter, and we can’t change its reality, right? We’re just looking at it from another angle. Our perspective on it has changed.
2. However, if we think of the real and imaginary part of the wavefunction describing the electrons that are going through our apparatus as perpendicular oscillations (as shown below), a cosine and sine function respectively, then our change in perspective might, effectively, mess up our convention for measuring angles.

I am not saying it does. Not now, at least. I am just saying it might. It depends on the plane of the oscillation, as I’ll explain in a few moments. Think of this: we measure angles counterclockwise, right? As shown below… But… Well… If the thing below would be some funny clock going backwards (you’ve surely seen them in a bar or so, right?) then… Well… If they’d be transparent, and you’d go around them, you’d see them as going… Yes… Clockwise. 🙂 [This should remind you of a discussion on real versus pseudo-vectors, or polar versus axial vectors, but… Well… We don’t want to complicate the story here.]

Now, if we would assume this clock represents something real (and, of course, I am thinking of the elementary wavefunction e^(iθ) = cosθ + i·sinθ now) then… Well… Then it will look different when we go around it. When going around our backwards clock above and looking at it from… Well… The back, we’d describe it, naively, as… Well… Think! What’s your answer? Give me the formula! 🙂

[…]

We’d see it as e^(−iθ) = cos(−θ) + i·sin(−θ) = cosθ − i·sinθ, right? The hand of our clock now goes clockwise, so that’s the opposite direction of our convention for measuring angles. Hence, instead of e^(iθ), we write e^(−iθ), right? So that’s the complex conjugate. So we’ve got a different image of the same thing here. Not good. Not good at all.
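You can let Python do the trigonometry for you. This is nothing more than a numerical restatement of the identity above (nothing in it is specific to electrons): flipping the sign of the angle is exactly complex conjugation.

```python
import cmath
import math

theta = 0.7  # any angle, in radians

front = cmath.exp(1j * theta)   # e^(i·theta): hand turning counterclockwise
back = cmath.exp(-1j * theta)   # e^(-i·theta): the same hand, seen from the back

# The back view is cos(theta) - i·sin(theta), i.e. the complex conjugate:
assert abs(back - complex(math.cos(theta), -math.sin(theta))) < 1e-12
assert abs(back - front.conjugate()) < 1e-12
```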

You’ll say: so what? We can fix this thing easily, right? You don’t need the convention for measuring angles or for the imaginary unit (i) here. This particle is moving, right? So if you’d want to look at the elementary wavefunction as some sort of circularly polarized beam (which, I admit, is very much what I would like to do, but its polarization is rather particular as I’ll explain in a minute), then you just need to define left- and right-handed angles as per the standard right-hand screw rule (illustrated below). To hell with the counterclockwise convention for measuring angles!

You are right. We could use the right-hand rule more consistently. We could, in fact, use it as an alternative convention for measuring angles: we could, effectively, measure them clockwise or counterclockwise depending on the direction of our particle. But… Well… The fact is: we don’t. We do not use that alternative convention when we talk about the wavefunction. Physicists do use the counterclockwise convention all of the time and just jot down these complex exponential functions and don’t realize that, if they are to represent something real, our perspective on the reference frame matters. To put it differently, the direction in which we are looking at things matters! Hence, the direction is not… Well… I am tempted to say… Not relative at all but then… Well… We wanted to avoid that term, right? 🙂

[…]

I guess that, by now, your brain may have suffered from various short-circuits. If not, stick with me a while longer. Let us analyze how our wavefunction model might be impacted by this symmetry, or asymmetry, I should say.

### The flywheel model of an electron

In our previous posts, we offered a model that interprets the real and the imaginary part of the wavefunction as oscillations which each carry half of the total energy of the particle. These oscillations are perpendicular to each other, and the interplay between both is how energy propagates through spacetime. Let us recap the fundamental premises:

1. The dimension of the matter-wave field vector is force per unit mass (N/kg), as opposed to the force per unit charge (N/C) dimension of the electric field vector. This dimension is an acceleration (m/s²), which is the dimension of the gravitational field.
2. We assume this gravitational disturbance causes our electron (or a charged mass in general) to move about some center, combining linear and circular motion. This interpretation reconciles the wave-particle duality: fields interfere but if, at the same time, they do drive a pointlike particle, then we understand why, as Feynman puts it, “when you do find the electron some place, the entire charge is there.” Of course, we cannot prove anything here, but our elegant yet simple derivation of the Compton radius of an electron is… Well… Just nice. 🙂
3. Finally, and most importantly in the context of this discussion, we noted that, in light of the direction of the magnetic moment of an electron in an inhomogeneous magnetic field, the plane which circumscribes the circulatory motion of the electron should also comprise the direction of its linear motion. Hence, unlike an electromagnetic wave, the plane of the two-dimensional oscillation (so that’s the polarization plane, really) cannot be perpendicular to the direction of motion of our electron.

Let’s say some more about the latter point here. The illustrations below (one from Feynman, and the other is just open-source) show what we’re thinking of. The direction of the angular momentum (and the magnetic moment) of an electron, or, to be precise, its component as measured in the direction of the (inhomogeneous) magnetic field through which our electron is traveling, cannot be parallel to the direction of motion. On the contrary, it must be perpendicular to the direction of motion. In other words, if we imagine our electron as spinning around some center (see the illustration on the left-hand side), then the disk it circumscribes (i.e. the plane of the polarization) has to comprise the direction of motion.

Of course, we need to add another detail here. As my readers will know, we do not really have a precise direction of angular momentum in quantum physics. While there is no fully satisfactory explanation of this, the classical explanation, combined with the quantization hypothesis, goes a long way in explaining this: an object with an angular momentum J and a magnetic moment μ that is not exactly parallel to some magnetic field B will not line up: it will precess, and, as mentioned, the quantization of angular momentum may well explain the rest. [Well… Maybe… We have detailed our attempts in this regard in various posts on this (just search for spin or angular momentum on this blog, and you’ll get a dozen posts or so), but these attempts are, admittedly, not fully satisfactory. Having said that, they do go a long way in relating angles to spin numbers.]

The thing is: we do assume our electron is spinning around. If we look from the up-direction only, then it will be spinning clockwise if its angular momentum is down (so its magnetic moment is up). Conversely, it will be spinning counterclockwise if its angular momentum is up. Let us take the up-state. So we have a top view of the apparatus, and we see something like this:

I know you are laughing aloud now but think of your amusement as a nice reward for having stuck to the story so far. Thank you. 🙂 And, yes, do check it yourself by doing some drawings on your table or so, and then look at them from various directions as you walk around the table as (I am not ashamed to admit this) I did when thinking about this. So what do we get when we change the perspective? Let us walk around it, counterclockwise, let’s say, so we’re measuring our angle of rotation as some positive angle. Walking around it, in whatever direction, clockwise or counterclockwise, doesn’t change the counterclockwise direction of our… Well… That weird object that might (just might) represent an electron that has its spin up and that is traveling in the positive y-direction.

When we look in the direction of propagation (so that’s from left to right as you’re looking at this page), and we abstract away from its linear motion, then we could, vaguely, describe this by some wrenched e^(iθ) = cosθ + i·sinθ function, right? The x- and y-axes of the apparatus may be used to measure the cosine and sine components respectively.

Let us keep looking from the top but walk around it, rotating ourselves over a 180° angle so we’re looking in the negative y-direction now. As I explained in one of those posts on symmetries, our mind will want to switch to a new reference frame: we’ll keep the z-axis (up is up, and down is down), but we’ll want the positive direction of the x-axis to… Well… Point right. And we’ll want the y-axis to point away, rather than towards us. In short, we have a transformation of the reference frame here: z′ = z, y′ = −y, and x′ = −x. Mind you, this is still a regular right-handed reference frame. [That’s the difference with a mirror image: a mirrored right-handed reference frame is no longer right-handed.] So, in our new reference frame, that we choose to coincide with our perspective, we will now describe the same thing as some −cosθ − i·sinθ = −e^(iθ) function. Of course, −cosθ = cos(θ + π) and −sinθ = sin(θ + π) so we can write this as:

âcosÎ¸ âÂ iÂˇsinÎ¸ =Â cos(Î¸ +Â Ď) +Â iÂˇsinÎ¸ =Â eiÂˇ(Î¸+Ď)Â =Â eiĎÂˇeiÎ¸Â = âeiÎ¸.

Sweet! But… Well… First note this is not the complex conjugate: e^(−iθ) = cosθ − i·sinθ ≠ −cosθ − i·sinθ = −e^(iθ). Why is that? Aren’t we looking at the same clock, but from the back? No. The plane of polarization is different. Our clock is more like those in Dalí’s painting: it’s flat. 🙂 And, yes, let me lighten up the discussion with that painting here. 🙂 We need to have some fun while torturing our brain, right?

So, because we assume the plane of polarization is different, we get a −e^(iθ) function instead of an e^(−iθ) function.
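The distinction is easy to verify numerically. The check below is my own, and purely about the complex algebra: flipping both the cosine and the sine amounts to a π phase shift, which is not the same thing as conjugation.

```python
import cmath
import math

theta = 0.7  # any angle that is not a multiple of pi

# Flip both the real (cosine) and imaginary (sine) components:
flipped = complex(-math.cos(theta), -math.sin(theta))

assert abs(flipped - cmath.exp(1j * (theta + cmath.pi))) < 1e-12  # = e^(i·(θ+π))
assert abs(flipped + cmath.exp(1j * theta)) < 1e-12               # = −e^(i·θ)
assert abs(flipped - cmath.exp(-1j * theta)) > 0.1                # ≠ e^(−i·θ)
```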

Let us now think about the e^(i·(θ+π)) function. It’s the same as −e^(iθ) but… Well… We walked around the z-axis taking a full 180° turn, right? So that’s π in radians. So that’s the phase shift here. Hey! Try the following now. Go back and walk around the apparatus once more, but let the reference frame rotate with us, as shown below. So we start left and look in the direction of propagation, and then we start moving about the z-axis (which points out of this page, toward you, as you are looking at this), let’s say by some small angle α. So we rotate the reference frame about the z-axis by α and… Well… Of course, our e^(i·θ) now becomes an e^(i·(θ+α)) function, right? We’ve just derived the transformation coefficient for a rotation about the z-axis, didn’t we? It’s equal to e^(i·α), right? We get the transformed wavefunction in the new reference frame by multiplying the old one by e^(i·α), right? It’s equal to e^(i·α)·e^(i·θ) = e^(i·(θ+α)), right?

Well…

[…]

No. The answer is: no. The transformation coefficient is not e^(i·α) but e^(i·α/2). So we get an additional 1/2 factor in the phase shift.

Huh? Yes. That’s what it is: when we change the representation, by rotating our apparatus over some angle α about the z-axis, then we will, effectively, get a new wavefunction, which will differ from the old one by a phase shift that is equal to only half of the rotation angle.

Huh? Yes. It’s even weirder than that. For a spin down electron, the transformation coefficient is e^(−i·α/2), so we get an additional minus sign in the argument.

Huh? Yes.
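A small sketch of this half-angle rule, under the sign convention used above (up amplitudes pick up e^(i·α/2), down amplitudes e^(−i·α/2)); the amplitude values are arbitrary placeholders of my own:

```python
import cmath

# Rotating the apparatus by alpha about the z-axis shifts the phases of the
# up and down amplitudes by +alpha/2 and -alpha/2: half the rotation angle.
def rotate_z(C_up, C_down, alpha):
    return (cmath.exp(1j * alpha / 2) * C_up,
            cmath.exp(-1j * alpha / 2) * C_down)

C_up, C_down = 1.0, 0.0          # a pure spin-up state, say
new_up, new_down = rotate_z(C_up, C_down, cmath.pi / 2)

# A 90° rotation of the apparatus advances the phase by only 45°:
assert abs(cmath.phase(new_up) - cmath.pi / 4) < 1e-12
```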

I know you are terribly disappointed, but that’s how it is. That’s what hampers an easy geometric interpretation of the wavefunction. Paraphrasing Feynman, I’d say that, somehow, our electron not only knows whether or not it has taken a turn, but it also knows whether or not it is moving away from us or, conversely, towards us.

[…]

But… Hey! Wait a minute! That’s it, right?

What? Well… That’s it! The electron doesn’t know whether it’s moving away or towards us. That’s nonsense. But… Well… It’s like this:

Our e^(i·α) coefficient describes a rotation of the reference frame. In contrast, the e^(i·α/2) and e^(−i·α/2) coefficients describe what happens when we rotate the T apparatus! Now that is a very different proposition.

Right! You got it! Representations and reference frames are different things. Quite different, I’d say: representations are real, reference frames aren’t, but then you don’t like philosophical language, do you? 🙂 But think of it. When we just go about the z-axis, a full 180°, but we don’t touch that T-apparatus, we don’t change reality. When we were looking at the electron while standing left of the apparatus, we watched the electrons going in and moving away from us, and when we go about the z-axis, a full 180°, looking at it from the right-hand side, we see the electrons coming out, moving towards us. But it’s still the same reality. We simply change the reference frame, from xyz to x′y′z′ to be precise: we do not change the representation.

In contrast, when we rotate the T apparatus over a full 180°, our electron now goes in the opposite direction. And whether that’s away from or towards us doesn’t matter: it was going in one direction while traveling through S, and now it goes in the opposite direction, relative to the direction it was going in S, that is.

So what happens, really, when we change the representation, rather than the reference frame? Well… Let’s think about that. 🙂

### Quantum-mechanical weirdness?

The transformation matrix for the amplitude of a system to be in an up or down state (and, hence, presumably, for a wavefunction) for a rotation about the z-axis is the following one:

Feynman derives this matrix in a rather remarkable intellectual tour de force in the 6th of his Lectures on Quantum Mechanics. So that’s pretty early on. He’s actually worried about that himself, apparently, and warns his students that “This chapter is a rather long and abstract side tour, and it does not introduce any idea which we will not also come to by a different route in later chapters. You can, therefore, skip over it, and come back later if you are interested.”

Well… That’s how I approached it. I skipped it, and didn’t worry about those transformations for quite a while. But… Well… You can’t avoid them. In some weird way, they are at the heart of the weirdness of quantum mechanics itself. Let us re-visit his argument. Feynman immediately gets that the whole transformation issue here is just a matter of finding an easy formula for that phase shift. Why? He doesn’t tell us. Lesser mortals like us must just assume that’s how the instinct of a genius works, right? 🙂 So… Well… Because he knows, from experiment, that the coefficient is e^(i·α/2) instead of e^(i·α), he just says the phase shift, which he denotes by λ, must be proportional to the angle of rotation, which he denotes by φ rather than α (so as to avoid confusion with the Euler angle α). So he writes:

λ = m·φ

Initially, he also tries the obvious thing: m should be one, right? So λ = φ, right? Well… No. It can’t be. Feynman shows why that can’t be the case by adding a third apparatus once again, as shown below.

Let me quote him here, as I can’t explain it any better:

“Suppose T is rotated by 360°; then, clearly, it is right back at zero degrees, and we should have C′+ = C+ and C′− = C− or, what is the same thing, e^(i·m·2π) = 1. We get m = 1. [But no!] This argument is wrong! To see that it is, consider that T is rotated by 180°. If m were equal to 1, we would have C′+ = e^(i·π)·C+ = −C+ and C′− = e^(−i·π)·C− = −C−. [Feynman works with states here, instead of the wavefunction of the particle as a whole. I’ll come back to this.] However, this is just the original state all over again. Both amplitudes are just multiplied by −1 which gives back the original physical system. (It is again a case of a common phase change.) This means that if the angle between T and S is increased to 180°, the system would be indistinguishable from the zero-degree situation, and the particles would again go through the (+) state of the U apparatus. At 180°, though, the (+) state of the U apparatus is the (−x) state of the original S apparatus. So a (+x) state would become a (−x) state. But we have done nothing to change the original state; the answer is wrong. We cannot have m = 1. We must have the situation that a rotation by 360°, and no smaller angle, reproduces the same physical state. This will happen if m = 1/2.”

The result, of course, is this weird 720° symmetry. While we get the same physics after a 360° rotation of the T apparatus, we do not get the same amplitudes. We get the opposite (complex) number: C′+ = e^(i·2π/2)·C+ = −C+ and C′− = e^(−i·2π/2)·C− = −C−. That’s OK, because… Well… It’s a common phase shift, so it’s just like changing the origin of time. Nothing more. Nothing less. Same physics. Same reality. But… Well… C′+ = −C+ ≠ C+ and C′− = −C− ≠ C−, right? We only get our original amplitudes back if we rotate the T apparatus two times, so that’s by a full 720 degrees, as opposed to the 360° we’d expect.
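For what it’s worth, the λ = m·φ bookkeeping with m = 1/2 is easy to check numerically. Here’s a minimal Python sketch (my own illustration, not Feynman’s notation):

```python
import cmath

def rotation_factor(phi, m=0.5):
    """Phase factor e^(i·m·phi) picked up by the C+ amplitude when the
    T apparatus is rotated by phi radians (lambda = m·phi)."""
    return cmath.exp(1j * m * phi)

# a 360° rotation flips the sign of the amplitude...
assert abs(rotation_factor(2 * cmath.pi) - (-1)) < 1e-12
# ...and only a 720° rotation brings it back to +1
assert abs(rotation_factor(4 * cmath.pi) - 1) < 1e-12
```

Same physics at 360° (a common sign change is a common phase change), but the amplitudes themselves only come back after 720°.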

Now, space is isotropic, right? So this 720° business doesn’t make sense, right?

Well… It does and it doesn’t. We shouldn’t dramatize the situation. What’s the actual difference between a complex number and its opposite? It’s like x or −x, or t and −t. I’ve said this a couple of times already, and I’ll keep saying it many times more: Nature surely can’t be bothered by how we measure stuff, right? In the positive or the negative direction, that’s just our choice, right? Our convention. So… Well… It’s just like that −e^(iθ) function we got when looking at the same experimental set-up from the other side: our e^(iθ) and −e^(iθ) functions did not describe a different reality. We just changed our perspective. The reference frame. As such, the reference frame isn’t real. The experimental set-up is. And, I know I will anger mainstream physicists with this, the representation is. Yes. Let me say it loud and clear here:

A different representation describes a different reality.

In contrast, a different perspective, or a different reference frame, does not.

### Conventions

While you might have had a lot of trouble going through all of the weird stuff above, the point is: it is not all that weird. We can understand quantum mechanics. And in a fairly intuitive way, really. It’s just that… Well… I think some of the conventions in physics hamper such understanding. Well… Let me be precise: one convention in particular, really. It’s that convention for measuring angles. Indeed, Mr. Leonhard Euler, back in the 18th century, might well be “the master of us all” (as Laplace is supposed to have said) but… Well… He couldn’t foresee how his omnipresent formula, e^(iθ) = cosθ + i·sinθ, would, one day, be used to represent something real: an electron, or any elementary particle, really. If he had known, I am sure he would have noted what I am noting here: Nature can’t be bothered by our conventions. Hence, if e^(iθ) represents something real, then e^(−iθ) must also represent something real. [Because I admire this genius so much, I can’t resist the temptation. Here’s his portrait. He looks kinda funny here, doesn’t he? :-)]

Frankly, he would probably have understood quantum-mechanical theory as easily and instinctively as Dirac, I think, and I am pretty sure he would have noted, and, if he had known about circularly polarized waves, probably agreed to, that alternative convention for measuring angles: we could, effectively, measure angles clockwise or counterclockwise depending on the direction of our particle, as opposed to Euler’s ‘one-size-fits-all’ counterclockwise convention. But so we did not adopt that alternative convention because… Well… We want to keep honoring Euler, I guess. 🙂

So… Well… If we’re going to keep honoring Euler by sticking to that ‘one-size-fits-all’ counterclockwise convention, then I do believe that e^(iθ) and e^(−iθ) represent two different realities: spin up versus spin down.

Yes. In our geometric interpretation of the wavefunction, these are, effectively, two different spin directions. And… Well… These are real directions: we see something different when they go through a Stern-Gerlach apparatus. So it’s not just some convention to count things like 0, 1, 2, etcetera versus 0, −1, −2, etcetera. It’s the same story again: different but related mathematical notions are (often) related to different but related physical possibilities. So… Well… I think that’s what we’ve got here. Think of it. Mainstream quantum math treats all wavefunctions as right-handed but… Well… A particle with up spin is a different particle than one with down spin, right? And, again, Nature surely cannot be bothered about our convention of measuring phase angles clockwise or counterclockwise, right? So… Well… Kinda obvious, right? 🙂

Let me spell out my conclusions here:

1. The angular momentum can be positive or, alternatively, negative: J = +ħ/2 or −ħ/2. [Let me note that this is not obvious. Or less obvious than it seems, at first. In classical theory, you would expect an electron, or an atomic magnet, to line up with the field. Well… The Stern-Gerlach experiment shows they don’t: they keep their original orientation. Well… If the field is weak enough.]

2. Therefore, we would probably like to think that an actual particle, think of an electron, or whatever other particle you’d think of, comes in two variants: right-handed and left-handed. They will, therefore, either consist of (elementary) right-handed waves or, else, (elementary) left-handed waves. An elementary right-handed wave would be written as: ψ(θᵢ) = aᵢ·e^(iθᵢ) = aᵢ·(cosθᵢ + i·sinθᵢ). In contrast, an elementary left-handed wave would be written as: ψ(θᵢ) = aᵢ·e^(−iθᵢ) = aᵢ·(cosθᵢ − i·sinθᵢ). So that’s the complex conjugate.
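The two variants are easy to play with numerically. A minimal Python sketch (my own illustration; the function names are mine):

```python
import cmath

def psi_right(theta, a=1.0):
    # right-handed elementary wave: a·(cos θ + i·sin θ) = a·e^(iθ)
    return a * cmath.exp(1j * theta)

def psi_left(theta, a=1.0):
    # left-handed elementary wave: a·(cos θ − i·sin θ) = a·e^(−iθ)
    return a * cmath.exp(-1j * theta)

theta = 0.7
# the left-handed wave is the complex conjugate of the right-handed one...
assert abs(psi_left(theta) - psi_right(theta).conjugate()) < 1e-12
# ...but both give the same probability density |ψ|² = a²
assert abs(abs(psi_left(theta)) ** 2 - abs(psi_right(theta)) ** 2) < 1e-12
```

Same probabilities, opposite direction of rotation: that is the whole point of treating the conjugate as something real.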

So… Well… Yes, I think complex conjugates are not just some mathematical notion: I believe they represent something real. It’s the usual thing: Nature has shown us that (most) mathematical possibilities correspond to real physical situations so… Well… Here you go. It is really just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! [As for the differences, different polarization plane and dimensions and what have you, I’ve already summed those up, so I won’t repeat myself here.] The point is: if we have two different physical situations, we’ll want to have two different functions to describe them. Think of it like this: why would we have two (yes, I admit, two related) amplitudes to describe the up or down state of the same system, but only one wavefunction for it? You tell me.

[…]

Authors like me are looked down upon by the so-called professional class of physicists. The few who bothered to react to my attempts to make sense of Einstein’s basic intuition in regard to the nature of the wavefunction all said pretty much the same thing: “Whatever your geometric (or physical) interpretation of the wavefunction might be, it won’t be compatible with the isotropy of space. You cannot imagine an object with a 720° symmetry. That’s geometrically impossible.”

Well… Almost three years ago, I wrote the following on this blog: “As strange as it sounds, a spin-1/2 particle needs two full rotations (2×360° = 720°) until it is again in the same state. Now, in regard to that particularity, you’ll often read something like: ‘There is nothing in our macroscopic world which has a symmetry like that.’ Or, worse, ‘Common sense tells us that something like that cannot exist, that it simply is impossible.’ [I won’t quote the site from which I took these quotes, because it is, in fact, the site of a very respectable research center!] Bollocks! The Wikipedia article on spin has this wonderful animation: look at how the spirals flip between clockwise and counterclockwise orientations, and note that it’s only after spinning a full 720 degrees that this ‘point’ returns to its original configuration.”

So… Well… I am still pursuing my original dream which is… Well… Let me re-phrase what I wrote back in January 2015:

Yes, we can actually imagine spin-1/2 particles, and we do not need all that much imagination!

In fact, I am tempted to think that I’ve found a pretty good representation or… Well… A pretty good image, I should say, because… Well… A representation is something real, remember? 🙂

Post scriptum (10 December 2017): Our flywheel model of an electron makes sense, but also leaves many unanswered questions. The most obvious question, perhaps, is: why the up and down state only?

I am not so worried about that question, even if I can’t answer it right away, because… Well… Our apparatus, i.e. the way we measure reality, is set up to measure the angular momentum (or the magnetic moment, to be precise) in one direction only. If our electron is captured by some harmonic (or non-harmonic?) oscillation in multiple dimensions, then it should not be all that difficult to show that its magnetic moment is going to align, somehow, in the same or, alternatively, the opposite direction of the magnetic field it is forced to travel through.

Of course, the analysis for the spin up situation (magnetic moment down) is quite peculiar: if our electron is a mini-magnet, why would it not line up with the magnetic field? We understand the precession of a spinning top in a gravitational field, but… Hey… It’s actually not that different. Try to imagine some spinning top on the ceiling. 🙂 I am sure we can work out the math. 🙂 The electron must be some gyroscope, really: it won’t change direction. In other words, its magnetic moment won’t line up. It will precess, and it can do so in two directions, depending on its state. 🙂 […] At least, that’s what my instinct tells me. I admit I need to work out the math to convince you. 🙂

The second question is more important. If we just rotate the reference frame over 360°, we see the same thing: some rotating object which we, vaguely, describe by some e^(+i·θ) function (to be precise, I should say: by some Fourier sum of such functions) or, if the rotation is in the other direction, by some e^(−i·θ) function (again, you should read: a Fourier sum of such functions). Now, the weird thing, as I tried to explain above, is the following: if we rotate the object itself, over the same 360°, we get a different object: our e^(i·θ) and e^(−i·θ) function (again: think of a Fourier sum, so that’s a wave packet, really) becomes a −e^(±i·θ) thing. We get a minus sign in front of it. So what happened here? What’s the difference, really?

Well… I don’t know. It’s very deep. If I do nothing, and you keep watching me while turning around me, for a full 360°, then you’ll end up where you were when you started and, importantly, you’ll see the same thing. Exactly the same thing: if I was an e^(+i·θ) wave packet, I am still an e^(+i·θ) wave packet now. Or if I was an e^(−i·θ) wave packet, then I am still an e^(−i·θ) wave packet now. Easy. Logical. Obvious, right?

But so now we try something different: I turn around, over a full 360° turn, and you stay where you are. When I am back where I was, looking at you again, so to speak, then… Well… I am not quite the same any more. Or… Well… Perhaps I am, but you see me differently. If I was an e^(+i·θ) wave packet, then I’ve become a −e^(+i·θ) wave packet now. Not hugely different but… Well… That minus sign matters, right? Or if I was a wave packet built up from elementary a·e^(−i·θ) waves, then I’ve become a −a·e^(−i·θ) wave packet now. What happened?

It makes me think of the twin paradox in special relativity. We know it’s a paradox, so that’s an apparent contradiction only: we know which twin stayed on Earth and which one traveled, because of the forces of acceleration and deceleration on the traveling twin. The one who stays on Earth does not experience any acceleration or deceleration. Is it the same here? I mean… The one who’s turning around must experience some force.

Can we relate this to the twin paradox? Maybe. Note that a minus sign in front of the e^(±i·θ) functions amounts to a minus sign in front of both the sine and cosine components. So… Well… The negative of a sine or cosine is just the sine or cosine with a phase shift of 180°: −cosθ = cos(θ ± π) and −sinθ = sin(θ ± π). Now, adding or subtracting a common phase factor to/from the argument of the wavefunction amounts to changing the origin of time. So… Well… I do think the twin paradox and this rather weird business of 360° and 720° symmetries are, effectively, related. 🙂
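Just to convince ourselves of the trigonometry, here is a two-line numerical check (a sketch of my own, nothing more):

```python
import math

# a minus sign on both components is just a 180° phase shift: if θ = ω·t,
# then θ + π corresponds to shifting the origin of time by π/ω
for theta in [0.0, 0.4, 1.3, 2.9, 5.5]:
    assert abs(-math.cos(theta) - math.cos(theta + math.pi)) < 1e-12
    assert abs(-math.sin(theta) - math.sin(theta + math.pi)) < 1e-12
```

So the overall minus sign is, indeed, nothing but a common 180° phase shift of both components.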

# The reality of the wavefunction

If you haven’t read any of my previous posts on the geometry of the wavefunction (this link goes to the most recent one of them), then don’t attempt to read this one. It brings too much stuff together to be comprehensible. In fact, I am not even sure if I am going to understand what I write myself. 🙂 [OK. Poor joke. Acknowledged.]

Just to recap the essentials, I part ways with mainstream physicists in regard to the interpretation of the wavefunction. For mainstream physicists, the wavefunction is just some mathematical construct. Nothing real. Of course, I acknowledge mainstream physicists have very good reasons for that, but… Well… I believe that, if there is interference, or diffraction, then something must be interfering, or something must be diffracting. I won’t dwell on this because… Well… I have done that too many times already. My hypothesis is that the wavefunction is, in effect, a rotating field vector, so it’s just like the electric field vector of a (circularly polarized) electromagnetic wave (illustrated below).

Of course, it must be different, and it is. First, the (physical) dimension of the field vector of the matter-wave must be different. So what is it? Well… I am tempted to associate the real and imaginary component of the wavefunction with a force per unit mass (as opposed to the force per unit charge dimension of the electric field vector). Of course, the newton/kg dimension reduces to the dimension of acceleration (m/s²), so that’s the dimension of a gravitational field.

Second, I am also tempted to think that this gravitational disturbance causes an electron (or any matter-particle) to move about some center, and I believe it does so at the speed of light. In contrast, electromagnetic waves do not involve any mass: they’re just an oscillating field. Nothing more. Nothing less. Why would I believe there must still be some pointlike particle involved? Well… As Feynman puts it: “When you do find the electron some place, the entire charge is there.” (Feynman’s Lectures, III-21-4) So… Well… That’s why.

The third difference is one that I thought of only recently: the plane of the oscillation cannot be perpendicular to the direction of motion of our electron, because then we can’t explain the direction of its magnetic moment, which is either up or down when traveling through a Stern-Gerlach apparatus. I am more explicit on that in the mentioned post, so you may want to check there. 🙂

I wish I mastered the software to make animations such as the one above (for which I have to credit Wikipedia), but I don’t. You’ll just have to imagine it. That’s great mental exercise, so… Well… Just try it. 🙂

Let’s now think about rotating reference frames and transformations. If the z-direction is the direction along which we measure the angular momentum (or the magnetic moment), then the up-direction will be the positive z-direction. We’ll also assume the y-direction is the direction of travel of our elementary particle, and let’s just consider an electron here, so we’re talking about something real. 🙂 So we’re in the reference frame that Feynman used to derive the transformation matrices for spin-1/2 particles (or for two-state systems in general). His ‘improved’ Stern-Gerlach apparatus, which I’ll refer to as a beam splitter, illustrates this geometry.

So I think the magnetic moment, or the angular momentum, really, comes from an oscillatory motion in the x- and y-directions. One is the real component (the cosine function) and the other is the imaginary component (the sine function), as illustrated below.

So the crucial difference with the animations above (which illustrate a left- and a right-handed polarization respectively) is that we, somehow, need to imagine the circular motion is not in the xz-plane, but in the yz-plane. Now what happens if we change the reference frame?

Well… That depends on what you mean by changing the reference frame. Suppose we’re looking in the positive y-direction, so that’s the direction in which our particle is moving. Then we might imagine how things would look if we made a 180° turn and looked at the situation from the other side, so to speak. Now, I did a post on that earlier this year, which you may want to re-read. When we’re looking at the same thing from the other side (from the back side, so to speak), we will want to use our familiar reference frame. So we will want to keep the z-axis as it is (pointing upwards), and we will also want to define the x- and y-axes using the familiar right-hand rule. So our new x-axis and our new y-axis will be the same as the old x- and y-axes but with the sign reversed. In short, we’ll have the following mini-transformation: (1) z′ = z, (2) x′ = −x, and (3) y′ = −y.

So… Well… If we’re effectively looking at something real that was moving along the y-axis, then it will now still be moving along the y′-axis, but in the negative direction. Hence, our elementary wavefunction e^(iθ) = cosθ + i·sinθ will transform into cos(−θ) + i·sin(−θ) = cosθ − i·sinθ. It’s the same wavefunction. We just… Well… We just changed our reference frame. We didn’t change reality.
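The sign flip of the rotation angle is, mathematically, just complex conjugation. A quick Python check of that identity (my illustration, using the elementary wavefunction only):

```python
import cmath, math

def psi(theta):
    # elementary wavefunction e^(iθ) = cosθ + i·sinθ
    return cmath.exp(1j * theta)

# viewed from the other side, the angle is measured in the opposite
# direction: θ → −θ, so e^(iθ) becomes cosθ − i·sinθ
for theta in [0.3, 1.1, 2.5]:
    flipped = psi(-theta)
    assert abs(flipped - psi(theta).conjugate()) < 1e-12
    assert abs(flipped - (math.cos(theta) - 1j * math.sin(theta))) < 1e-12
```

Same reality, conjugate description: that is all the perspective change does.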

Now you’ll cry wolf, of course, because we just went through all that transformational stuff in our last post. To be specific, we presented the following transformation matrix for a rotation about the z-axis:

Now, if φ is equal to 180° (so that’s π in radians), then these e^(iφ/2) and e^(−iφ/2) factors are equal to e^(iπ/2) = +i and e^(−iπ/2) = −i respectively. Hence, our e^(iθ) = cosθ + i·sinθ becomes…

Hey! Wait a minute! We’re talking about two very different things here, right? The e^(iθ) = cosθ + i·sinθ is an elementary wavefunction which, we presume, describes some real-life particle (we talked about an electron with its spin in the up-direction), while these transformation matrices are to be applied to amplitudes describing… Well… Either an up- or a down-state, right?

Right. But… Well… Is it so different, really? Suppose our e^(iθ) = cosθ + i·sinθ wavefunction describes an up-electron. Then we still have to apply that e^(iφ/2) = e^(iπ/2) = +i factor, right? So we get a new wavefunction that will be equal to e^(iπ/2)·e^(iθ) = +i·e^(iθ) = i·cosθ + i²·sinθ = −sinθ + i·cosθ, right? So how can we reconcile that with the cosθ − i·sinθ function we thought we’d find?

We can’t. So… Well… Either my theory is wrong or… Well… Feynman can’t be wrong, can he? I mean… It’s not only Feynman here. We’re talking all mainstream physicists here, right?

Right. But think of it. Our electron in that thought experiment does, effectively, make a turn of 180°, so it is going in the other direction now! That’s more than just… Well… Going around the apparatus and looking at stuff from the other side.

Hmm… Interesting. Let’s think about the difference between the −sinθ + i·cosθ and cosθ − i·sinθ functions. First, note that they will give us the same probabilities: the square of the absolute value of both complex numbers is the same. [It’s equal to 1 because we didn’t bother to put a coefficient in front.] Secondly, we should note that the sine and cosine functions are essentially the same: they just differ by a phase factor: cosθ = sin(θ + π/2) and −sinθ = cos(θ + π/2). Let’s see what we can do with that. We can write the following, for example:

−sinθ + i·cosθ = cos(θ + π/2) + i·sin(θ + π/2) = e^(i·(θ + π/2))

Well… I guess that’s something at least! The e^(iθ) and e^(i·(θ + π/2)) functions differ by a 90° phase shift, while the cosθ − i·sinθ = e^(−iθ) function reverses the direction of rotation altogether, so… Well… That’s what it takes to reverse the direction of an electron. 🙂 Let us mull over that in the coming days. As I mentioned, these more philosophical topics are not easily exhausted. 🙂
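Because this phase-factor algebra is easy to get wrong, here is a small numerical sanity check of what multiplying by that e^(iπ/2) = +i factor actually does (a sketch of my own):

```python
import cmath, math

for theta in [0.2, 1.0, 2.7]:
    z = cmath.exp(1j * theta)                  # the original e^(iθ)
    rotated = cmath.exp(1j * math.pi / 2) * z  # apply the e^(iπ/2) = +i factor
    # multiplying by +i shifts the phase by 90°...
    assert abs(rotated - cmath.exp(1j * (theta + math.pi / 2))) < 1e-12
    # ...and leaves the probability |ψ|² untouched
    assert abs(abs(rotated) ** 2 - abs(z) ** 2) < 1e-12
```

So the transformation shifts the phase but keeps the probabilities, which is exactly the point made above.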

# The geometry of the wavefunction, electron spin and the form factor

Our previous posts showed how a simple geometric interpretation of the elementary wavefunction yielded the (Compton scattering) radius of an elementary particle (for an electron, at least: for the proton, we only got the order of magnitude right, but then a proton is not an elementary particle). We got lots of other interesting equations as well… But… Well… When everything is said and done, it’s that equivalence between the E = m·a²·ω² and E = m·c² relations that we need to be more specific about.

Indeed, I’ve been ambiguous here and there, oscillating between various interpretations, so to speak. 🙂 In my own mind, I refer to my unanswered questions, or my ambiguous answers to them, as the form factor problem. So… Well… That explains the title of my post. But so… Well… I do want to be somewhat more conclusive in this post. So let’s go and see where we end up. 🙂

To help focus our mind, let us recall the metaphor of the V-2 perpetuum mobile, as illustrated below. With permanently closed valves, the air inside the cylinders compresses and decompresses as the pistons move up and down. It provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring: it is described by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs. Of course, instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft, but that’s not fancy enough for me. 🙂

At first sight, the analogy between our flywheel model of an electron and the V-twin engine seems to be complete: the 90-degree angle of our V-2 engine makes it possible to perfectly balance the pistons and we may, therefore, think of the flywheel as a (symmetric) rotating mass, whose angular momentum is given by the product of the angular frequency and the moment of inertia: L = I·ω. Of course, the moment of inertia (aka the angular mass) will depend on the form (or shape) of our flywheel:

1. I = m·a² for a rotating point mass m or, what amounts to the same, for a circular hoop of mass m and radius r = a.
2. For a rotating (uniformly solid) disk, we must add a 1/2 factor: I = m·a²/2.

How can we relate those formulas to the E = m·a²·ω² formula? The kinetic energy that is being stored in a flywheel is equal to E_kinetic = I·ω²/2, so that is only half of the E = m·a²·ω² product if we substitute I = m·a². [For a disk, we get a factor 1/4, so that’s even worse!] However, our flywheel model of an electron incorporates potential energy too. In fact, the E = m·a²·ω² formula just adds the (kinetic and potential) energy of two oscillators: we do not really consider the energy in the flywheel itself because… Well… The essence of our flywheel model of an electron is not the flywheel: the flywheel just transfers energy from one oscillator to the other, but so… Well… We don’t include it in our energy calculations. The essence of our model is that two-dimensional oscillation which drives the electron, and which is reflected in Einstein’s E = m·c² formula. That two-dimensional oscillation (the a²·ω² = c² equation, really) tells us that the resonant (or natural) frequency of the fabric of spacetime is given by the speed of light, but measured in units of a. [If you don’t quite get this, re-write the a²·ω² = c² equation as ω = c/a: the radius of our electron appears as a natural distance unit here.]

Now, we were extremely happy with this interpretation not only because of the key results mentioned above, but also because it has lots of other nice consequences. Think of our probabilities as being proportional to energy densities, for example, and all of the other stuff I describe in my published paper on this. But there is even more on the horizon: a follower of this blog (a reader with an actual PhD in physics, for a change) sent me an article analyzing elementary particles as tiny black holes because… Well… If our electron is effectively spinning around, then its tangential velocity is equal to v = a·ω = c. Now, recent research suggests black holes are also spinning at (nearly) the speed of light. Interesting, right? However, in order to understand what she’s trying to tell me, I’ll first need to get a better grasp of general relativity, so I can relate what I’ve been writing here and in previous posts to the Schwarzschild radius and other stuff.

Let me get back to the lesson here. In the reference frame of our particle, the wavefunction really looks like the animation below: it has two components, and the amplitude of the two-dimensional oscillation is equal to a, which we calculated as a = ħ/(m·c) = 3.8616×10⁻¹³ m, so that’s the (reduced) Compton scattering radius of an electron.

In my original article on this, I used a more complicated argument involving the angular momentum formula, but I now prefer a more straightforward calculation:

c = a·ω = a·E/ħ = a·m·c²/ħ ⇔ a = ħ/(m·c)
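Plugging in the standard CODATA values for ħ, the electron mass and c (the constants below are standard values; the script itself is just my own sanity check):

```python
hbar = 1.054571817e-34    # J·s (reduced Planck constant)
m_e  = 9.1093837015e-31   # kg (electron rest mass)
c    = 2.99792458e8       # m/s (speed of light)

a = hbar / (m_e * c)            # the reduced Compton radius
omega = c / a                   # from c = a·ω
E = m_e * a ** 2 * omega ** 2   # E = m·a²·ω²

print(a)  # ≈ 3.8616e-13 m
# the two-dimensional oscillator energy equals the rest energy m·c²
assert abs(E - m_e * c ** 2) < 1e-25
```

So the a = ħ/(m·c) radius and the E = m·a²·ω² = m·c² equivalence are, numerically, one and the same statement once ω = c/a.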

The question is: what is that rotating arrow? I’ve been vague and not so vague on this. The thing is: I can’t prove anything in this regard. But my hypothesis is that it is, in effect, a rotating field vector, so it’s just like the electric field vector of a (circularly polarized) electromagnetic wave (illustrated below).

There are a number of crucial differences though:

1. The (physical) dimension of the field vector of the matter-wave is different: I associate the real and imaginary component of the wavefunction with a force per unit mass (as opposed to the force per unit charge dimension of the electric field vector). Of course, the newton/kg dimension reduces to the dimension of acceleration (m/s²), so that’s the dimension of a gravitational field.
2. I do believe this gravitational disturbance, so to speak, does cause an electron to move about some center, and I believe it does so at the speed of light. In contrast, electromagnetic waves do not involve any mass: they’re just an oscillating field. Nothing more. Nothing less. In contrast, as Feynman puts it: “When you do find the electron some place, the entire charge is there.” (Feynman’s Lectures, III-21-4)
3. The third difference is one that I thought of only recently: the plane of the oscillation cannot be perpendicular to the direction of motion of our electron, because then we can’t explain the direction of its magnetic moment, which is either up or down when traveling through a Stern-Gerlach apparatus.

I mentioned that in my previous post but, for your convenience, I’ll repeat what I wrote there. The basic idea here is illustrated below (credit for this illustration goes to another blogger on physics). As for the Stern-Gerlach experiment itself, let me refer you to a YouTube video from the Quantum Made Simple site.

The point is: the direction of the angular momentum (and the magnetic moment) of an electron, or, to be precise, its component as measured in the direction of the (inhomogeneous) magnetic field through which our electron is traveling, cannot be parallel to the direction of motion. On the contrary, it is perpendicular to the direction of motion. In other words, if we imagine our electron as spinning around some center, then the plane of the disk it circumscribes will comprise the direction of motion.

However, we need to add an interesting detail here. As you know, we don’t really have a precise direction of angular momentum in quantum physics. [If you don’t know this… Well… Just look at one of my many posts on spin and angular momentum in quantum physics.] Now, we’ve explored a number of hypotheses but, when everything is said and done, a rather classical explanation turns out to be the best: an object with an angular momentum J and a magnetic moment μ (I used boldface because these are vector quantities) that is placed in some magnetic field B will not line up, as you’d expect a tiny magnet to do in a magnetic field. Or not completely, at least: it will precess. I explained that in another post on quantum-mechanical spin, which I advise you to re-read if you want to appreciate the point that I am trying to make here. That post integrates some interesting formulas, and so one of the things on my ‘to do’ list is to prove that these formulas are, effectively, compatible with the electron model we’ve presented in this and previous posts.

Indeed, when one advances a hypothesis like this, it’s not enough to just sort of show that the general geometry of the situation makes sense: we also need to show that the numbers come out alright. So… Well… Whatever we think our electron, or its wavefunction, might be, it needs to be compatible with stuff like the observed precession frequency of an electron in a magnetic field.

Our model also needs to be compatible with the transformation formulas for amplitudes. I’ve been talking about this for quite a while now, and so it’s about time I get going on that.

Last but not least, those articles that relate matter-particles to (quantum) gravity, such as the one I mentioned above, are intriguing too and, hence, whatever hypotheses I advance here, I’d better check them against those more advanced theories too, right? 🙂 Unfortunately, that’s going to take me a few more years of studying… But… Well… I still have many years ahead, I hope. 🙂

Post scriptum: It’s funny how one’s brain keeps working when sleeping. When I woke up this morning, I thought: “But it is that flywheel that matters, right? That’s the energy storage mechanism and also explains how photons possibly interact with electrons. The oscillators drive the flywheel but, without the flywheel, nothing is happening. It is really the transfer of energy, through the flywheel, which explains why our flywheel goes round and round.”

It may or may not be useful to remind ourselves of the math in this regard. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time (as a function of the angle θ) is equal to: d(sin²θ)/dθ = 2·sinθ·d(sinθ)/dθ = 2·sinθ·cosθ. Now, the motion of the second oscillator (just look at that second piston going up and down in the V-2 engine) is given by the sinθ function, which is equal to cos(θ − π/2). Hence, its kinetic energy is equal to sin²(θ − π/2), and how it changes (as a function of θ again) is equal to 2·sin(θ − π/2)·cos(θ − π/2) = −2·cosθ·sinθ = −2·sinθ·cosθ. So here we have our energy transfer: the flywheel organizes the borrowing and returning of energy, so to speak. That’s the crux of the matter.
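The borrowing-and-returning can be verified in a couple of lines of Python (a sketch, using the same sin²θ bookkeeping as above):

```python
import math

def dK1(theta):
    # change of the first oscillator's kinetic energy sin²θ
    return 2 * math.sin(theta) * math.cos(theta)

def dK2(theta):
    # second oscillator lags by 90°: kinetic energy sin²(θ − π/2)
    return 2 * math.sin(theta - math.pi / 2) * math.cos(theta - math.pi / 2)

# whatever one oscillator gains, the other loses: the flywheel just
# shuttles the energy back and forth
for theta in [0.0, 0.4, 1.2, 2.8, 4.9]:
    assert abs(dK1(theta) + dK2(theta)) < 1e-12
```

The two changes sum to zero at every instant, which is exactly the energy transfer described above.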

So… Well… What if the relevant energy formula is E = m·a²·ω²/2 instead of E = m·a²·ω²? What are the implications? Well… We get a √2 factor in our formula for the radius a, as shown below.

Now that is not so nice. For the tangential velocity, we get v = a·ω = √2·c. This is also not so nice. How can we save our model? I am not sure, but here I am thinking of the mentioned precession: the wobbling of our flywheel in a magnetic field. Remember we may think of Jz, the angular momentum or, to be precise, its component in the z-direction (the direction in which we measure it), as the projection of the real angular momentum J. Let me insert Feynman’s illustration here again (Feynman’s Lectures, II-34-3), so you get what I am talking about.

Now, all depends on the angle (θ) between Jz and J, of course. We did a rather obscure post on these angles, but the formulas there come in handy now. Just click the link and review it if and when you’d want to understand the following formula for the magnitude of the presumed actual momentum: J = √(j·(j+1))·ħ. In this particular case (spin-1/2 particles), j is equal to 1/2 (in units of ħ, of course). Hence, J is equal to √0.75 ≈ 0.866. Elementary geometry then tells us cos(θ) = (1/2)/√(3/4) = 1/√3. Hence, θ ≈ 54.73561°. That’s a big angle: larger than the 45° angle we had secretly expected because… Well… The 45° angle has that √2 factor in it: cos(45°) = sin(45°) = 1/√2.
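Re-doing that arithmetic numerically (a minimal sketch, with j expressed in units of ħ):

```python
import math

j = 0.5                                   # spin-1/2, in units of hbar
J = math.sqrt(j * (j + 1))                # magnitude: sqrt(0.75) ≈ 0.866
Jz = 0.5                                  # measured z-component
theta = math.degrees(math.acos(Jz / J))   # angle between Jz and J

assert abs(J - math.sqrt(0.75)) < 1e-12
assert abs(math.cos(math.radians(theta)) - 1 / math.sqrt(3)) < 1e-12
assert abs(theta - 54.73561) < 1e-4       # ≈ 54.74°, not the expected 45°
```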

Hmm… As you can see, there is no easy fix here. Those damn 1/2 factors! They pop up everywhere, don’t they? 🙂 We’ll solve the puzzle. One day… But not today, I am afraid. I’ll call it the form factor problem… Because… Well… It sounds better than the 1/2 or √2 problem, right? 🙂

Note: If you’re into quantum math, you’ll note a = ħ/(m·c) is the reduced Compton scattering radius. The standard Compton scattering radius is equal to a·2π = (2π·ħ)/(m·c) = h/(m·c). It doesn’t solve the √2 problem. Sorry. The form factor problem. 🙂
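Plugging in rounded CODATA values (a sketch; the numerical constants below are my inputs for this check, not values from the post):

```python
import math

hbar = 1.054571817e-34   # J·s
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # m/s

a = hbar / (m_e * c)          # reduced Compton radius
lambda_C = 2 * math.pi * a    # standard Compton wavelength h/(m·c)

assert abs(a - 3.8616e-13) < 1e-16          # ≈ 3.86×10⁻¹³ m
assert abs(lambda_C - 2.4263e-12) < 1e-15   # ≈ 2.43×10⁻¹² m
```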

To be honest, I finished my published paper on all of this with a suggestion that, perhaps, we should think of two circular oscillations, as opposed to linear ones. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation (around any axis) will be some combination of a rotation around the two other axes. Hence, we may want to think of our two-dimensional oscillation as an oscillation of a polar and azimuthal angle. It’s just a thought but… Well… I am sure it’s going to keep me busy for a while. 🙂 They are oscillations, still, so I am not thinking of two flywheels that keep going around in the same direction. No. More like a wobbling object on a spring. Something like the movement of a bobblehead on a spring perhaps. 🙂

# Re-visiting the Complementarity Principle: the field versus the flywheel model of the matter-wave

This post is a continuation of the previous one: it is just going to elaborate the questions I raised in the post scriptum of that post. Let’s first review the basics once more.

### The geometry of the elementary wavefunction

In the reference frame of the particle itself, the geometry of the wavefunction simplifies to what is illustrated below: an oscillation in two dimensions which, viewed together, form a plane that would be perpendicular to the direction of motion. But then our particle doesn’t move in its own reference frame, obviously. Hence, we could be looking at our particle from any direction and we should, presumably, see a similar two-dimensional oscillation. That is interesting because… Well… If we rotate this circle around its center (in whatever direction we’d choose), we get a sphere, right? It’s only when it starts moving that it loses its symmetry. Now, that is very intriguing, but let’s think about that later.

Let’s assume we’re looking at it from some specific direction. Then we presumably have some charge (the green dot) moving about some center, and its movement can be analyzed as the sum of two oscillations (the sine and cosine) which represent the real and imaginary component of the wavefunction respectively, as we observe it, so to speak. [Of course, you’ve been told you can’t observe wavefunctions so… Well… You should probably stop reading this. :-)] We write:

Ď = =Â aÂˇeâiâÎ¸Â =Â aÂˇeâiâEÂˇt/Ä§ = aÂˇcos(âEât/Ä§) + iÂˇaÂˇsin(âEât/Ä§) = aÂˇcos(Eât/Ä§) â iÂˇaÂˇsin(Eât/Ä§)Â

So that’s the wavefunction in the reference frame of the particle itself. When we think of it as moving in some direction (so relativity kicks in), we need to add the p·x term to the argument (θ = E·t − p∙x). It is easy to show this term doesn’t change the argument (θ), because we also get a different value for the energy in the new reference frame: Ev = γ·E0. And so… Well… I’ll refer you to my post on this, in which I show the argument of the wavefunction is invariant under a Lorentz transformation: the way Ev and pv and, importantly, the coordinates x and t relativistically transform ensures the invariance.
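Here is a minimal numerical illustration of that invariance. The natural units (c = ħ = 1) and the boost velocity are my own choices for this sketch: for a particle sitting at the origin of its own frame, the phase Ev·t − pv·x computed in the moving frame equals E0 times the proper time.

```python
import math

# c = hbar = 1 units; E0 is the rest energy, v the (arbitrary) boost velocity.
E0, v = 1.0, 0.6
gamma = 1 / math.sqrt(1 - v**2)

Ev = gamma * E0        # transformed energy
pv = gamma * E0 * v    # transformed momentum (m = E0 in these units)

for t in [0.5, 1.0, 2.0]:
    x = v * t                       # particle position in the moving frame
    phase_moving = Ev * t - pv * x  # theta = E·t − p·x
    tau = t / gamma                 # proper time
    phase_rest = E0 * tau           # theta in the particle's own frame
    assert abs(phase_moving - phase_rest) < 1e-12
```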

In fact, I’ve always wanted to read de Broglie‘s original thesis because I strongly suspect he saw that immediately. If you click this link, you’ll find an author who suggests the same. Having said that, I should immediately add this does not imply there is no need for a relativistic wave equation: the wavefunction is a solution for the wave equation and, yes, I am the first to note the Schrödinger equation has some obvious issues, which I briefly touch upon in one of my other posts, and which is why Schrödinger himself and other contemporaries came up with a relativistic wave equation (Oskar Klein and Walter Gordon got the credit, but others, including Louis de Broglie, also suggested a relativistic wave equation when Schrödinger published his). In my humble opinion, the key issue is not that Schrödinger’s equation is non-relativistic. It’s that 1/2 factor again but… Well… I won’t dwell on that here. We need to move on. So let’s leave the wave equation for what it is and go back to our wavefunction.

You’ll note the argument (or phase) of our wavefunction moves clockwise, or counterclockwise, depending on whether you’re standing in front of or behind the clock. Of course, Nature doesn’t care about where we stand or, to put it differently, whether we measure time clockwise, counterclockwise, in the positive, the negative or whatever direction. Hence, I’ve argued we can have both left- as well as right-handed wavefunctions, as illustrated below (for p ≠ 0). Our hypothesis is that these two physical possibilities correspond to the angular momentum of our electron being either positive or negative: Jz = +ħ/2 or, else, Jz = −ħ/2. [If you’ve read a thing or two about neutrinos, then… Well… They’re kinda special in this regard: they have no charge, and neutrinos and antineutrinos are actually defined by their helicity. But… Well… Let’s stick to trying to describe electrons for a while.]

The line of reasoning that we followed allowed us to calculate the amplitude a. We got a result that tentatively confirms we’re on the right track with our interpretation: we found that a = ħ/(me·c), so that’s the Compton scattering radius of our electron. All good! But we were still a bit stuck, or ambiguous, I should say, on what the components of our wavefunction actually are. Are we really imagining the tip of that rotating arrow is a pointlike electric charge spinning around the center? [Pointlike or… Well… Perhaps we should think of the Thomson radius of the electron here, i.e. the so-called classical electron radius, which is equal to the Compton radius times the fine-structure constant: rThomson = α·rCompton ≈ 3.86×10⁻¹³ m/137 ≈ 2.82×10⁻¹⁵ m.]
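In numbers (a sketch; α ≈ 1/137.036 and the other constants are rounded CODATA values I am supplying here):

```python
hbar = 1.054571817e-34   # J·s
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # m/s
alpha = 1 / 137.035999   # fine-structure constant

r_compton = hbar / (m_e * c)   # reduced Compton radius ≈ 3.86×10⁻¹³ m
r_thomson = alpha * r_compton  # classical electron radius

assert abs(r_thomson - 2.8179e-15) < 1e-18   # ≈ 2.82×10⁻¹⁵ m
```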

So that would be the flywheel model.

In contrast, we may also think the whole arrow is some rotating field vector: something like the electric field vector, with the same or some other physical dimension, like newton per charge unit, or newton per mass unit? So that’s the field model. Now, these interpretations may or may not be compatible, or complementary, I should say. I sure hope they are but… Well… What can we reasonably say about it?

Let us first note that the flywheel interpretation has a very obvious advantage, because it allows us to explain the interaction between a photon and an electron, as I demonstrated in my previous post: the electromagnetic energy of the photon will drive the circulatory motion of our electron… So… Well… That’s a nice physical explanation for the transfer of energy. However, when we think about interference or diffraction, we’re stuck: flywheels don’t interfere or diffract. Only waves do. So… Well… What to say?

I am not sure, but here I want to think some more by pushing the flywheel metaphor to its logical limits. Let me remind you of what triggered it all: it was the mathematical equivalence of the energy equation for an oscillator (E = m·a²·ω²) and Einstein’s formula (E = m·c²), which tells us energy and mass are equivalent but… Well… They’re not the same. So what are they then? What is energy, and what is mass, in the context of these matter-waves that we’re looking at? To be precise, the E = m·a²·ω² formula gives us the energy of two oscillators, so we need a two-spring model which, because I love motorbikes, I referred to as my V-twin engine model. But it’s not an engine, really: it’s two frictionless pistons (or springs) whose directions of motion are perpendicular to each other, so they are at a 90° angle and, therefore, their motion is, effectively, independent. In other words: they will not interfere with each other. It’s probably worth showing the illustration just one more time. And… Well… Yes. I’ll also briefly review the math one more time.

If the magnitude of the oscillation is equal to a, then the motion of these pistons (or the mass on a spring) will be described by x = a·cos(ω·t + Δ). Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and −π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t − π/2) = a·sin(ω·t). The kinetic and potential energy of one oscillator (think of one piston or one spring only) can then be calculated as:

1. K.E. = T = m·v²/2 = (1/2)·m·ω²·a²·sin²(ω·t + Δ)
2. P.E. = U = k·x²/2 = (1/2)·k·a²·cos²(ω·t + Δ)

The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω². Hence, the total energy, for one piston or one spring, is equal to:

E = T + U = (1/2)·m·ω²·a²·[sin²(ω·t + Δ) + cos²(ω·t + Δ)] = m·a²·ω²/2

Hence, adding the energy of the two oscillators, we have a perpetuum mobile storing an energy that is equal to twice this amount: E = m·a²·ω². It is a great metaphor. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. However, we still have to prove this engine is, effectively, a perpetuum mobile: we need to prove the energy that is being borrowed or returned by one piston is the energy that is being returned or borrowed by the other. That is easy to do, but I won’t bother you with that proof here: you can double-check it in the referenced post or, more formally, in an article I posted on viXra.org.
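That claim is also easy to check numerically. A minimal sketch (m, a and ω are arbitrary test values of my choosing): the sum of the kinetic and potential energies of the two pistons stays equal to m·a²·ω² at every instant.

```python
import math

m, a, omega = 1.0, 2.0, 3.0   # arbitrary test values
k = m * omega**2              # stiffness, from the dynamics

def energy(t, delta):
    """Kinetic + potential energy of one piston with phase delta."""
    T = 0.5 * m * omega**2 * a**2 * math.sin(omega * t + delta)**2
    U = 0.5 * k * a**2 * math.cos(omega * t + delta)**2
    return T + U

# Two pistons, 90 degrees out of phase: the total never changes.
for t in [0.0, 0.1, 0.7, 1.3]:
    total = energy(t, 0.0) + energy(t, -math.pi / 2)
    assert abs(total - m * a**2 * omega**2) < 1e-9
```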

It is all beautiful, and the key question is obvious: if we want to relate the E = m·a²·ω² and E = m·c² formulas, we need to explain why we could, potentially, write c as c = a·ω = a·√(k/m). We’ve done that already, to some extent at least. The tangential velocity of a pointlike particle spinning around some axis is given by v = r·ω. Now, the radius r is given by a = ħ/(m·c), and ω = E/ħ = m·c²/ħ, so v is equal to v = [ħ/(m·c)]·[m·c²/ħ] = c. Another beautiful result, but what does it mean? We need to think about the meaning of the ω = √(k/m) formula here. In the mentioned article, we boldly wrote that the speed of light is to be interpreted as the resonant frequency of spacetime, but so… Well… What do we really mean by that? Think of the following.
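The cancellation in the v = [ħ/(m·c)]·[m·c²/ħ] = c product is exact, as a two-line check confirms (rounded CODATA constants again, supplied by me):

```python
hbar = 1.054571817e-34   # J·s
m = 9.1093837015e-31     # electron mass, kg
c = 299792458.0          # m/s

a = hbar / (m * c)       # radius a = hbar/(m·c)
omega = m * c**2 / hbar  # omega = E/hbar with E = m·c²
v = a * omega            # tangential velocity

assert abs(v - c) / c < 1e-12   # v = c, up to floating-point rounding
```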

Einsteinâs E = mc2 equation implies the ratio between the energy and the mass of any particle is always the same:

This effectively reminds us of the ω² = C⁻¹/L or ω² = k/m formula for harmonic oscillators. The key difference is that the ω² = C⁻¹/L and ω² = k/m formulas introduce two (or more) degrees of freedom. In contrast, c² = E/m holds for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light (c) emerges here as the defining property of spacetime: the resonant frequency, so to speak. We have no further degrees of freedom here.

Let’s think about k. [I am not trying to avoid the ω² = 1/(L·C) formula here. It’s basically the same concept: the ω² = 1/(L·C) formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor, an inductor, and a capacitor. Writing the formula as ω² = C⁻¹/L introduces the concept of elastance, which is the equivalent of the mechanical stiffness (k) of a spring, so… Well… You get it, right? The ω² = C⁻¹/L and ω² = k/m formulas sort of describe the same thing: harmonic oscillation. It’s just… Well… Unlike the ω² = C⁻¹/L formula, the ω² = k/m formula is directly compatible with our V-twin engine metaphor, because it also involves physical distances, as I’ll show you here.] The k in the ω² = k/m formula is, effectively, the stiffness of the spring. It is defined by Hooke’s Law, which states that the force that is needed to extend or compress a spring by some distance x is linearly proportional to that distance, so we write: F = k·x.

Now that is interesting, isn’t it? We’re talking exactly the same thing here: spacetime is, presumably, isotropic, so it should oscillate the same in any direction. I am talking those sine and cosine oscillations now, but in physical space, so there is nothing imaginary here: all is real or… Well… As real as we can imagine it to be. 🙂

We can elaborate the point as follows. The F = k·x equation implies k is a force per unit distance: k = F/x. Hence, its physical dimension is newton per meter (N/m). Now, the x in this equation may be equated to the maximum extension of our spring, or the amplitude of the oscillation, so that’s the radius r = a in the metaphor we’re analyzing here. Now look at how we can re-write the c = a·ω = a·√(k/m) equation: c² = a²·k/m, so m·c² = k·a² = (k·a)·a = F·a = E.

In case you wonder about the E = F·a substitution: just remember that energy is force times distance. [Just do a dimensional analysis: you’ll see it works out.] So we have a spectacular result here, for several reasons. The first, and perhaps most obvious, reason is that we can actually derive Einstein’s E = m·c² formula from our flywheel model. Now, that is truly glorious, I think. However, even more importantly, this equation suggests we do not necessarily need to think of some actual mass oscillating up and down and sideways at the same time: the energy in the oscillation can be thought of as a force acting over some distance, regardless of whether or not it is actually acting on a particle. Now, that energy will have an equivalent mass which is, or should be, I’d say… Well… The mass of our electron or, generalizing, the mass of the particle we’re looking at.

Huh? Yes. In case you wonder what I am trying to get at, I am trying to convey the idea that the two interpretations, the field versus the flywheel model, are actually fully equivalent, or compatible, if you prefer that term. In Asia, they would say: they are the “same-same but different” 🙂 but, using the language that’s used when discussing the Copenhagen interpretation of quantum physics, we should actually say the two models are complementary.

You may shrug your shoulders but… Well… It is a very deep philosophical point, really. 🙂 As far as I am concerned, I’ve never seen a better illustration of the (in)famous Complementarity Principle in quantum physics because… Well… It goes much beyond complementarity. This is about equivalence. 🙂 So it’s just like Einstein’s equation. 🙂

Post scriptum: If you read my posts carefully, you’ll remember I struggle with those 1/2 factors here and there. Textbooks don’t care about them. For example, when deriving the size of an atom, or the Rydberg energy, even Feynman casually writes that “we need not trust our answer [to questions like this] within factors like 2, π, etcetera.” Frankly, that’s disappointing. Factors like 2, 1/2, π or 2π are pretty fundamental numbers, and so they need an explanation. So… Well… I do lose sleep over them. Let me advance some possible explanation here.

As for Feynman’s model, and the derivation of electron orbitals in general, I think it’s got to do with the fact that electrons do want to pair up when thermal motion does not come into play: think of the Cooper pairs we use to explain superconductivity (so that’s the BCS theory). The 1/2 factor in Schrödinger’s equation also has weird consequences (when you plug in the elementary wavefunction and do the derivatives, you get a weird energy concept: E = m·v², to be precise). This problem may also be solved when assuming we’re actually calculating orbitals for a pair of electrons, rather than orbitals for just one electron only. [We’d get twice the mass (and, presumably, twice the charge), so… Well… It might work, but I haven’t done it yet. It’s on my agenda, as so many other things, but I’ll get there… One day. :-)]

So… Well… Let’s get back to the lesson here. In this particular context (i.e. in the context of trying to find some reasonable physical interpretation of the wavefunction), you may or may not remember (if not, check my post on it) that I had to use the I = m·r²/2 formula for the angular momentum, as opposed to the I = m·r² formula. I = m·r²/2 (with the 1/2 factor) gives us the angular momentum of a disk with radius r, as opposed to a point mass going around some circle with radius r. I noted that “the addition of this 1/2 factor may seem arbitrary”, and it totally is, of course, but so it gave us the result we wanted: the exact (Compton scattering) radius of our electron.

Now, the arbitrary 1/2 factor may or may not be explained as follows. In the field model of our electron, the force is linearly proportional to the extension or compression. Hence, to calculate the energy involved in stretching it from x = 0 to x = a, we need to calculate it as the following integral:
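Spelling that integral out (it is just the standard Hooke’s-law energy calculation):

```latex
E = \int_0^a F\,\mathrm{d}x = \int_0^a k\,x\,\mathrm{d}x = \tfrac{1}{2}\,k\,a^2
```

So the 1/2 factor comes out of the integration of a linear force over the distance a.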

So… Well… That will give you some food for thought, I’d guess. 🙂 If it racks your brain too much, or if you’re too exhausted by this point (which is OK, because it racks my brain too!), just note we’ve also shown that the energy is proportional to the square of the amplitude here, so that’s a nice result as well… 🙂

Talking food for thought, let me make one final point here. The c² = a²·k/m relation implies a value for k which is equal to k = m·c²/a² = E/a². What does this tell us? In one of our previous posts, we wrote that the radius of our electron appeared as a natural distance unit. We wrote that because of another reason: the remark was triggered by the fact that we can write the c/ω ratio as c/ω = a·ω/ω = a. This implies the tangential and angular velocity in our flywheel model of an electron would be the same if we’d measure distance in units of a. Now, the E = k·a² = (F/x)·a² equation (just re-writing…) implies that the force is proportional to the energy, F = (x/a)·(E/a), and the proportionality coefficient is… Well… x/a. So that’s the distance measured in units of a. So… Well… Isn’t that great? The radius of our electron appearing as a natural distance unit does fit in nicely with our geometric interpretation of the wavefunction, doesn’t it? I mean… Do I need to say more?

I hope not because… Well… I can’t explain any better for the time being. I hope I sort of managed to convey the message. Just to make sure, in case you wonder what I was trying to do here, it’s the following: I told you c appears as a resonant frequency of spacetime and, in this post, I tried to explain what that really means. I’d appreciate if you could let me know if you got it. If not, I’ll try again. 🙂 When everything is said and done, one only truly understands stuff when one is able to explain it to someone else, right? 🙂 Please do think of more innovative or creative ways if you can! 🙂

OK. That’s it but… Well… I should, perhaps, talk about one other thing here. It’s what I mentioned in the beginning of this post: this analysis assumes we’re looking at our particle from some specific direction. It could be any direction but… Well… It’s some direction. We have no depth in our line of sight, so to speak. That’s really interesting, and I should do some more thinking about it. Because the direction could be any direction, our analysis is valid for any direction. Hence, if our interpretation would happen to be true, and that’s a big if, of course, then our particle has to be spherical, right? Why? Well… Because we see this circular thing from any direction, so it has to be a sphere, right?

Well… Yes. But then… Well… While that logic seems to be incontournable, as they say in French, I am somewhat reluctant to accept it at face value. Why? I am not sure. Something inside of me says I should look at the symmetries involved… I mean the transformation formulas for the wavefunction when doing rotations and stuff. So… Well… I’ll be busy with that for a while, I guess. 😦

Post scriptum 2: You may wonder whether this line of reasoning would also work for a proton. Well… Let’s try it. Because its mass is so much larger than that of an electron (about 1836 times), the a = ħ/(m·c) formula gives a much smaller radius: 1836 times smaller, to be precise, so that’s around 2.1×10⁻¹⁶ m, which is about 1/4 of the so-called charge radius of a proton, as measured by scattering experiments. So… Well… We’re not that far off, but… Well… We clearly need some more theory here. Having said that, a proton is not an elementary particle, so its mass incorporates other factors than what we’re considering here (two-dimensional oscillations).
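Re-doing the proton arithmetic (a sketch; the proton mass is a rounded CODATA value, and 0.84 fm is the commonly quoted charge radius from scattering experiments, both supplied by me):

```python
hbar = 1.054571817e-34   # J·s
m_p = 1.67262192e-27     # proton mass, kg
c = 299792458.0          # m/s

a_p = hbar / (m_p * c)   # ≈ 2.1×10⁻¹⁶ m
r_charge = 0.84e-15      # measured proton charge radius, m

assert abs(a_p - 2.103e-16) < 1e-18
assert 0.2 < a_p / r_charge < 0.3   # roughly a quarter of the charge radius
```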

# The flywheel model of an electron

One of my readers sent me the following question on the geometric (or even physical) interpretation of the wavefunction that I’ve been offering in recent posts:

Does this mean that the wave function is merely describing excitations in a matter field; or is this unsupported?

My reply was very short: “Yes. In fact, we can think of a matter-particle as a tiny flywheel that stores energy.”

However, I realize this answers the question only partially. Moreover, I now feel I’ve been quite ambiguous in my description. When looking at the geometry of the elementary wavefunction (see the animation below, which shows us a left- and a right-handed wave respectively), two obvious but somewhat conflicting interpretations readily come to mind:

(1) One is that the components of the elementary wavefunction represent an oscillation (in two dimensions) of a field. We may call it a matter field (yes, think of the scalar Higgs field here), but we could also think of it as an oscillation of the spacetime fabric itself: a tiny gravitational wave, in effect. All we need to do here is to associate the sine and cosine component with a physical dimension. The analogy here is the electromagnetic field vector, whose dimension is force per unit charge (newton/coulomb). So we may associate the sine and cosine components of the wavefunction with, say, the force per unit mass dimension (newton/kg) which, using Newton’s Law (F = m·a), reduces to the dimension of acceleration (m/s²), which is the dimension of gravitational fields. I’ll refer to this interpretation as the field interpretation of the matter wave (or wavefunction).

(2) The other interpretation is what I refer to as the flywheel interpretation of the electron. If you google this, you won’t find anything. However, you will probably stumble upon the so-called Zitterbewegung interpretation of quantum mechanics, which is a more elaborate theory based on the same basic intuition. The Zitterbewegung (a term which was coined by Erwin Schrödinger himself, and which you’ll see abbreviated as zbw) is, effectively, a local circulatory motion of the electron, which is presumed to be the basis of the electron’s spin and magnetic moment. All that I am doing is… Well… I think I do push the envelope of this interpretation quite a bit. 🙂

The first interpretation implies our rotating arrow is, effectively, some field vector. In contrast, the second interpretation implies it’s only the tip of the rotating arrow that, literally, matters: we should look at it as a pointlike charge moving around a central axis, which is the direction of propagation. Let’s look at both.

### The flywheel interpretation

The flywheel interpretation has an advantage over the field interpretation, because it also gives us a wonderfully simple physical interpretation of the interaction between electrons and photons, or, further speculating, between matter-particles (fermions) and force-carrier particles (bosons) in general. In fact, Feynman shows how this might work, but in a rather theoretical Lecture on symmetries and conservation principles, and he doesn’t elaborate much, so let me do that for him. The argument goes as follows.

A light beam, i.e. an electromagnetic wave, consists of a large number of photons. These photons are thought of as being circularly polarized: look at those animations above again. The Planck-Einstein equation tells us the energy of each photon is equal to E = ħ·ω = h·f. [I should, perhaps, quickly note that the frequency f is, obviously, the frequency of the electromagnetic wave. It, therefore, is not to be associated with a matter wave: the de Broglie wavelength and the wavelength of light are very different concepts, even if the Planck-Einstein equation looks the same for both.]

Now, if our beam consists of N photons, the total energy of our beam will be equal to W = N·E = N·ħ·ω. It is crucially important to note that this energy is to be interpreted as the energy that is carried by the beam in a certain time: we should think of the beam as being finite, somehow, in time and in space. Otherwise, our reasoning doesn’t make sense.

The photons carry angular momentum. Just look at those animations (above) once more. It doesn’t matter much whether or not we think of light as particles or as a wave: you can see there is angular momentum there. Photons are spin-1 particles, so the angular momentum will be equal to ±ħ. Hence, the total angular momentum Jz (the direction of propagation is supposed to be the z-axis here) will be equal to Jz = N·ħ. [This, of course, assumes all photons are polarized in the same way, which may or may not be the case. You should just go along with the argument right now.] Combining the W = N·ħ·ω and Jz = N·ħ equations, we get:

Jz = N·ħ = W/ω

For a photon, we do accept the field interpretation, as illustrated below. As mentioned above, the z-axis here is the direction of propagation (so that’s the line of sight when looking at the diagram). So we have an electric field vector, which we write as ε (epsilon) so as to not cause any confusion with the E we used for the energy. [You may wonder if we shouldn’t also consider the magnetic field vector, but then we know the magnetic field vector is, basically, a relativistic effect which vanishes in the reference frame of the charge itself.] The phase of the electric field vector is φ = ω·t.

Now, a charge (so that’s our electron now) will experience a force which is equal to F = q·ε. We use bold letters here because F and ε are vectors. We now need to look at our electron which, in our interpretation of the elementary wavefunction, we think of as rotating about some axis. So that’s what’s represented below. [Both illustrations are Feynman’s, not mine. As for the animations above, I borrowed them from Wikipedia.]

Now, in previous posts, we calculated the radius r based on a similar argument as the one Feynman used to get that Jz = N·ħ = W/ω equation. I’ll refer you to those posts and just mention the result here: r is the Compton scattering radius for an electron, which is equal to r = ħ/(me·c) ≈ 3.86×10⁻¹³ m.

An equally spectacular implication of our flywheel model of the electron was the following: we found that the tangential velocity v was equal to v = r·ω = [ħ/(m·c)]·(E/ħ) = c. Hence, in our flywheel model of an electron, it is effectively spinning around at the speed of light. Note that the angular frequency (ω) in the v = r·ω equation is not the angular frequency of our photon: it’s the frequency of our electron. So we use the same Planck-Einstein equation (ω = E/ħ) but the energy E is the (rest) energy of our electron, so that’s about 0.511 MeV (an order of magnitude which is 100,000 to 300,000 times that of photons in the visible spectrum). Hence, the angular frequencies of our electron and our photon are very different. Feynman casually reflects this difference by noting the phases of our electron and our photon will differ by a phase factor, which he writes as φ0.

Just to be clear here: at this point, our analysis diverges from Feynman’s. Feynman had no intention whatsoever to talk about Schrödinger’s Zitterbewegung hypothesis when he wrote what he wrote back in the 1960s. In fact, Feynman is very reluctant to venture into physical interpretations of the wavefunction in all of his Lectures on quantum mechanics, which is surprising, because he comes so tantalizingly close on many occasions, as he does here: he describes the motion of the electron as that of a harmonic oscillator which can be driven by an external electric field. Now that is a physical interpretation, and it is totally consistent with the one I’ve advanced in my recent posts. Indeed, Feynman also describes it as an oscillation in two dimensions, perpendicular to each other and to the direction of motion, as we do in both the flywheel as well as the field interpretation of the wavefunction!

This point is important enough to quote Feynman himself in this regard:

“We have often described the motion of the electron in the atom as a harmonic oscillator which can be driven into oscillation by an external electric field. We’ll suppose that the atom is isotropic, so that it can oscillate equally well in the x- or y-directions. Then in the circularly polarized light, the x displacement and the y displacement are the same, but one is 90° behind the other. The net result is that the electron moves in a circle.”

Right on! But so what happens really? As our light beam (the photons, really) is being absorbed by our electron (or our atom), it absorbs angular momentum. In other words, there is a torque about the central axis. Let me remind you of the formulas for the angular momentum and for torque respectively: L = r×p and τ = r×F. Needless to say, we have two vector cross-products here. Hence, if we use the τ = r×F formula, we need to find the tangential component of the force (Ft), whose magnitude will be equal to Ft = q·εt. Now, energy is force times distance so… Well… You may need to think about it for a while but, if you’ve understood all of the above, you should also be able to understand the following formula:

dW/dt = q·εt·v

[If you have trouble, remember v is equal to ds/dt = Δs/Δt for Δt → 0, and re-write the equation above as dW = q·εt·v·dt = q·εt·ds = Ft·ds. Got it?]

Now, you may or may not remember that the time rate of change of angular momentum must be equal to the torque that is being applied. The torque is equal to τ = Ft·r = q·εt·r, so we get:

dJz/dt = q·εt·r

The ratio of dW/dt and dJz/dt gives us the following interesting equation: dW/dJz = v/r = ω, so dJz = dW/ω.

Now, Feynman tries to relate this to the Jz = N·ħ = W/ω formula but... Well... We should remind ourselves that the angular frequency of these photons is not the angular frequency of our electron. So... Well... What can we say about this equation? Feynman suggests integrating dJz and dW over some time interval, which makes sense: as mentioned, we interpreted W as the energy that is carried by the beam in a certain time. So if we integrate dW over this time interval, we get W. Likewise, if we integrate dJz over the same time interval, we should get the total angular momentum that our electron is absorbing from the light beam. Now, because dJz = dW/ω, we do concur with Feynman's conclusion: the total angular momentum which is being absorbed by the electron is proportional to the total energy of the beam, and the constant of proportionality is equal to 1/ω.
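For readers who like to check such things numerically, here is a quick sanity check. It is my own back-of-the-envelope script, with made-up values for q, εt, r and ω: integrating dW = q·εt·v·dt and dJz = q·εt·r·dt over the same interval should give W/Jz = ω exactly, because v = r·ω at every instant.

```python
import math

# Hypothetical illustration: a charge q on a circle of radius r, driven by a
# constant tangential field eps_t, absorbs energy at rate dW/dt = q*eps_t*v
# and angular momentum at rate dJz/dt = q*eps_t*r. With v = r*omega, the
# ratio of the two integrals must come out as omega.
q, eps_t, r, omega = 1.602e-19, 1.0, 1e-10, 1e15  # illustrative values only
dt, steps = 1e-18, 1000
W = Jz = 0.0
for _ in range(steps):
    v = r * omega              # tangential speed
    W += q * eps_t * v * dt    # energy absorbed in this time step
    Jz += q * eps_t * r * dt   # angular momentum absorbed in this time step
print(W / Jz, omega)           # the ratio equals omega
```

The point of the exercise is only that the proportionality W = ω·Jz does not depend on q, εt or the length of the interval.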

It’s just... Well... The ω here is the angular frequency of the electron. It's not the angular frequency of the beam. Not in our flywheel model of the electron which, admittedly, is not the model which Feynman used in his analysis. Feynman's analysis is simpler: he assumes an electron at rest, so to speak, and the beam then drives it so it goes around in a circle with a velocity that is, effectively, given by the angular frequency of the beam itself. So... Well... Fine. Makes sense. As said, I just pushed the analysis a bit further here. Both analyses raise an interesting question: how and where is the absorbed energy being stored? What is the mechanism here?

In Feynman's analysis, the answer is quite simple: the electron did not have any motion before, but it does spin around after the beam hits it. So it has more energy now: it wasn't a tiny flywheel before, but it is now!

In contrast, in my interpretation of the matter wave, the electron was spinning around already, so where does the extra energy go now? As its energy increases, ω = E/ħ must increase, right? Right. At the same time, the velocity v = r·ω must still be equal to v = r·ω = [ħ/(m·c)]·(E/ħ) = c, right? Right. So... If ω increases, but r·ω must equal the speed of light, then r must actually decrease somewhat, right?

Right. It's a weird but inevitable conclusion, it seems. I'll let you think about it. 🙂
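A small illustration of that conclusion, with my own numbers: using ω = E/ħ and r·ω = c, the loop radius is r = c·ħ/E, so pumping energy into the electron shrinks the radius.

```python
# Sketch of the claim above (my own aside, not Feynman's): with omega = E/hbar
# and r*omega = c, the radius is r = c*hbar/E, so more energy means smaller r.
hbar = 1.054571817e-34   # J·s
c = 2.99792458e8         # m/s
E_rest = 8.187e-14       # electron rest energy, J (~0.511 MeV)
r0 = c * hbar / E_rest            # ≈ 3.86e-13 m (the reduced Compton radius)
r1 = c * hbar / (1.10 * E_rest)   # 10% more energy -> a smaller radius
print(r0, r1)
```

Note that r0 comes out as the reduced Compton radius ħ/(m·c), as it should in this model.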

To conclude this post (which, I hope, the reader who triggered it will find interesting), I would like to quote Feynman on an issue on which most textbooks remain silent: the two-state nature of photons. I will just quote him without trying to comment on or alter what he writes, because what he writes is clear enough, I think:

“Now let’s ask the following question: If light is linearly polarized in the x-direction, what is its angular momentum? Light polarized in the x-direction can be represented as the superposition of RHC and LHC polarized light. […] The interference of these two amplitudes produces the linear polarization, but it has equal probabilities to appear with plus or minus one unit of angular momentum. [Macroscopic measurements made on a beam of linearly polarized light will show that it carries zero angular momentum, because in a large number of photons there are nearly equal numbers of RHC and LHC photons contributing opposite amounts of angular momentum—the average angular momentum is zero.]

Now, we have said that any spin-one particle can have three values of Jz, namely +1, 0, −1 (the three states we saw in the Stern-Gerlach experiment). But light is screwy; it has only two states. It does not have the zero case. This strange lack is related to the fact that light cannot stand still. For a particle of spin j which is standing still, there must be the 2j+1 possible states with values of Jz going in steps of 1 from −j to +j. But it turns out that for something of spin j with zero mass only the states with the components +j and −j along the direction of motion exist. For example, light does not have three states, but only two—although a photon is still an object of spin one.”

In his typical style and frankness (for which he is revered by some, like me, but disliked by others), he admits this is very puzzling, and not obvious at all! Let me quote him once more:

“How is this consistent with our earlier proofs—based on what happens under rotations in space—that for spin-one particles three states are necessary? For a particle at rest, rotations can be made about any axis without changing the momentum state. Particles with zero rest mass (like photons and neutrinos) cannot be at rest; only rotations about the axis along the direction of motion do not change the momentum state. Arguments about rotations around one axis only are insufficient to prove that three states are required. We have tried to find at least a proof that the component of angular momentum along the direction of motion must for a zero mass particle be an integral multiple of ħ/2—and not something like ħ/3. Even using all sorts of properties of the Lorentz transformation and what not, we failed. Maybe it’s not true. We’ll have to talk about it with Prof. Wigner, who knows all about such things.”

The reference to Eugene Wigner is historically interesting. Feynman probably knew him very well, if only because they had both worked on the Manhattan Project, and it's true that Wigner was not only a great physicist but a mathematical genius as well. However, Feynman probably quotes him here for the 1963 Nobel Prize he got for... Well... Wigner's "contributions to the theory of the atomic nucleus and elementary particles, particularly through the discovery and application of fundamental symmetry principles." 🙂 I'll let you figure out how what I write about in this post might be related to such symmetry arguments. 🙂

That's it for today, folks! I hope you enjoyed this. 🙂

Post scriptum: The main disadvantage of the flywheel interpretation is that it doesn't explain interference: waves interfere; some rotating mass doesn't. Ultimately, the wave and flywheel interpretations must, somehow, be compatible. One way to think about it is that the electron can only move as it does, in a "local circulatory motion", if there is a force on it that makes it move the way it does. That force must be gravitational because... Well... There is no other candidate, is there? [We're not talking about some electron orbital here, some negative charge orbiting around a positive nucleus. We're just considering the electron itself.] So we just need to prove that our rotating arrow will also represent a force, whose components will make our electron move the way it does. That should not be difficult. The analogy of the V-twin engine should do the trick. I'll deal with that in my next post. If we're able to provide such proof (which, as mentioned, should not be difficult), it will be a wonderful illustration of the complementarity principle. 🙂

However, just thinking about it does raise some questions already. Circular motion like this can be explained in two equivalent ways. The most obvious way to think about it is to assume some central field. That's the planetary model (illustrated below). However, it doesn't suit our purposes, because it's hard – if possible at all – to relate it to the wavefunction oscillation.

The second model is our two-spring or V-twin engine model (illustrated below), but then what is the mass here? One hypothesis that comes to mind is that we're constantly accelerating and decelerating an electric charge (the electron charge), against all other charges in the Universe, so to speak. So that's a force over a distance: energy. And energy has an equivalent mass.

The question which remains open, then, is the following: what is the nature of this force? In previous posts, I suggested it might be gravitational, but so here we're back to the drawing board: we're talking about an electrical force, but applied to some mass which acquires mass because of... Well... Because of the force, because of the oscillation (the moving charge) itself. Hmm... I need to think about this.

# The speed of light as an angular velocity

Over the weekend, I worked on a revised version of my paper on a physical interpretation of the wavefunction. However, I forgot to add the final remarks on the speed of light as an angular velocity. I know… This post is for my faithful followers only. It is dense, but let me add the missing bits here:

Post scriptum (29 October): Einstein's view on aether theories probably still holds true: “We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity, space without aether is unthinkable—for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.”

The above quote is taken from the Wikipedia article on aether theories. The same article also quotes Robert Laughlin, the 1998 Nobel Laureate in Physics, who said this about the aether in 2005: “It is ironic that Einstein's most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed. […] The word 'aether' has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. […] The modern concept of the vacuum of space, confirmed every day by experiment, is a relativistic aether. But we do not call it this because it is taboo.”

I really love this: a relativistic aether. My interpretation of the wavefunction is very consistent with that.

# Wavefunctions as gravitational waves

This is the paper I always wanted to write. It is there now, and I think it is good – and that's an understatement. 🙂 It is probably best to download it as a pdf-file from the viXra.org site, because this post was a rather fast 'copy and paste' job from the Word version of the paper, so there may be issues with boldface (vector) notation, italics and, most importantly, with formulas, which I, sadly, have to 'snip' into this WordPress blog, as it doesn't have an easy copy function for mathematical formulas.

It's great stuff. If you have been following my blog – and many of you have – you will want to digest this. 🙂

Abstract: This paper explores the implications of associating the components of the wavefunction with a physical dimension: force per unit mass, which is, of course, the dimension of acceleration (m/s²) and of gravitational fields. The classical electromagnetic field equations for energy densities, the Poynting vector and spin angular momentum are then re-derived by substituting the electromagnetic N/C unit of field strength (force per unit charge) with the new N/kg = m/s² dimension.

The results are elegant and insightful. For example, the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities, which establishes a physical normalization condition. Also, Schrödinger's wave equation may then, effectively, be interpreted as a diffusion equation for energy, and the wavefunction itself can be interpreted as a propagating gravitational wave. Finally, as an added bonus, concepts such as the Compton scattering radius for a particle, spin angular momentum, and the boson-fermion dichotomy can also be explained more intuitively.

While the approach offers a physical interpretation of the wavefunction, the author argues that the core of the Copenhagen interpretation revolves around the complementarity principle, which remains unchallenged, because the interpretation of amplitude waves as traveling fields does not explain the particle nature of matter.

# Introduction

This is not another introduction to quantum mechanics. We assume the reader is already familiar with the key principles and, importantly, with the basic math. We offer an interpretation of wave mechanics. As such, we do not challenge the complementarity principle: the physical interpretation of the wavefunction that is offered here explains the wave nature of matter only. It explains diffraction and interference of amplitudes, but it does not explain why a particle will hit the detector not as a wave but as a particle. Hence, the Copenhagen interpretation of the wavefunction remains relevant: we just push its boundaries.

The basic ideas in this paper stem from a simple observation: quantum-mechanical wavefunctions and electromagnetic waves are geometrically similar. The components of both waves are orthogonal to the direction of propagation and to each other. Only the relative phase differs: the electric and magnetic field vectors (E and B) have the same phase. In contrast, the phases of the real and imaginary part of the (elementary) wavefunction (ψ = a·e^(−iθ) = a·cosθ − i·a·sinθ) differ by 90 degrees (π/2).[1] Pursuing the analogy, we explore the following question: if the oscillating electric and magnetic field vectors of an electromagnetic wave carry the energy that one associates with the wave, can we analyze the real and imaginary part of the wavefunction in a similar way?

We show the answer is positive and remarkably straightforward.  If the physical dimension of the electromagnetic field is expressed in newton per coulomb (force per unit charge), then the physical dimension of the components of the wavefunction may be associated with force per unit mass (newton per kg).[2] Of course, force over some distance is energy. The question then becomes: what is the energy concept here? Kinetic? Potential? Both?

The similarity between the energy of a (one-dimensional) linear oscillator (E = m·a²·ω²/2) and Einstein's relativistic energy equation E = m·c² inspires us to interpret the energy as a two-dimensional oscillation of mass. To assist the reader, we construct a two-piston engine metaphor.[3] We then adapt the formula for the electromagnetic energy density to calculate the energy densities for the wavefunction. The results are elegant and intuitive: the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities. Schrödinger's wave equation may then, effectively, be interpreted as a diffusion equation for energy itself.

As an added bonus, concepts such as the Compton scattering radius for a particle and spin angular momentum, as well as the boson-fermion dichotomy, can be explained in a fully intuitive way.[4]

Of course, such an interpretation is also an interpretation of the wavefunction itself, and the immediate reaction of the reader is predictable: the electric and magnetic field vectors are, somehow, to be looked at as real vectors. In contrast, the real and imaginary components of the wavefunction are not. However, this objection needs to be phrased more carefully. First, it may be noted that, in a classical analysis, the magnetic force is a pseudovector itself.[5] Second, a suitable choice of coordinates may make quantum-mechanical rotation matrices irrelevant.[6]

Therefore, the author is of the opinion that this little paper may provide some fresh perspective on the question, thereby further exploring Einstein's basic sentiment in regard to quantum mechanics, which may be summarized as follows: there must be some physical explanation for the calculated probabilities.[7]

We will, therefore, start with Einstein's relativistic energy equation (E = m·c²) and wonder what it could possibly tell us.

# I. Energy as a two-dimensional oscillation of mass

The structural similarity between the relativistic energy formula, the formula for the total energy of an oscillator, and the kinetic energy of a moving body, is striking:

1. E = m·c²
2. E = m·ω²/2
3. E = m·v²/2

In these formulas, ω, v and c all describe some velocity.[8] Of course, there is the 1/2 factor in the E = m·ω²/2 formula[9], but that is exactly the point we are going to explore here: can we think of an oscillation in two dimensions, so it stores an amount of energy that is equal to E = 2·m·ω²/2 = m·ω²?

That is easy enough. Think, for example, of a V-2 engine with the pistons at a 90-degree angle, as illustrated below. The 90° angle makes it possible to perfectly balance the counterweight and the pistons, thereby ensuring smooth travel at all times. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down and provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring. Hence, we can describe it by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs.

Figure 1: Oscillations in two dimensions

If we assume there is no friction, we have a perpetuum mobile here. The compressed air and the rotating counterweight (which, combined with the crankshaft, acts as a flywheel[10]) store the potential energy. The moving masses of the pistons store the kinetic energy of the system.[11]

At this point, it is probably good to quickly review the relevant math. If the magnitude of the oscillation is equal to a, then the motion of the piston (or the mass on a spring) will be described by x = a·cos(ω·t + Δ).[12] Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and −π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t − π/2) = a·sin(ω·t).

The kinetic and potential energy of one oscillator (think of one piston or one spring only) can then be calculated as:

1. K.E. = T = m·v²/2 = (1/2)·m·ω²·a²·sin²(ω·t + Δ)
2. P.E. = U = k·x²/2 = (1/2)·k·a²·cos²(ω·t + Δ)

The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω². Hence, the total energy is equal to:

E = T + U = (1/2)·m·ω²·a²·[sin²(ω·t + Δ) + cos²(ω·t + Δ)] = m·a²·ω²/2

To facilitate the calculations, we will briefly assume k = m·ω² and a are equal to 1. The motion of our first oscillator is then given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be equal to:

d(sin²θ)/dθ = 2·sinθ·d(sinθ)/dθ = 2·sinθ·cosθ

Let us look at the second oscillator now. Just think of the second piston going up and down in the V-2 engine. Its motion is given by the sinθ function, which is equal to cos(θ − π/2). Hence, its kinetic energy is equal to sin²(θ − π/2), and how it changes (as a function of θ) will be equal to:

2âsin(Î¸âĎ /2)âcos(Î¸âĎ /2) = = â2âcosÎ¸âsinÎ¸ = â2âsinÎ¸âcosÎ¸

We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the crankshaft will rotate with a constant angular velocity: linear motion becomes circular motion, and vice versa, and the total energy that is stored in the system is T + U = m·a²·ω².
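The bookkeeping above is easy to verify numerically. The following is a sketch of my own, with a = m = ω = 1, not part of the paper's argument: piston 1 follows cos(θ), piston 2 follows sin(θ), and the sum of their kinetic and potential energies equals m·a²·ω² = 1 at every instant.

```python
import math

# Two harmonic oscillators, 90° out of phase (a = m = ω = 1). The energies
# slosh back and forth between the pistons, but the total is always m·a²·ω².
m = a = w = 1.0
totals = []
for theta in [0.0, 0.3, 1.0, 2.5, 4.0]:
    T1 = 0.5 * m * (a * w * math.sin(theta))**2       # kinetic, piston 1
    U1 = 0.5 * (m * w**2) * (a * math.cos(theta))**2  # potential, k = m·ω²
    T2 = 0.5 * m * (a * w * math.cos(theta))**2       # kinetic, piston 2
    U2 = 0.5 * (m * w**2) * (a * math.sin(theta))**2  # potential, piston 2
    totals.append(T1 + U1 + T2 + U2)
print(totals)  # every entry equals m·a²·ω² = 1
```

The sin² + cos² identity does all the work here, which is exactly the point of the metaphor.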

We have a great metaphor here. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returned to the other, cycle after cycle. We know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? Should we think of the c in our E = m·c² formula as an angular velocity?

These are sensible questions. Let us explore them.

# II. The wavefunction as a two-dimensional oscillation

The elementary wavefunction is written as:

Ď = aÂˇeâi[EÂˇt â pâx]/Ä§aÂˇeâi[EÂˇt â pâx]/Ä§ = aÂˇcos(pâx/Ä§ Eât/Ä§) + iÂˇaÂˇsin(pâx/Ä§ Eât/Ä§)

When considering a particle at rest (p = 0), this reduces to:

Ď = aÂˇeâiâEÂˇt/Ä§ = aÂˇcos(Eât/Ä§) + iÂˇaÂˇsin(Eât/Ä§) = aÂˇcos(Eât/Ä§) iÂˇaÂˇsin(Eât/Ä§)

Let us remind ourselves of the geometry involved, which is illustrated below. Note that the argument of the wavefunction rotates clockwise with time, while the mathematical convention for measuring the phase angle (φ) is counter-clockwise.

Figure 2: Euler's formula

If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and the p∙x/ħ dot product reduces to the p·x/ħ product of magnitudes. Most illustrations (such as the one below) will freeze either x or t. Alternatively, one can google web animations varying both. The point is: we also have a two-dimensional oscillation here. These two dimensions are perpendicular to the direction of propagation of the wavefunction. For example, if the wavefunction propagates in the x-direction, then the oscillations are along the y- and z-axis, which we may refer to as the real and imaginary axis. Note how the phase difference between the cosine and the sine (the real and imaginary part of our wavefunction) appears to give some spin to the whole. I will come back to this.

Figure 3: Geometric representation of the wavefunction
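To make the rotation tangible, here is a minimal numerical illustration of my own, in natural units (a = E = ħ = 1): the real part follows the cosine, the imaginary part follows minus the sine, and the modulus never changes.

```python
import cmath
import math

# ψ = exp(-i·t) for a particle at rest (a = E = ħ = 1): a clockwise rotation
# in the complex plane with constant modulus.
for t in [0.0, 0.5, 1.5, 3.0]:
    psi = cmath.exp(-1j * t)
    print(psi.real - math.cos(t),   # ≈ 0: Re(ψ) = cos(t)
          psi.imag + math.sin(t),   # ≈ 0: Im(ψ) = −sin(t)
          abs(psi))                 # ≈ 1: constant modulus
```

The constant modulus is what will carry the weight later on: |ψ|² does not oscillate, even though both components do.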

Hence, if we say each of these oscillations carries half of the total energy of the particle, then we may refer to the real and imaginary energy of the particle respectively, and the interplay between the real and the imaginary part of the wavefunction may then describe how energy propagates through space over time.

Let us consider, once again, a particle at rest. Hence, p = 0 and the (elementary) wavefunction reduces to ψ = a·e^(−i·E·t/ħ). Hence, the angular velocity of both oscillations, at some point x, is given by ω = −E/ħ. Now, the energy of our particle includes all of the energy (kinetic, potential and rest energy) and is, therefore, equal to E = m·c².

Can we, somehow, relate this to the m·a²·ω² energy formula for our V-2 perpetuum mobile? Our wavefunction has an amplitude too. Now, if the oscillations of the real and imaginary part of the wavefunction store the energy of our particle, then their amplitude will surely matter. In fact, the energy of an oscillation is, in general, proportional to the square of the amplitude: E ∝ a². We may, therefore, think that the a² factor in the E = m·a²·ω² formula will surely be relevant as well.

However, here is a complication: an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction. We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude ai and its own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. To calculate the contribution of each wave to the total, both the ai as well as the Ei will matter.

What is Ei? Ei varies around some average E, which we can associate with some average mass m: m = E/c². The Uncertainty Principle kicks in here. The analysis becomes more complicated, but a formula such as the one below might make sense:

[snipped formula]

We can re-write this as:

[snipped formula]

What is the meaning of this equation? We may look at it as some sort of physical normalization condition when building up the Fourier sum. Of course, we should relate this to the mathematical normalization condition for the wavefunction. Our intuition tells us that the probabilities must be related to the energy densities, but how exactly? We will come back to this question in a moment. Let us first think some more about the enigma: what is mass?

Before we do so, let us quickly calculate the value of c²·ħ²: it is about 1×10⁻⁵¹ N²·m⁴. Let us also do a dimensional analysis: the physical dimensions of the E = m·a²·ω² equation make sense if we express m in kg, a in m, and ω in rad/s. We then get: [E] = kg·m²/s² = (N·s²/m)·m²/s² = N·m = J. The dimensions of the left- and right-hand side of the physical normalization condition are N³·m⁵.
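The c²·ħ² value is easy to verify with SI constants:

```python
# Quick check of the c²·ħ² value quoted above, in SI units.
c = 2.99792458e8        # m/s
hbar = 1.054571817e-34  # J·s = N·m·s
print(c**2 * hbar**2)   # ≈ 1.0e-51 N²·m⁴
```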

# III. What is mass?

We came up, playfully, with a meaningful interpretation of energy: it is a two-dimensional oscillation of mass. But what is mass? A new aether theory is, of course, not an option, but then what is it that is oscillating? To understand the physics behind equations, it is always good to analyze the physical dimensions in the equation. Let us start with Einstein's energy equation once again. If we want to look at mass, we should re-write it as m = E/c²:

[m] = [E/c²] = J/(m/s)² = N·m·s²/m² = N·s²/m = kg

This is not very helpful. It only reminds us of Newton's definition of mass: mass is that which gets accelerated by a force. At this point, we may want to think of the physical significance of the absolute nature of the speed of light. Einstein's E = m·c² equation implies that the ratio between the energy and the mass of any particle is always the same:

c² = E/m

This reminds us of the ω² = 1/(L·C) or ω² = k/m formulas for harmonic oscillators once again.[13] The key difference is that the ω² = 1/(L·C) and ω² = k/m formulas introduce two or more degrees of freedom.[14] In contrast, c² = E/m for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light c emerges here as the defining property of spacetime: the resonant frequency, so to speak. We have no further degrees of freedom here.
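The point that E/m is the same for every particle is easily illustrated. The rest-energy and mass values below are illustrative CODATA-style SI numbers of my own choosing, not taken from the paper:

```python
# E/m is the same for every particle: it is always c² (in m²/s²).
c = 2.99792458e8
particles = [("electron", 8.187105e-14, 9.109384e-31),  # rest energy (J), mass (kg)
             ("proton",   1.503277e-10, 1.672622e-27)]
ratios = {name: E / m for name, E, m in particles}
for name, ratio in ratios.items():
    print(name, ratio)  # both ≈ 8.98755e16 = c²
```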

The Planck-Einstein relation (for photons) and the de Broglie equation (for matter-particles) have an interesting feature: both imply that the energy of the oscillation is proportional to the frequency, with Planck's constant as the constant of proportionality. Now, for one-dimensional oscillations (think of a guitar string, for example) we know the energy will be proportional to the square of the frequency. It is a remarkable observation: the two-dimensional matter-wave, or the electromagnetic wave, gives us two waves for the price of one, so to speak, each carrying half of the total energy of the oscillation but, as a result, we get a proportionality between E and f instead of between E and f².

However, such reflections do not answer the fundamental question we started out with: what is mass? At this point, it is hard to go beyond the circular definition that is implied by Einstein's formula: energy is a two-dimensional oscillation of mass, mass packs energy, and c emerges as the property of spacetime that defines how exactly.

When everything is said and done, this does not go beyond stating that mass is some scalar field. Now, a scalar field is, quite simply, some real number that we associate with a position in spacetime. The Higgs field is a scalar field but, of course, the theory behind it goes much beyond stating that we should think of mass as some scalar field. The fundamental question is: why and how does energy, or matter, condense into elementary particles? That is what the Higgs mechanism is about but, as this paper is exploratory only, we cannot even start explaining the basics of it.

What we can do, however, is look at the wave equation again (Schrödinger's equation), as we can now analyze it as an energy diffusion equation.

# IV. Schrödinger's equation as an energy diffusion equation

The interpretation of Schrödinger's equation as a diffusion equation is straightforward. Feynman (Lectures, III-16-1) briefly summarizes it as follows:

âWe can think of SchrĂśdingerâs equation as describing the diffusion of the probability amplitude from one point to the next. [âŚ] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of SchrĂśdingerâs equation are complex waves.â[17]

Let us review the basic math. For a particle moving in free space (with no external force fields acting on it) there is no potential (U = 0) and, therefore, the U·ψ term disappears. Schrödinger's equation then reduces to:

âĎ(x, t)/ât = iÂˇ(1/2)Âˇ(Ä§/meff)Âˇâ2Ď(x, t)

The ubiquitous diffusion equation in physics is:

âĎ(x, t)/ât = DÂˇâ2Ď(x, t)

The structural similarity is obvious. The key difference between the two equations is that the wave equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations[18]:

1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)
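These two coupled real equations can be integrated directly, which makes the "diffusion of amplitude" picture concrete. The following sketch is my own construction, with ħ = meff = 1, periodic boundaries and a simple staggered finite-difference scheme: it propagates a Gaussian wave packet by updating the real and imaginary parts against each other, and checks that the total probability stays put while the packet spreads.

```python
import numpy as np

# Evolve the free-particle equation via its real and imaginary parts:
#   Re(∂ψ/∂t) = -(1/2)·Im(∇²ψ),  Im(∂ψ/∂t) = (1/2)·Re(∇²ψ)   (ħ = meff = 1)
N, dx, dt = 400, 0.1, 0.001
x = (np.arange(N) - N / 2) * dx
psi = np.exp(-x**2) * np.exp(1j * 5 * x)      # Gaussian packet, momentum ≈ 5
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize: total probability 1

def lap(f):
    # second-order finite-difference Laplacian with periodic boundaries
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

u, v = psi.real.copy(), psi.imag.copy()
for _ in range(400):
    u = u - 0.5 * dt * lap(v)   # real part is driven by Im(∇²ψ)
    v = v + 0.5 * dt * lap(u)   # imaginary part is driven by Re(∇²ψ)
norm = np.sum(u**2 + v**2) * dx
print(norm)                     # stays ≈ 1 while the packet spreads
```

The staggered update (u first, then v from the new u) is what keeps the scheme stable; a naive simultaneous update would blow up.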

These equations make us think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

1. âB/ât = ââĂE
2. âE/ât = c2âĂB

The above equations effectively describe a propagation mechanism in spacetime, as illustrated below.

Figure 4: Propagation mechanisms

The Laplacian operator (∇²), when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it is operating on ψ(x, t), so what is the dimension of our wavefunction ψ(x, t)? To answer that question, we should analyze the diffusion constant in Schrödinger's equation, i.e. the (1/2)·(ħ/meff) factor:

1. As a mathematical constant of proportionality, it will quantify the relationship between both derivatives (i.e. the time derivative and the Laplacian);
2. As a physical constant, it will ensure the physical dimensions on both sides of the equation are compatible.

Now, the ħ/meff factor is expressed in (N·m·s)/(N·s²/m) = m²/s. Hence, it does ensure the dimensions on both sides of the equation are, effectively, the same: ∂ψ/∂t is a time derivative and, therefore, adds a dimension of s⁻¹ while, as mentioned above, the ∇² operator adds a dimension of m⁻². However, this does not solve our basic question: what is the dimension of the real and imaginary part of our wavefunction?
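For what it is worth, we can put a number on that m²/s diffusion-constant slot for an electron (an aside of my own, with SI values):

```python
# ħ/m for an electron: (N·m·s)/kg = m²/s, the same slot a diffusion constant
# D occupies in the classical diffusion equation.
hbar = 1.054571817e-34   # N·m·s
m_e = 9.109384e-31       # kg
print(hbar / m_e)        # ≈ 1.1577e-4 m²/s
```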

At this point, mainstream physicists will say: it does not have a physical dimension, and there is no geometric interpretation of Schrödinger's equation. One may argue, effectively, that its argument, (p∙x − E·t)/ħ, is just a number and, therefore, that the real and imaginary part of ψ are also just numbers.

To this, we may object that ħ may be looked at as a mathematical scaling constant only. If we do that, then the argument of ψ will, effectively, be expressed in action units, i.e. in N·m·s. It then does make sense to also associate a physical dimension with the real and imaginary part of ψ. What could it be?

We may have a closer look at Maxwell's equations for inspiration here. The electric field vector is expressed in newton (the unit of force) per unit of charge (coulomb). Now, there is something interesting here. The physical dimension of the magnetic field is N/C divided by m/s.[19] We may write B as the following vector cross-product: B = (1/c)·ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). Hence, we may associate the (1/c)·ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, we may boldly write: B = (1/c)·ex×E = (1/c)·i·E. This allows us to also geometrically interpret Schrödinger's equation in the way we interpreted it above (see Figure 3).[20]
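The claim that multiplication by i is a 90-degree rotation is worth a one-liner:

```python
# Multiplying by i rotates a complex number a quarter turn counter-clockwise,
# the algebraic counterpart of the (1/c)·ex× operator discussed above.
z = 3 + 4j
print(1j * z)                  # (-4+3j): same point, rotated 90°
print(abs(1j * z) == abs(z))   # True: a rotation preserves magnitude
```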

Still, we have not answered the question as to what the physical dimension of the real and imaginary part of our wavefunction should be. At this point, we may be inspired by the structural similarity between Newton's and Coulomb's force laws:

F = G·M·m/r² and F = (1/(4πε₀))·q₁·q₂/r²

Hence, if the electric field vector E is expressed in force per unit charge (N/C), then we may want to think of associating the real part of our wavefunction with a force per unit mass (N/kg). We can, of course, do a substitution here, because the mass unit (1 kg) is equivalent to 1 N·s²/m. Hence, our N/kg dimension becomes:

N/kg = N/(N·s²/m) = m/s²

What is this: m/s²? Is that the dimension of the a·cosθ term in the a·e^(−iθ) = a·cosθ − i·a·sinθ wavefunction?

My answer is: why not? Think of it: m/s² is the physical dimension of acceleration: the increase or decrease in velocity (m/s) per second. It ensures the wavefunction for any particle (matter-particles or particles with zero rest mass, such as photons) and the associated wave equation (which has to be the same for all, as the spacetime we live in is one) are mutually consistent.

In this regard, we should think of how we would model a gravitational wave. The physical dimension would surely be the same: force per mass unit. It all makes sense: wavefunctions may, perhaps, be interpreted as traveling distortions of spacetime, i.e. as tiny gravitational waves.

# V. Energy densities and flows

Pursuing the geometric equivalence between the equations for an electromagnetic wave and Schrödinger's equation, we can now, perhaps, see if there is an equivalent for the energy density. For an electromagnetic wave, we know that the energy density is given by the following formula:

u = (ε₀/2)·(E·E + c²·B·B)

E and B are the electric and magnetic field vector respectively. The Poynting vector will give us the directional energy flux, i.e. the energy flow per unit area per unit time. We write:

S = ε₀c²·E×B, with −∂u/∂t = ∇·S

Needless to say, the ∇· operator is the divergence and, therefore, gives us the magnitude of a (vector) field's source or sink at a given point. To be precise, the divergence gives us the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. In this case, it gives us the volume density of the flux of S.

We can analyze the dimensions of the equation for the energy density as follows:

1. E is measured in newton per coulomb, so [E·E] = [E²] = N²/C².
2. B is measured in (N/C)/(m/s), so we get [B·B] = [B²] = (N²/C²)·(s²/m²). However, the dimension of our c² factor is m²/s², and so we are left with N²/C² here as well.
3. ε₀ is the electric constant, aka the vacuum permittivity. As a physical constant, it should ensure the dimensions on both sides of the equation work out, and they do: [ε₀] = C²/(N·m²) and, therefore, if we multiply that with N²/C², we find that u is expressed in J/m³.[21]

Replacing the newton per coulomb unit (N/C) by the newton per kg unit (N/kg) in the formulas above should give us the equivalent of the energy density for the wavefunction. We just need to substitute ε₀ for an equivalent constant. We may want to give it a try. If the energy densities can be calculated (they are also mass densities, obviously), then the probabilities should be proportional to them.

Let us first see what we get for a photon, assuming the electromagnetic wave represents its wavefunction. Substituting B for (1/c)·i·E or for −(1/c)·i·E gives us the following result:

u = (ε₀/2)·[E·E + c²·(±(1/c)·i·E)·(±(1/c)·i·E)] = (ε₀/2)·(E·E − E·E) = 0

Zero!? An unexpected result! Or not? We have no stationary charges and no currents: only an electromagnetic wave in free space. Hence, the local energy conservation principle needs to be respected at all points in space and in time. The geometry makes sense of the result: for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously, as shown below.[22] This is because their phase is the same.

Figure 5: Electromagnetic wave: E and B

Should we expect a similar result for the energy densities that we would associate with the real and imaginary part of the matter-wave? For the matter-wave, we have a phase difference between a·cosθ and a·sinθ, which gives a different picture of the propagation of the wave (see Figure 3).[23] In fact, the geometry of the situation suggests some inherent spin, which is interesting. I will come back to this. Let us first guess those densities. Making abstraction of any scaling constants, we may write:

u ∝ (a·cosθ)² + (a·sinθ)² = a²·(cos²θ + sin²θ) = a²

We get what we hoped to get: the absolute square of our amplitude is, effectively, an energy density!

|Ď|2  = |aÂˇeâiâEÂˇt/Ä§|2 = a2 = u

This is very deep. A photon has no rest mass, so it borrows and returns energy from empty space as it travels through it. In contrast, a matter-wave carries energy and, therefore, has some (rest) mass. It is therefore associated with an energy density, and this energy density gives us the probabilities. Of course, we need to fine-tune the analysis to account for the fact that we have a wave packet rather than a single wave, but that should be feasible.

As mentioned, the phase difference between the real and imaginary part of our wavefunction (a cosine and a sine function) appears to give some spin to our particle. We do not have this particularity for a photon. Of course, photons are bosons, i.e. integer-spin particles, while elementary matter-particles are fermions with spin 1/2. Hence, our geometric interpretation of the wavefunction suggests that, after all, there may be some more intuitive explanation of the fundamental dichotomy between bosons and fermions, which puzzled even Feynman:

âWhy is it that particles with half-integral spin are Fermi particles, whereas particles with integral spin are Bose particles? We apologize for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved.â (Feynman, Lectures, III-4-1)

The physical interpretation of the wavefunction, as presented here, may provide some better understanding of "the fundamental principle involved": the physical dimension of the oscillation is just very different. That is all: it is force per unit charge for photons, and force per unit mass for matter-particles. We will examine the question of spin somewhat more carefully in section VII. Let us first examine the matter-wave some more.

# VI. Group and phase velocity of the matter-wave

The geometric representation of the matter-wave (see Figure 3) suggests a traveling wave and, yes, of course: the matter-wave effectively travels through space and time. But what is traveling, exactly? It is the pulse, or the signal, only: the phase velocity of the wave is just a mathematical concept and, even in our physical interpretation of the wavefunction, the same is true for the group velocity of our wave packet. The oscillation is two-dimensional, but perpendicular to the direction of travel of the wave. Hence, nothing actually moves with our particle.

Here, we should also reiterate that we did not answer the question as to what is oscillating up and down and/or sideways: we only associated a physical dimension with the components of the wavefunction, namely newton per kg (force per unit mass). We were inspired to do so because of the physical dimension of the electric and magnetic field vectors (newton per coulomb, i.e. force per unit charge) we associate with electromagnetic waves which, for all practical purposes, we currently treat as the wavefunction for a photon. This made it possible to calculate the associated energy densities and a Poynting vector for the energy flow. In addition, we showed that Schrödinger's equation itself then becomes a diffusion equation for energy. However, let us now focus some more on the asymmetry which is introduced by the phase difference between the real and the imaginary part of the wavefunction. Look at the mathematical shape of the elementary wavefunction once again:

Ď = aÂˇeâi[EÂˇt â pâx]/Ä§aÂˇeâi[EÂˇt â pâx]/Ä§ = aÂˇcos(pâx/Ä§ â Eât/Ä§) + iÂˇaÂˇsin(pâx/Ä§ â Eât/Ä§)

The minus sign in the argument of our sine and cosine function defines the direction of travel: an F(x − v·t) wavefunction will always describe some wave that is traveling in the positive x-direction (with the wave velocity), while an F(x + v·t) wavefunction will travel in the negative x-direction. For a geometric interpretation of the wavefunction in three dimensions, we need to agree on how to define i or, what amounts to the same, a convention on how to define clockwise and counterclockwise directions: if we look at a clock from the back, then its hand will be moving counterclockwise. So we need to establish the equivalent of the right-hand rule. However, let us not worry about that now. Let us focus on the interpretation. To ease the analysis, we will assume we are looking at a particle at rest. Hence, p = 0, and the wavefunction reduces to:

Ď = aÂˇeâiâEÂˇt/Ä§ = aÂˇcos(âEât/Ä§) + iÂˇaÂˇsin(âE0ât/Ä§) = aÂˇcos(E0ât/Ä§) â iÂˇaÂˇsin(E0ât/Ä§)

E₀ is, of course, the rest energy of our particle and, now that we are here, we should probably wonder whose time we are talking about: is it our time, or is it the proper time of our particle? Well… In this situation, we are both at rest, so it does not matter: t is, effectively, the proper time, so perhaps we should write it as t₀. It does not matter. You can see what we expect to see: E₀/ħ pops up as the natural frequency of our matter-particle: (E₀/ħ)·t = ω·t. Remembering the ω = 2π·f = 2π/T and T = 1/f formulas, we can associate a period and a frequency with this wave. Noting that ħ = h/2π, we find the following:

T = 2ĎÂˇ(Ä§/E0) = h/E0 â = E0/h = m0c2/h

This is interesting, because we can look at the period as a natural unit of time for our particle. What about the wavelength? That is tricky, because we need to distinguish between group and phase velocity here. The group velocity (vg) should be zero here, because we assume our particle does not move. In contrast, the phase velocity is given by vp = λ·f = (2π/k)·(ω/2π) = ω/k. In fact, we have got something funny here: the wavenumber k = p/ħ is zero, because we assume the particle is at rest, so p = 0. So we have a division by zero here, which is rather strange. What do we get assuming the particle is not at rest? We write:

vp = Ď/k = (E/Ä§)/(p/Ä§) = E/p = E/(mÂˇvg) = (mÂˇc2)/(mÂˇvg) = c2/vg

This is interesting: it establishes a reciprocal relation between the phase and the group velocity, with c² as a simple scaling constant. Indeed, the graph below shows the shape of the function does not change with the value of c, and we may also re-write the relation above as:

vp/c = βp = 1/βg = 1/(vg/c) = c/vg

Figure 6: Reciprocal relation between phase and group velocity

We can also write the mentioned relationship as vp·vg = c², which reminds us of the relationship between the electric and magnetic constant: (1/ε₀)·(1/μ₀) = c². This is interesting in light of the fact that we can re-write this as (c·ε₀)·(c·μ₀) = 1, which shows electricity and magnetism are just two sides of the same coin, so to speak.[24]
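The reciprocal relation is easy to verify numerically; the group velocities below are arbitrary sample values, chosen only to illustrate the vp·vg = c² symmetry:

```python
import math

# v_p * v_g = c^2: phase and group velocity as mirror images around c.
c = 2.99792458e8   # speed of light, m/s

for vg in (0.1 * c, 0.5 * c, 0.99 * c, c):
    vp = c**2 / vg
    assert math.isclose(vp * vg, c**2)
    # beta_p = 1/beta_g: the scaled velocities are each other's reciprocal
    assert math.isclose((vp / c) * (vg / c), 1.0)
```

Note that vp = c²/vg exceeds c whenever vg < c, which is unproblematic: as the text stresses, the phase velocity carries no signal.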

Interesting, but how do we interpret the math? What about the implications of the zero value for the wavenumber k = p/ħ? We would probably like to think it implies the elementary wavefunction should always be associated with some momentum, because the concept of zero momentum clearly leads to weird math: something times zero cannot be equal to c²! Such an interpretation is also consistent with the Uncertainty Principle: if Δx·Δp ≥ ħ, then neither Δx nor Δp can be zero. In other words, the Uncertainty Principle tells us that the idea of a pointlike particle actually being at some specific point in time and in space does not make sense: it has to move. It tells us that our concepts of dimensionless points in time and space are mathematical notions only. Actual particles, including photons, are always a bit spread out, so to speak, and, importantly, they have to move.

For a photon, this is self-evident. It has no rest mass, no rest energy, and, therefore, it is going to move at the speed of light itself. We write: p = m·c = m·c²/c = E/c. Using the relationship above, we get:

vp = Ď/k = (E/Ä§)/(p/Ä§) = E/p = c â vg = c2/vp = c2/c = c

This is good: we started out with some reflections on the matter-wave, but here we get an interpretation of the electromagnetic wave as a wavefunction for the photon. But let us get back to our matter-wave. In regard to our interpretation of a particle having to move, we should remind ourselves, once again, of the fact that an actual particle is always localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e^(−i·[E·t − p·x]/ħ) or, for a particle at rest, the ψ = a·e^(−i·E·t/ħ) function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai and their own ωi = −Ei/ħ. Indeed, in section II, we showed that each of these wavefunctions will contribute some energy to the total energy of the wave packet and that, to calculate the contribution of each wave to the total, both ai as well as Ei matter. This may or may not resolve the apparent paradox. Let us look at the group velocity.

To calculate a meaningful group velocity, we must assume that the vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂Ei/∂pi derivative exists. So we must have some dispersion relation. How do we calculate it? We need to calculate ωi as a function of ki here, or Ei as a function of pi. How do we do that? Well… There are a few ways to go about it, but one interesting way of doing it is to re-write Schrödinger's equation as we did, i.e. by distinguishing the real and imaginary parts of the ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ wave equation and, hence, re-write it as the following pair of equations:

1. Re(âĎ/ât) = â[Ä§/(2meff)]ÂˇIm(â2Ď) â ĎÂˇcos(kx â Ďt) = k2Âˇ[Ä§/(2meff)]Âˇcos(kx â Ďt)
2. Im(âĎ/ât) = [Ä§/(2meff)]ÂˇRe(â2Ď) â ĎÂˇsin(kx â Ďt) = k2Âˇ[Ä§/(2meff)]Âˇsin(kx â Ďt)

Both equations imply the following dispersion relation:

Ď = Ä§Âˇk2/(2meff)

Of course, we need to think about the subscripts now: we have ωi and ki, but… What about meff or, dropping the subscript, m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei, obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c². It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein's mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too. Here, I should refer back to Section II: Ei varies around some average energy E and, therefore, the Uncertainty Principle kicks in.

# VII. Explaining spin

The elementary wavefunction vector, i.e. the vector sum of the real and imaginary component, rotates around the x-axis, which gives us the direction of propagation of the wave (see Figure 3). Its magnitude remains constant. In contrast, the magnitude of the electromagnetic vector, defined as the vector sum of the electric and magnetic field vectors, oscillates between zero and some maximum (see Figure 5).

We already mentioned that the rotation of the wavefunction vector appears to give some spin to the particle. Of course, a circularly polarized wave would also appear to have spin (think of the E and B vectors rotating around the direction of propagation, as opposed to oscillating up and down or sideways only). In fact, circularly polarized light does carry angular momentum, as the equivalent mass of its energy may be thought of as rotating as well. But here we are looking at a matter-wave.

The basic idea is the following: if we look at ψ = a·e^(−i·E·t/ħ) as some real vector (as a two-dimensional oscillation of mass, to be precise), then we may associate its rotation around the direction of propagation with some torque. The illustration below reminds us of the math here.

Figure 7: Torque and angular momentum vectors

A torque on some mass about a fixed axis gives it angular momentum, which we can write as the vector cross-product L = r×p or, perhaps easier for our purposes here, as the product of an angular velocity (ω) and the rotational inertia (I), aka the moment of inertia or the angular mass. We write:

L = I·ω

Note we can write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface). We can now do some calculations. Let us start with the angular velocity. In our previous posts, we showed that the period of the matter-wave is equal to T = 2π·(ħ/E₀). Hence, the angular velocity must be equal to:

Ď = 2Ď/[2ĎÂˇ(Ä§/E0)] = E0/Ä§

We also know the distance r, i.e. the magnitude of r in the L = r×p vector cross-product: it is just a, i.e. the magnitude of ψ = a·e^(−i·E·t/ħ). Now, the momentum (p) is the product of a linear velocity (v), in this case the tangential velocity, and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω. So now we only need to think about what we should use for m or, if we want to work with the angular velocity (ω), the angular mass (I). Here we need to make some assumption about the mass (or energy) distribution. Now, it may or may not make sense to assume the energy in the oscillation, and, therefore, the mass, is distributed uniformly. In that case, we may use the formula for the angular mass of a solid cylinder: I = m·r²/2. If we keep the analysis non-relativistic, then m = m₀. Of course, the energy-mass equivalence tells us that m₀ = E₀/c². Hence, this is what we get:

L = IÂˇĎ = (m0Âˇr2/2)Âˇ(E0/Ä§) = (1/2)Âˇa2Âˇ(E0/c2)Âˇ(E0/Ä§) = a2ÂˇE02/(2ÂˇÄ§Âˇc2)

Does it make sense? Maybe. Maybe not. Let us do a dimensional analysis: that won't check our logic, but it makes sure we made no mistakes when mapping mathematical and physical spaces. We have m²·J² = m²·N²·m² in the numerator and N·m·s·m²/s² in the denominator. Hence, the dimensions work out: we get N·m·s as the dimension for L, which is, effectively, the physical dimension of angular momentum. It is also the action dimension, of course, and that cannot be a coincidence. Also note that the E = mc² equation allows us to re-write it as:

L = a²·E₀²/(2·ħ·c²) = a²·(m₀·c²)²/(2·ħ·c²) = a²·m₀²·c²/(2·ħ)

Of course, in quantum mechanics, we associate spin with the magnetic moment of a charged particle, not with its mass as such. Is there a way to link the formula above to the one we have for the quantum-mechanical angular momentum, which is also measured in N·m·s units, and which can only take on one of two possible values: J = +ħ/2 and −ħ/2? It looks like a long shot, right? How do we go from (1/2)·a²·m₀²·c²/ħ to ±(1/2)·ħ? Let us do a numerical example. The rest energy of an electron is about 0.511 MeV ≈ 8.1871×10⁻¹⁴ N·m, and a… What value should we take for a?

We have an obvious trio of candidates here: the Bohr radius, the classical electron radius (aka the Thomson scattering length), and the Compton scattering radius.

Let us start with the Bohr radius, so that is about 0.529×10⁻¹⁰ m. We get L = a²·E₀²/(2·ħ·c²) = 9.9×10⁻³¹ N·m·s. Now that is about 1.88×10⁴ times ħ/2. That is a huge factor. The Bohr radius cannot be right: we are not looking at an electron in an orbital here. To show it does not make sense, we may want to double-check the analysis by doing the calculation in another way. We said each oscillation will always pack 6.626070040(81)×10⁻³⁴ joule in energy. So our electron should pack about 1.24×10²⁰ oscillations. The angular momentum (L) we get when using the Bohr radius for a and the value of 6.626×10⁻³⁴ joule for E₀ is equal to 6.49×10⁻⁷¹ N·m·s. So that is the angular momentum per oscillation. When we multiply this with the number of oscillations (1.24×10²⁰), we get about 8.01×10⁻⁵¹ N·m·s, so that is a totally different number.

The classical electron radius is about 2.818×10⁻¹⁵ m. We get an L that is equal to about 2.81×10⁻³⁹ N·m·s, so now it is a tiny fraction of ħ/2! Hence, this leads us nowhere. Let us go for our last chance to get a meaningful result! Let us use the Compton scattering length, so that is about 2.42631×10⁻¹² m.

This gives us an L of 2.08×10⁻³³ N·m·s, which is about 20 times ħ. This is not so bad, but is it good enough? Let us calculate it the other way around: what value should we take for a so as to ensure L = a²·E₀²/(2·ħ·c²) = ħ/2? Let us write it out:

a = √(ħ²·c²/E₀²) = ħ·c/E₀ = ħ/(m₀·c)

In fact, this is the formula for the so-called reduced Compton wavelength. This is perfect. We found what we wanted to find. Substituting this value for a (you can calculate it: it is about 3.8616×10⁻¹³ m), we get what we should find:

L = a²·E₀²/(2·ħ·c²) = [ħ²·c²/E₀²]·E₀²/(2·ħ·c²) = ħ/2
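The closing of the circle can be checked numerically. The sketch below uses standard CODATA-style values for ħ, c and the electron rest energy (not figures from the paper) and confirms that a equal to the reduced Compton wavelength yields L = ħ/2:

```python
import math

# With a = hbar*c/E0 (the reduced Compton wavelength, hbar/(m0*c)),
# L = a^2 * E0^2 / (2*hbar*c^2) comes out at exactly hbar/2.
hbar = 1.054571817e-34    # J*s
c = 2.99792458e8          # m/s
E0 = 8.1871057769e-14     # electron rest energy, J (about 0.511 MeV)

a = hbar * c / E0         # reduced Compton wavelength, ~3.8616e-13 m
L = a**2 * E0**2 / (2 * hbar * c**2)

assert math.isclose(a, 3.8616e-13, rel_tol=1e-4)
assert math.isclose(L, hbar / 2, rel_tol=1e-12)
```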

This is a rather spectacular result, and one that would, a priori, support the interpretation of the wavefunction that is being suggested in this paper.

# VIII. The boson-fermion dichotomy

Let us do some more thinking on the boson-fermion dichotomy. Again, we should remind ourselves that an actual particle is localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e^(−i·[E·t − p·x]/ħ) or, for a particle at rest, the ψ = a·e^(−i·E·t/ħ) function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. Now, we can have another wild but logical theory about this.

Think of the apparent right-handedness of the elementary wavefunction: surely, Nature cannot be bothered about our convention of measuring phase angles clockwise or counterclockwise. Also, the angular momentum can be positive or negative: J = +ħ/2 or −ħ/2. Hence, we would probably like to think that an actual particle (think of an electron, or whatever other particle you would think of) may consist of right-handed as well as left-handed elementary waves. To be precise, we may think they either consist of (elementary) right-handed waves or, else, of (elementary) left-handed waves. An elementary right-handed wave would be written as:

Ď(Î¸i= aiÂˇ(cosÎ¸i + iÂˇsinÎ¸i)

In contrast, an elementary left-handed wave would be written as:

Ď(Î¸i= aiÂˇ(cosÎ¸i â iÂˇsinÎ¸i)

How does that work out with the E₀·t argument of our wavefunction? Position is position, and direction is direction, but time? Time has only one direction, but Nature surely does not care how we count time: counting like 1, 2, 3, etcetera or like −1, −2, −3, etcetera is just the same. If we count like 1, 2, 3, etcetera, then we write our wavefunction like:

Ď = aÂˇcos(E0ât/Ä§) â iÂˇaÂˇsin(E0ât/Ä§)

If we count time like −1, −2, −3, etcetera, then we write it as:

Ď = aÂˇcos(âE0ât/Ä§) â iÂˇaÂˇsin(âE0ât/Ä§)= aÂˇcos(E0ât/Ä§) + iÂˇaÂˇsin(E0ât/Ä§)

Hence, it is just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! This, then, should explain why we can have either positive or negative quantum-mechanical spin (+ħ/2 or −ħ/2). It is the usual thing: we have two mathematical possibilities here, and so we must have two physical situations that correspond to them.

It is only natural. If we have left- and right-handed photons (or, generalizing, left- and right-handed bosons), then we should also have left- and right-handed fermions (electrons, protons, etcetera). Back to the dichotomy. The textbook analysis of the dichotomy between bosons and fermions may be epitomized by Richard Feynman's Lecture on it (Feynman, III-4), which is confusing and, I dare say, even inconsistent: how are photons or electrons supposed to know that they need to interfere with a positive or a negative sign? They are not supposed to know anything: knowledge is part of our interpretation of whatever it is that is going on there.

Hence, it is probably best to keep it simple, and think of the dichotomy in terms of the different physical dimensions of the oscillation: newton per kg versus newton per coulomb. And then, of course, we should also note that matter-particles have a rest mass and, therefore, actually carry charge. Photons do not. But both are two-dimensional oscillations, and the point is: the so-called vacuum, and the rest mass of our particle (which is zero for the photon and non-zero for everything else), give us the natural frequency for both oscillations, which is beautifully summed up in that remarkable equation for the group and phase velocity of the wavefunction, which applies to photons as well as matter-particles:

(vphase/c)·(vgroup/c) = 1 ⇔ vp·vg = c²

The final question then is: why is it that photons may appear to carry no spin at all? Well… We should first remind ourselves of the fact that they do have spin when circularly polarized.[25] Here we may think of the rotation of the equivalent mass of their energy. However, if they are linearly polarized, then there is no spin. Even for circularly polarized waves, the spin angular momentum of photons is a weird concept. If photons have no (rest) mass, then they cannot carry any charge. They should, therefore, not have any magnetic moment. Indeed, what I wrote above shows that an explanation of quantum-mechanical spin requires both mass as well as charge.[26]

# IX. Concluding remarks

There are, of course, other ways to look at the matter, literally. For example, we can imagine two-dimensional oscillations as circular rather than linear oscillations. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation, around any axis, will be some combination of a rotation around the two other axes. Hence, we may want to think of a two-dimensional oscillation as an oscillation of a polar and azimuthal angle.

Figure 8: Two-dimensional circular movement

The point of this paper is not to make any definite statements. That would be foolish. Its objective is just to challenge the simplistic mainstream viewpoint on the reality of the wavefunction. Stating that it is a mathematical construct only without physical significance amounts to saying it has no meaning at all. That is, clearly, a non-sustainable proposition.

The interpretation that is offered here looks at amplitude waves as traveling fields. Their physical dimension may be expressed in force per mass unit, as opposed to electromagnetic waves, whose amplitudes are expressed in force per (electric) charge unit. Also, the amplitudes of matter-waves incorporate a phase factor, but this may actually explain the rather enigmatic dichotomy between fermions and bosons and is, therefore, an added bonus.

The interpretation that is offered here has some advantages over other explanations, as it explains the how of diffraction and interference. However, while it offers a great explanation of the wave nature of matter, it does not explain its particle nature: while we think of the energy as being spread out, we will still observe electrons and photons as pointlike particles once they hit the detector. Why is it that a detector can sort of "hook" the whole blob of energy, so to speak?

The interpretation of the wavefunction that is offered here does not explain this. Hence, the complementarity principle of the Copenhagen interpretation of the wavefunction surely remains relevant.

# Appendix 1: The de Broglie relations and energy

The 1/2 factor in Schrödinger's equation is related to the concept of the effective mass (meff). It is easy to make the wrong calculations. For example, when playing with the famous de Broglie relations, aka the matter-wave equations, one may be tempted to derive the following energy concept:

1. E = h·f and p = h/λ. Therefore, f = E/h and λ = h/p.
2. v = f·λ = (E/h)·(h/p) = E/p
3. p = m·v. Therefore, E = v·p = m·v²

E = m·v²? This resembles the E = mc² equation and, therefore, one may be enthused by the discovery, especially because the m·v² also pops up when working with the Least Action Principle in classical mechanics, which states that the path that is followed by a particle will minimize the following integral:

S = ∫(KE − PE)·dt

Now, we can choose any reference point for the potential energy but, to reflect the energy conservation law, we can select a reference point that ensures the sum of the kinetic and the potential energy is zero throughout the time interval. If the force field is uniform, then the integrand will, effectively, be equal to KE − PE = m·v².[27]

However, that is classical mechanics and, therefore, not so relevant in the context of the de Broglie equations, and the apparent paradox should be solved by distinguishing between the group and the phase velocity of the matter wave.
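The resolution can be made concrete: the v in v = f·λ = E/p is the phase velocity, not the particle (group) velocity, and E = p·vp then returns m·c² rather than m·vg². A numeric sketch, with an illustrative electron moving at half the speed of light:

```python
import math

# The v in v = f*lambda = E/p is the PHASE velocity. With vp = c^2/vg,
# E = p*vp = (m*vg)*(c^2/vg) = m*c^2, and the "paradox" disappears.
c = 2.99792458e8
m0 = 9.1093837015e-31            # electron rest mass, kg
vg = 0.5 * c                     # group velocity = particle velocity (illustrative)

gamma = 1 / math.sqrt(1 - (vg / c) ** 2)
m = gamma * m0                   # relativistic mass
E, p = m * c**2, m * vg

vp = E / p                       # phase velocity from the de Broglie relations
assert math.isclose(vp, c**2 / vg)        # v_p = c^2/v_g, not v_g
assert math.isclose(p * vp, m * c**2)     # E = p*v_p = m*c^2, not m*v_g^2
```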

# Appendix 2: The concept of the effective mass

The effective mass, as used in Schrödinger's equation, is a rather enigmatic concept. To make sure we are making the right analysis here, I should start by noting you will usually see Schrödinger's equation written as:

i·ħ·∂ψ/∂t = −(ħ²/(2m))·∇²ψ + U·ψ

This formulation includes a term with the potential energy (U). In free space (no potential), this term disappears, and the equation can be re-written as:

âĎ(x, t)/ât = iÂˇ(1/2)Âˇ(Ä§/meff)Âˇâ2Ď(x, t)

We just moved the i·ħ coefficient to the other side, noting that 1/i = −i. Now, in one-dimensional space, and assuming ψ is just the elementary wavefunction (so we substitute a·e^(−i·[E·t − p·x]/ħ) for ψ), this implies the following:

âaÂˇiÂˇ(E/Ä§)Âˇeâiâ[EÂˇt â pâx]/Ä§ = âiÂˇ(Ä§/2meff)ÂˇaÂˇ(p2/Ä§2)Âˇ eâiâ[EÂˇt â pâx]/Ä§

â E = p2/(2meff) â meff = mâ(v/c)2/2 = mâÎ˛2/2

It is an ugly formula: it resembles the kinetic energy formula (K.E. = m·v²/2) but it is, in fact, something completely different. The β²/2 factor ensures the effective mass is always a fraction of the mass itself. To get rid of the ugly 1/2 factor, we may re-define meff as two times the old meff (hence, meffNEW = 2·meffOLD), as a result of which the formula will look somewhat better:

meff = mâ(v/c)2 = mâÎ˛2

We know β varies between 0 and 1 and, therefore, meff will vary between 0 and m. Feynman drops the subscript, and just writes meff as m in his textbook (see Feynman, III-19). On the other hand, the electron mass that he uses is also the electron mass that is used to calculate the size of an atom (see Feynman, III-2-4). As such, the two mass concepts are, effectively, mutually compatible. It is confusing because the same mass is often defined as the mass of a stationary electron (see, for example, the article on it in the online Wikipedia encyclopedia[28]).

In the context of the derivation of the electron orbitals, we do have the potential energy term, which is the equivalent of a source term in a diffusion equation, and that may explain why the above-mentioned meff = m·(v/c)² = m·β² formula does not apply.

# References

This paper discusses general principles in physics only. Hence, references can be limited to references to physics textbooks only. For ease of reading, any reference to additional material has been limited to a more popular undergrad textbook that can be consulted online: Feynman's Lectures on Physics (http://www.feynmanlectures.caltech.edu). References are per volume, per chapter and per section. For example, Feynman III-19-3 refers to Volume III, Chapter 19, Section 3.

# Notes

[1] Of course, an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction ψ = a·e^(−i·θ) = a·e^(−i·[E·t − p·x]/ħ) = a·cosθ − i·a·sinθ. We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude a_k and its own argument θ_k = (E_k·t − p_k·x)/ħ. This is dealt with in this paper as part of the discussion on the mathematical and physical interpretation of the normalization condition.

[2] The N/kg dimension immediately, and naturally, reduces to the dimension of acceleration (m/s²), thereby facilitating a direct interpretation in terms of Newton's force law.

[3] In physics, a two-spring metaphor is more common. Hence, the pistons in the author's perpetuum mobile may be replaced by springs.

[4] The author re-derives the equation for the Compton scattering radius in section VII of the paper.

[5] The magnetic force can be analyzed as a relativistic effect (see Feynman II-13-6). The dichotomy between the electric force as a polar vector and the magnetic force as an axial vector disappears in the relativistic four-vector representation of electromagnetism.

[6] For example, when using Schrödinger's equation in a central field (think of the electron around a proton), the use of polar coordinates is recommended, as it ensures the symmetry of the Hamiltonian under all rotations (see Feynman III-19-3).

[7] This sentiment is usually summed up in the apocryphal quote: "God does not play dice." The actual quote comes from one of Einstein's private letters to Cornelius Lanczos, another scientist who had also emigrated to the US. The full quote is as follows: "You are the only person I know who has the same attitude towards physics as I have: belief in the comprehension of reality through something basically simple and unified… It seems hard to sneak a look at God's cards. But that He plays dice and uses 'telepathic' methods… is something that I cannot believe for a single moment." (Helen Dukas and Banesh Hoffman, Albert Einstein, the Human Side: New Glimpses from His Archives, 1979)

[8] Of course, both are different velocities: ω is an angular velocity, while v is a linear velocity: ω is measured in radians per second, while v is measured in meter per second. However, the definition of a radian implies radians are measured in distance units. Hence, the physical dimensions are, effectively, the same. As for the formula for the total energy of an oscillator, we should actually write: E = m·a²·ω²/2. The additional factor (a) is the (maximum) amplitude of the oscillator.

[9] We also have a 1/2 factor in the E = m·v²/2 formula. Two remarks may be made here. First, it may be noted this is a non-relativistic formula and, more importantly, incorporates kinetic energy only. Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as K.E. = E − E₀ = m_v·c² − m₀·c² = γ·m₀·c² − m₀·c² = m₀·c²·(γ − 1). As for the exclusion of the potential energy, we may note that we may choose our reference point for the potential energy such that the kinetic and potential energy mirror each other. The energy concept that then emerges is the one that is used in the context of the Principle of Least Action: it equals E = m·v². Appendix 1 provides some notes on that.

[10] Instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft.

[11] It is interesting to note that we may look at the energy in the rotating flywheel as potential energy because it is energy that is associated with motion, albeit circular motion. In physics, one may associate a rotating object with kinetic energy using the rotational equivalents of mass and linear velocity, i.e. rotational inertia (I) and angular velocity ω. The kinetic energy of a rotating object is then given by K.E. = (1/2)·I·ω².

[12] Because of the sideways motion of the connecting rods, the sinusoidal function will describe the linear motion only approximately, but you can easily imagine the idealized limit situation.

[13] The ω² = 1/LC formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor (R), an inductor (L), and a capacitor (C). Writing the formula as ω² = C⁻¹/L introduces the concept of elastance, which is the equivalent of the mechanical stiffness (k) of a spring.

[14] The resistance in an electric circuit introduces a damping factor. When analyzing a mechanical spring, one may also want to introduce a drag coefficient. Both are usually defined as a fraction of the inertia, which is the mass for a spring and the inductance for an electric circuit. Hence, we would write the damping factor for a spring as γ·m and the resistance of the circuit as R = γ·L respectively.

[15] Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. Feynman (Lectures, I-33-3) shows us how to calculate the Q of these atomic oscillators: it is of the order of 10⁸, which means the wave train will last about 10⁻⁸ seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). For example, for sodium light, the radiation will last about 3.2×10⁻⁸ seconds (this is the so-called decay time τ). Now, because the frequency of sodium light is some 500 THz (500×10¹² oscillations per second), this makes for some 16 million oscillations. There is an interesting paradox here: the speed of light tells us that such a wave train will have a length of about 9.6 m! How is that to be reconciled with the pointlike nature of a photon? The paradox can only be explained by relativistic length contraction: in an analysis like this, one needs to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom.
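The numbers in this note are easy to verify with a quick sketch (values taken from the note itself):

```python
c = 299792458.0      # speed of light (m/s)
tau = 3.2e-8         # decay time for sodium light (s), from the note
nu = 500e12          # frequency of sodium light (Hz), roughly

train_length = c * tau       # spatial length of the wave train (m): ~9.6 m
n_oscillations = nu * tau    # number of oscillations in the train: ~16 million
```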

[16] This is a general result and is reflected in the K.E. = T = (1/2)·m·ω²·a²·sin²(ω·t + Δ) and the P.E. = U = k·x²/2 = (1/2)·m·ω²·a²·cos²(ω·t + Δ) formulas for the linear oscillator.

[17] Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger's equation as the "equation for continuity of probabilities". The analysis is centered on the local conservation of energy, which confirms the interpretation of Schrödinger's equation as an energy diffusion equation.

[18] The m_eff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than one traveling in an atom. In free space, we can drop the subscript and just write m_eff = m. Appendix 2 provides some additional notes on the concept. As for the equations, they are easily derived from noting that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i·(ħ/m_eff)·∇²ψ equation amounts to writing something like this: a + i·b = i·(c + i·d). Now, remembering that i² = −1, you can easily figure out that i·(c + i·d) = i·c + i²·d = −d + i·c.
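A one-line check of that last identity, using Python's built-in complex numbers (the values of c and d are arbitrary):

```python
# Two complex numbers are equal if and only if both their real and their
# imaginary parts are equal. The right-hand side above has the form
# i·(c + i·d); since i² = −1, this must equal −d + i·c.
c_, d_ = 2.0, 3.0            # arbitrary illustrative values
z = 1j * (c_ + 1j * d_)
a_, b_ = z.real, z.imag      # matching a + i·b: a = −d and b = c
```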

[19] The dimension of B is usually written as N/(m·A), using the SI unit for current, i.e. the ampere (A). However, 1 C = 1 A·s and, hence, 1 N/(m·A) = 1 (N/C)/(m/s).

[20] Of course, multiplication with i amounts to a counterclockwise rotation. Hence, multiplication by −i also amounts to a rotation by 90 degrees, but clockwise. Now, to uniquely identify the clockwise and counterclockwise directions, we need to establish the equivalent of the right-hand rule for a proper geometric interpretation of Schrödinger's equation in three-dimensional space: if we look at a clock from the back, then its hand will be moving counterclockwise. When writing B = (1/c)·i·E, we assume we are looking in the negative x-direction. If we are looking in the positive x-direction, we should write: B = −(1/c)·i·E. Of course, Nature does not care about our conventions. Hence, both should give the same results in calculations. We will show in a moment they do.
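The rotation conventions are easy to verify with Python's built-in complex numbers (the point z is arbitrary):

```python
import cmath
import math

z = 1 + 0j       # an arbitrary point on the positive real axis
ccw = 1j * z     # multiplication by i: rotation by 90° counterclockwise
cw = -1j * z     # multiplication by −i: rotation by 90° clockwise
```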

[21] In fact, when multiplying C²/(N·m²) with N²/C², we get N/m², but we can multiply this with 1 = m/m to get the desired result. It is significant that an energy density (joule per unit volume) can also be measured in newton per square meter (force per unit area).

[22] The illustration shows a linearly polarized wave, but the obtained result is general.

[23] The sine and cosine are essentially the same function, except for the difference in phase: sinθ = cos(θ − π/2).

[24] I must thank a physics blogger for re-writing the 1/(ε₀·μ₀) = c² equation like this. See: http://reciprocal.systems/phpBB3/viewtopic.php?t=236 (retrieved on 29 September 2017).

[25] A circularly polarized electromagnetic wave may be analyzed as consisting of two perpendicular electromagnetic plane waves of equal amplitude and a 90° difference in phase.

[26] Of course, the reader will now wonder: what about neutrons? How should we explain neutron spin? Neutrons are neutral. That is correct, but neutrons are not elementary: they consist of (charged) quarks. Hence, neutron spin can (or should) be explained by the spin of the underlying quarks.

[27] We detailed the mathematical framework and detailed calculations in the following online article: https://readingfeynman.org/2017/09/15/the-principle-of-least-action-re-visited.

[28] https://en.wikipedia.org/wiki/Electron_rest_mass (retrieved on 29 September 2017).

# Thinking again…

One of the comments on my other blog made me think I should, perhaps, write something on waves again. The animation below shows the elementary wavefunction ψ = a·e^(−i·θ) = a·e^(−i·(ω·t − k·x)) = a·e^(−(i/ħ)·(E·t − p·x)). We know this elementary wavefunction cannot represent a real-life particle. Indeed, the a·e^(−i·θ) function implies the probability of finding the particle – an electron, a photon, or whatever – would be equal to P(x, t) = |ψ(x, t)|² = |a·e^(−(i/ħ)·(E·t − p·x))|² = |a|²·|e^(−(i/ħ)·(E·t − p·x))|² = |a|²·1² = a² everywhere. Hence, the particle would be everywhere – and, therefore, nowhere really. We need to localize the wave – or build a wave packet. We can do so by introducing uncertainty: we then add a potentially infinite number of these elementary wavefunctions with slightly different values for E and p, and various amplitudes a_k. Each of these amplitudes will then reflect the contribution to the composite wave, which – in three-dimensional space – we can write as:

ψ(r, t) = e^(−i·(E/ħ)·t)·f(r)

As I explained in previous posts (see, for example, my recent post on reality and perception), the f(r) function basically provides some envelope for the two-dimensional e^(−i·θ) = e^(−i·(E/ħ)·t) = cosθ − i·sinθ oscillation, with r = (x, y, z), θ = (E/ħ)·t = ω·t and ω = E/ħ.
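To make the localization argument concrete, here is a small numerical sketch: it superposes elementary wavefunctions with Gaussian-weighted amplitudes a_k around a central momentum. The units, the non-relativistic dispersion E = p²/2, and all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

hbar = 1.0                          # natural units for this sketch
x = np.linspace(-50.0, 50.0, 2001)  # spatial grid
t = 0.0

# Gaussian-weighted amplitudes a_k around a central momentum p0
p0, sigma_p = 1.0, 0.1
p_values = np.linspace(p0 - 4 * sigma_p, p0 + 4 * sigma_p, 200)
a_values = np.exp(-((p_values - p0) ** 2) / (2 * sigma_p ** 2))

# superpose elementary wavefunctions a_k·exp(−i·(E_k·t − p_k·x)/ħ)
psi = np.zeros_like(x, dtype=complex)
for p, a in zip(p_values, a_values):
    E = p ** 2 / 2                  # assumed non-relativistic dispersion
    psi += a * np.exp(-1j * (E * t - p * x) / hbar)

prob = np.abs(psi) ** 2             # |ψ|² is now peaked around x = 0
```

A single elementary wavefunction gives a flat |ψ|²; the weighted sum gives a bump that falls off quickly away from the center, which is exactly the point of building a wave packet.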

Note that it looks like the wave propagates from left to right – in the positive direction of an axis which we may refer to as the x-axis. Also note this perception results from the fact that, naturally, we'd associate time with the rotation of that arrow at the center – i.e. with the motion in the illustration – while the spatial dimensions are just what they are: linear spatial dimensions. [This point is, perhaps, somewhat less self-evident than you may think at first.]

Now, the axis which points upwards is usually referred to as the z-axis, and the third and final axis – which points towards us – would then be the y-axis, obviously. Unfortunately, this definition would violate the so-called right-hand rule for defining a proper reference frame: the figure below shows the two possibilities – a left-handed and a right-handed reference frame – and it's the right-handed frame (i.e. the illustration on the right) which we have to use in order to correctly define all directions, including the direction of rotation of the argument of the wavefunction. Hence, if we don't change the direction of the y- and z-axes – so we keep defining the z-axis as the axis pointing upwards, and the y-axis as the axis pointing towards us – then the positive direction of the x-axis would actually be the direction from right to left, and we should say that the elementary wavefunction in the animation above seems to propagate in the negative x-direction. [Note that this left- or right-hand rule is quite astonishing: simply swapping the direction of one axis of a left-handed frame makes it right-handed, and vice versa.]

Note my language when I talk about the direction of propagation of our wave. I wrote: it looks like, or it seems to go in this or that direction. And I mean that: there is no real traveling here. At this point, you may want to review a post I wrote for my son, which explains the basic math behind waves, and in which I also explained the animation below.

Note how the peaks and troughs of this pulse seem to move leftwards, but the wave packet (or the group or the envelope of the wave – whatever you want to call it) moves to the right. The point is: the pulse itself doesn't travel left or right. Think of the horizontal axis in the illustration above as an oscillating guitar string: each point on the string just moves up and down. Likewise, if our repeated pulse would represent a physical wave in water, for example, then the water just stays where it is: it just moves up and down. Likewise, if we shake up some rope, the rope is not going anywhere: we just started some motion that is traveling down the rope. In other words, the phase velocity is just a mathematical concept. The peaks and troughs that seem to be traveling are just mathematical points that are "traveling" left or right. That's why there's no limit on the phase velocity: it can – and, according to quantum mechanics, actually will – exceed the speed of light. In contrast, the group velocity – which is the actual speed of the particle that is being represented by the wavefunction – may approach – or, in the case of a massless photon, will actually equal – the speed of light, but will never exceed it, and its direction will, obviously, have a physical significance as it is, effectively, the direction of travel of our particle – be it an electron, a photon (electromagnetic radiation), or whatever.
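The distinction between phase and group velocity can be illustrated with the simplest possible superposition: two waves with nearby wavenumbers. The quadratic dispersion ω(k) = k² used below is an illustrative assumption (a toy stand-in for the matter-wave relation ω = ħ·k²/2m); with it, the envelope moves at dω/dk = 2k, roughly twice the speed of the carrier.

```python
k1, k2 = 1.0, 1.1                 # two nearby wavenumbers (arbitrary units)
w1, w2 = k1 ** 2, k2 ** 2         # assumed dispersion: ω(k) = k²

v_phase = (w1 + w2) / (k1 + k2)   # speed of the carrier's peaks and troughs
v_group = (w2 - w1) / (k2 - k1)   # speed of the envelope (≈ dω/dk)
```

For a dispersion like this, the envelope (group) always outruns the carrier's peaks and troughs, and only the group velocity corresponds to anything physically traveling.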

Hence, you should not think the spin of a particle – integer or half-integer – is somehow related to the direction of rotation of the argument of the elementary wavefunction. It isn't: Nature doesn't give a damn about our mathematical conventions, and that's what the direction of rotation of the argument of that wavefunction is: just some mathematical convention. That's why we write a·e^(−i·(ω·t − k·x)) rather than a·e^(i·(ω·t + k·x)) or a·e^(i·(ω·t − k·x)): it's just because of the right-hand rule for coordinate frames, and also because Euler defined the counterclockwise direction as the positive direction of an angle. There's nothing more to it.

OK. That's obvious. Let me now return to my interpretation of Einstein's E = m·c² formula (see my previous posts on this). I noted that, in the reference frame of the particle itself (see my basics page), the elementary wavefunction a·e^(−(i/ħ)·(E·t − p·x)) reduces to a·e^(−(i/ħ)·(E'·t')): the origin of the reference frame then coincides with (the center of) our particle itself, and the wavefunction only varies with the time in the inertial reference frame (i.e. the proper time t'), with the rest energy of the object (E') as the time scale factor. How should we interpret this?

Well… Energy is force times distance, and force is defined as that which causes some mass to accelerate. To be precise, the newton – as the unit of force – is defined as the magnitude of a force which would cause a mass of one kg to accelerate at one meter per second per second. Per second per second. This is not a typo: 1 N corresponds to 1 kg times 1 m/s per second, i.e. 1 kg·m/s². So… Because energy is force times distance, the unit of energy may be expressed in units of kg·m/s² times m, or kg·m²/s², i.e. the unit of mass times the unit of velocity squared. To sum it all up:

1 J = 1 N·m = 1 kg·(m/s)²

This reflects the physical dimensions on both sides of the E = m·c² formula again but… Well… How should we interpret this? Look at the animation below once more, and imagine the green dot is some tiny mass moving around the origin, in an equally tiny circle. We've got two oscillations here: each packing half of the total energy of… Well… Whatever it is that our elementary wavefunction might represent in reality – which we don't know, of course.

Now, the blue and the red dot – i.e. the horizontal and vertical projections of the green dot – accelerate up and down. If we look carefully, we see these dots accelerate towards the zero point and, once they've crossed it, they decelerate, so as to allow for a reversal of direction: the blue dot goes up, and then down. Likewise, the red dot does the same. The interplay between the two oscillations, because of the 90° phase difference, is interesting: if the blue dot is at maximum speed (near or at the origin), the red dot reverses direction (its speed is, therefore, (almost) nil), and vice versa. The metaphor of our frictionless V-2 engine, our perpetuum mobile, comes to mind once more.

The question is: what's going on, really?

My answer is: I don't know. I do think that, somehow, energy should be thought of as some two-dimensional oscillation of something – something which we refer to as mass, but we didn't define mass very clearly either. It also, somehow, combines linear and rotational motion. Each of the two dimensions packs half of the energy of the particle that is being represented by our wavefunction. It is, therefore, only logical that the physical unit of both is to be expressed as a force over some distance – which is, effectively, the physical dimension of energy – or the rotational equivalent of that: torque over some angle. Indeed, the analogy between linear and angular movement is obvious: the kinetic energy of a rotating object is equal to K.E. = (1/2)·I·ω². In this formula, I is the rotational inertia – i.e. the rotational equivalent of mass – and ω is the angular velocity – i.e. the rotational equivalent of linear velocity. Noting that the (average) kinetic energy in any system must be equal to the (average) potential energy in the system, we can add both, so we get a formula which is structurally similar to the E = m·c² formula. But is it the same? Is the effective mass of some object the sum of an almost infinite number of quanta that incorporate some kind of rotational motion? And – if we use the right units – is the angular velocity of these infinitesimally small rotations effectively equal to the speed of light?

I am not sure. Not at all, really. But, so far, I can't think of any explanation of the wavefunction that would make more sense than this one. I just need to keep trying to find better ways to articulate or imagine what might be going on. 🙂 In this regard, I'd like to add a point – which may or may not be relevant. When I talked about that guitar string, or the water wave, and wrote that each point on the string – or each water drop – just moves up and down, we should think of the physicality of the situation: when the string oscillates, its length increases. So it's only because our string is flexible that it can vibrate between the fixed points at its ends. For a rope that's not flexible, the end points would need to move in and out with the oscillation. Look at the illustration below, for example: the two kids who are holding the rope must come closer to each other, so as to provide the necessary space inside of the oscillation for the other kid. 🙂 The next illustration – of how water waves actually propagate – is, perhaps, more relevant. Just think of a two-dimensional equivalent – and of the two oscillations as being transverse waves, as opposed to longitudinal. See how string theory starts making sense? 🙂

The most fundamental question remains the same: what is it, exactly, that is oscillating here? What is the field? It's always some force on some charge – but what charge, exactly? Mass? What is it? Well… I don't have the answer to that. It's the same as asking: what is electric charge, really? So the question is: what's the reality of mass, of electric charge, or whatever other charge that causes a force to act on it?

If you know, please let me know. 🙂

Post scriptum: The fact that we're talking some two-dimensional oscillation here – think of a surface now – explains the probability formula: we need to square the absolute value of the amplitude to get it. And normalize, of course. Also note that, when normalizing, we'd expect to get some factor involving π somewhere, because we're talking some circular surface – as opposed to a rectangular one. But I'll let you figure that out. 🙂

# Reality and perception

It's quite easy to get lost in all of the math when talking quantum mechanics. In this post, I'd like to freewheel a bit. I'll basically try to relate the wavefunction we've derived for the electron orbitals to the more speculative posts I wrote on how to interpret the wavefunction. So… Well… Let's go. 🙂

If there is one thing you should remember from all of the stuff I wrote in my previous posts, then it's that the wavefunction for an electron orbital – ψ(x, t), so that's a complex-valued function in two variables (position and time) – can be written as the product of two functions in one variable:

ψ(x, t) = e^(−i·(E/ħ)·t)·f(x)

In fact, we wrote f(x) as ψ(x), but I told you how confusing that is: the ψ(x) and ψ(x, t) functions are, obviously, very different. To be precise, the f(x) = ψ(x) function basically provides some envelope for the two-dimensional e^(i·θ) = e^(−i·(E/ħ)·t) = cosθ + i·sinθ oscillation – as depicted below (θ = −(E/ħ)·t = ω·t with ω = −E/ħ). When analyzing this animation – look at the movement of the green, red and blue dots respectively – one cannot miss the equivalence between this oscillation and the movement of a mass on a spring – as depicted below. The e^(−i·(E/ħ)·t) function just gives us two springs for the price of one. 🙂 Now, you may want to imagine some kind of elastic medium – Feynman's famous drum-head, perhaps 🙂 – and you may also want to think of all of this in terms of superimposed waves but… Well… I'd need to review if that's really relevant to what we're discussing here, so I'd rather not make things too complicated and stick to basics.

First note that the amplitude of the two linear oscillations above is normalized: the maximum displacement of the object from equilibrium, in the positive or negative direction, which we may denote by x = ±A, is equal to one. Hence, the energy formula is just the sum of the potential and kinetic energy: T + U = (1/2)·A²·m·ω² = (1/2)·m·ω². But so we have two springs and, therefore, the energy in this two-dimensional oscillation is equal to E = 2·(1/2)·m·ω² = m·ω².
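This little energy bookkeeping exercise is easy to verify numerically. The sketch below (with arbitrary illustrative values for m and ω) checks that each oscillator carries a constant T + U = (1/2)·m·ω², so two of them, 90° out of phase, add up to m·ω².

```python
import math

m, omega, A = 1.0, 2.0, 1.0    # normalized amplitude A = 1, as in the text

def oscillator_energy(t, phase=0.0):
    # one linear oscillator: x = A·cos(ωt + φ), v = −A·ω·sin(ωt + φ), k = m·ω²
    x = A * math.cos(omega * t + phase)
    v = -A * omega * math.sin(omega * t + phase)
    return 0.5 * m * v ** 2 + 0.5 * m * omega ** 2 * x ** 2

# two oscillations, 90° out of phase: total energy E = 2·(1/2)·m·ω² = m·ω²
total = oscillator_energy(0.3) + oscillator_energy(0.3, math.pi / 2)
```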

This formula is structurally similar to Einstein's E = m·c² formula. Hence, one may want to assume that the energy of some particle (an electron, in our case, because we're discussing electron orbitals here) is just the two-dimensional motion of its mass. To put it differently, we might also want to think that the oscillating real and imaginary components of our wavefunction each store one half of the total energy of our particle.

However, the interpretation of this rather bold statement is not so straightforward. First, you should note that the ω in the E = m·ω² formula is an angular velocity, as opposed to the c in the E = m·c² formula, which is a linear velocity. Angular velocities are expressed in radians per second, while linear velocities are expressed in meter per second. However, while the radian measures an angle, we know it does so by measuring a length. Hence, if our distance unit is 1 m, an angle of 2π rad will correspond to a length of 2π meter, i.e. the circumference of the unit circle. So… Well… The two velocities may not be so different after all.

There are other questions here. In fact, the other questions are probably more relevant. First, we should note that the ω in the E = m·ω² formula can take on any value. For a mechanical spring, ω will be a function of (1) the stiffness of the spring (which we usually denote by k, and which is typically measured in newton (N) per meter) and (2) the mass (m) on the spring. To be precise, we write: ω² = k/m – or, what amounts to the same, ω = √(k/m). Both k and m are variables and, therefore, ω can really be anything. In contrast, we know that c is a constant: c equals 299,792,458 meter per second, to be precise. So we have this rather remarkable expression: c = √(E/m), and it is valid for any particle – our electron, or the proton at the center, or our hydrogen atom as a whole. It is also valid for more complicated atoms, of course. In fact, it is valid for any system.

Hence, we need to take another look at the energy concept that is used in our ψ(x, t) = e^(−i·(E/ħ)·t)·f(x) wavefunction. You'll remember (if not, you should) that the E here is equal to E_n = −13.6 eV, −3.4 eV, −1.5 eV and so on, for n = 1, 2, 3, etc. Hence, this energy concept is rather particular. As Feynman puts it: "The energies are negative because we picked our zero point as the energy of an electron located far from the proton. When it is close to the proton, its energy is less, so somewhat below zero. The energy is lowest (most negative) for n = 1, and increases toward zero with increasing n."

Now, this is the one and only issue I have with the standard physics story. I mentioned it in one of my previous posts and, just for clarity, let me copy what I wrote at the time:

Feynman gives us a rather casual explanation [on choosing a zero point for measuring energy] in one of his very first Lectures on quantum mechanics, where he writes the following: "If we have a 'condition' which is a mixture of two different states with different energies, then the amplitude for each of the two states will vary with time according to an equation like a·e^(−iωt), with ħ·ω = E = m·c². Hence, we can write the amplitude for the two states, for example, as:

e^(−i(E₁/ħ)·t) and e^(−i(E₂/ħ)·t)

And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn't make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount – say, by the amount A – then the amplitudes in the two states would, from his point of view, be:

e^(−i(E₁+A)·t/ħ) and e^(−i(E₂+A)·t/ħ)

All of his amplitudes would be multiplied by the same factor e^(−i(A/ħ)·t), and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want. For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren't relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy M_s·c², where M_s is the mass of all the separate pieces – the nucleus and the electrons – which is, of course, different from the mass of the atom. For other problems, it may be useful to subtract from all energies the amount M_g·c², where M_g is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn't make any difference, provided we shift all the energies in a particular calculation by the same constant."

It's a rather long quotation, but it's important. The key phrase here is, obviously, the following: "For other problems, it may be useful to subtract from all energies the amount M_g·c², where M_g is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom." So that's what he's doing when solving Schrödinger's equation. However, I should make the following point here: if we shift the origin of our energy scale, it does not make any difference in regard to the probabilities we calculate, but it obviously does make a difference in terms of our wavefunction itself. To be precise, its density in time will be very different. Hence, if we'd want to give the wavefunction some physical meaning – which is what I've been trying to do all along – it does make a huge difference. When we leave the rest mass of all of the pieces in our system out, we can no longer pretend we capture their energy.

So… Well… There you go. If we'd want to try to interpret our ψ(x, t) = e^(−i·(E_n/ħ)·t)·f(x) function as a two-dimensional oscillation of the mass of our electron, the energy concept in it – so that's the E_n in it – should include all pieces. Most notably, it should also include the electron's rest energy, i.e. its energy when it is not in a bound state. This rest energy is equal to 0.511 MeV. […] Read this again: 0.511 mega-electronvolt (10⁶ eV), so that's huge as compared to the tiny energy values we mentioned so far (−13.6 eV, −3.4 eV, −1.5 eV,…).

Of course, this gives us a rather phenomenal order of magnitude for the oscillation that we're looking at. Let's quickly calculate it. We need to convert to SI units, of course: 0.511 MeV is about 8.2×10⁻¹⁴ joule (J), and so the associated frequency is equal to ν = E/h = (8.2×10⁻¹⁴ J)/(6.626×10⁻³⁴ J·s) ≈ 1.23559×10²⁰ cycles per second. Now, I know such a number doesn't say all that much: just note it's the same order of magnitude as the frequency of gamma rays and… Well… No. I won't say more. You should try to think about this for yourself. [If you do, think – for starters – about the difference between bosons and fermions: matter-particles are fermions, and photons are bosons. Their nature is very different.]
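The arithmetic is easy to reproduce (using the standard reference values for h and the electronvolt):

```python
import math

h = 6.62607015e-34        # Planck constant (J·s)
eV = 1.602176634e-19      # one electronvolt in joule

E = 0.511e6 * eV          # electron rest energy, ≈ 8.19×10⁻¹⁴ J
nu = E / h                # ≈ 1.2356×10²⁰ cycles per second
omega = 2 * math.pi * nu  # ≈ 7.763×10²⁰ rad per second
```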

The corresponding angular frequency is just the same number but multiplied by 2π (one cycle corresponds to 2π radians and, hence, ω = 2π·ν ≈ 7.76344×10²⁰ rad per second). Now, if our green dot were moving around the origin, along the circumference of our unit circle, then its horizontal and/or vertical velocity would approach the same value. Think of it. We have this e^(i·θ) = e^(−i·(E/ħ)·t) = e^(i·ω·t) = cos(ω·t) + i·sin(ω·t) function, with ω = −E/ħ. So the cos(ω·t) captures the motion along the horizontal axis, while the sin(ω·t) function captures the motion along the vertical axis. Now, the velocity along the horizontal axis as a function of time is given by the following formula:

v(t) = d[x(t)]/dt = d[cos(ω·t)]/dt = −ω·sin(ω·t)

Likewise, the velocity along the vertical axis is given by v(t) = d[sin(ω·t)]/dt = ω·cos(ω·t). These are interesting formulas: they show the velocity (v) along either of the two axes never exceeds the angular velocity (ω). To be precise, the velocity v reaches ω in absolute value when ω·t is equal to 0, π/2, π or 3π/2, depending on the axis we look at. So… Well… 7.76344×10²⁰ meter per second!? That's like 2.6 trillion times the speed of light. So that's not possible, of course!

That's where the amplitude of our wavefunction comes in – our envelope function f(x): the green dot does not move along the unit circle. The circle is much tinier and, hence, the oscillation should not exceed the speed of light. In fact, I should probably try to prove it oscillates at the speed of light, thereby respecting Einstein's universal formula:

cÂ = â(E/m)

Written like this – rather than as you know it: E = mÂˇc2Â – this formula shows the speed of light is just a property of spacetime, just like the ĎÂ = â(k/m) formula (or the ĎÂ = â(1/LC) formula for a resonant AC circuit) shows that Ď, the naturalÂ frequency of our oscillator, is a characteristic of the system.

Am I absolutely certain of what I am writing here? No. My level of understanding of physics is still that of an undergrad. But… Well… It all makes a lot of sense, doesn’t it? đ

Now, I said there were a few obvious questions, and so far I answered only one. The other obvious question is why energy would appear to us as mass in motion in two dimensions only. Why is it an oscillation in a plane? We might imagine a third spring, so to speak, moving in and out from us, right? Also, energy densities are measured per unit volume, right?

Now that’s a clever question, and I must admit I can’t answer it right now. However, I do suspect it’s got to do with the fact that the wavefunction depends on the orientation of our reference frame. If we rotate it, it changes. So it’s like we’ve lost one degree of freedom already, so only two are left. Or think of the third direction as the direction of propagation of the wave. 🙂 Also, we should re-read what we wrote about the Poynting vector for the matter wave, or what Feynman wrote about probability currents. Let me give you some appetite for that by noting that we can re-write joule per cubic meter (J/m³) as newton per square meter: J/m³ = N·m/m³ = N/m². [Remember: the unit of energy is force times distance. In fact, looking at Einstein’s formula, I’d say it’s kg·m²/s² (mass times a squared velocity), but that simplifies to the same: kg·m²/s² = [N/(m/s²)]·m²/s² = N·m.]

I should probably also remind you that there is no three-dimensional equivalent of Euler’s formula, and the way the kinetic and potential energy of those two oscillations work together is rather unique. Remember I illustrated it with the image of a V-2 engine in previous posts. There is no such thing as a V-3 engine. [Well… There actually is – but not with the third cylinder being positioned sideways.]

But… Then… Well… Perhaps we should think of some weird combination of two V-2 engines. The illustration below shows the superposition of two one-dimensional waves – I think – one traveling east-west and back, and the other one traveling north-south and back. So, yes, we may want to think of Feynman’s drum-head again – but combining two-dimensional waves – two waves that both have an imaginary as well as a real dimension.

Hmm… Not sure. If we go down this path, we’d need to add a third dimension – so we’d have a super-weird V-6 engine! As mentioned above, the wavefunction does depend on our reference frame: we’re looking at stuff from a certain direction and, therefore, we can only see what goes up and down, and what goes left or right. We can’t see what comes near and what goes away from us. Also think of the particularities involved in measuring angular momentum – or the magnetic moment of some particle. We’re measuring that along one direction only! Hence, it’s probably no use to imagine we’re looking at three waves simultaneously!

In any case… I’ll let you think about all of this. I do feel I am on to something. I am convinced that my interpretation of the wavefunction as an energy propagation mechanism, or as energy itself – as a two-dimensional oscillation of mass – makes sense. 🙂

Of course, I haven’t answered one key question here: what is mass? What is that green dot – in reality, that is? At this point, we can only waffle – probably best to just give its standard definition: mass is a measure of inertia. A resistance to acceleration or deceleration, or to changing direction. But that doesn’t say much. I hate to say that – in many ways – all that I’ve learned so far has deepened the mystery, rather than solved it. The more we understand, the less we understand? But… Well… That’s all for today, folks! Have fun working through it for yourself. 🙂

Post scriptum: I’ve simplified the wavefunction a bit. As I noted in my post on it, the complex exponential is actually equal to e^(−i·[(E/ħ)·t − m·φ]), so we’ve got a phase shift because of m, the quantum number which denotes the z-component of the angular momentum. But that’s a minor detail that shouldn’t trouble or worry you here.

# An interpretation of the wavefunction

This is my umpteenth post on the same topic. 😦 It is obvious that this search for a sensible interpretation is consuming me. Why? I am not sure. Studying physics is frustrating. As a leading physicist puts it:

“The teaching of quantum mechanics these days usually follows the same dogma: firstly, the student is told about the failure of classical physics at the beginning of the last century; secondly, the heroic confusions of the founding fathers are described and the student is given to understand that no humble undergraduate student could hope to actually understand quantum mechanics for himself; thirdly, a deus ex machina arrives in the form of a set of postulates (the Schrödinger equation, the collapse of the wavefunction, etc); fourthly, a bombardment of experimental verifications is given, so that the student cannot doubt that QM is correct; fifthly, the student learns how to solve the problems that will appear on the exam paper, hopefully with as little thought as possible.”

That’s obviously not the way we want to understand quantum mechanics. [With we, I mean, me, of course, and you, if you’re reading this blog.] Of course, that doesn’t mean I don’t believe Richard Feynman, one of the greatest physicists ever, when he tells us no one, including himself, understands physics quite the way we’d like to understand it. Such statements should not prevent us from trying harder. So let’s look for better metaphors. The animation below shows the two components of the archetypal wavefunction – a simple sine and cosine. They’re the same function actually, but their phases differ by 90 degrees (π/2).

It makes me think of a V-2 engine with the pistons at a 90-degree angle. Look at the illustration below, which I took from a rather simple article on cars and engines that has nothing to do with quantum mechanics. Think of the moving pistons as harmonic oscillators, like springs.

We will also think of the center of each cylinder as the zero point: think of that point as a point where – if we’re looking at one cylinder alone – the internal and external pressure balance each other, so the piston would not move… Well… If it weren’t for the other piston, because the second piston is not at the center when the first is. In fact, it is easy to verify and compare the following positions of both pistons, as well as the associated dynamics of the situation:

| Piston 1 | Piston 2 | Motion of Piston 1 | Motion of Piston 2 |
| --- | --- | --- | --- |
| Top | Center | Compressed air will push piston down | Piston moves down against external pressure |
| Center | Bottom | Piston moves down against external pressure | External air pressure will push piston up |
| Bottom | Center | External air pressure will push piston up | Piston moves further up and compresses the air |
| Center | Top | Piston moves further up and compresses the air | Compressed air will push piston down |

When the pistons move, their linear motion will be described by a sinusoidal function: a sine or a cosine. In fact, the 90-degree V-2 configuration ensures that the linear motion of the two pistons will be exactly the same, except for a phase difference of 90 degrees. [Of course, because of the sideways motion of the connecting rods, our sine and cosine functions describe the linear motion only approximately, but you can easily imagine the idealized limit situation. If not, check Feynman’s description of the harmonic oscillator.]

The question is: if we’d have a set-up like this, two springs – or two harmonic oscillators – attached to a shaft through a crank, would this really work as a perpetuum mobile? We obviously talk energy being transferred back and forth between the rotating shaft and the moving pistons… So… Well… Let’s model this: the total energy, potential and kinetic, in each harmonic oscillator is constant. Hence, the piston only delivers or receives kinetic energy from the rotating mass of the shaft.

Now, in physics, that’s a bit of an oxymoron: we don’t think of negative or positive kinetic (or potential) energy in the context of oscillators. We don’t think of the direction of energy. But… Well… If we’ve got two oscillators, our picture changes, and so we may have to adjust our thinking here.

Let me start by giving you an authoritative derivation of the various formulas involved here, taking the example of the physical spring as an oscillator – but the formulas are basically the same for any harmonic oscillator.

The first formula is a general description of the motion of our oscillator: x = a·cos(ω₀·t + Δ). The coefficient in front of the cosine function (a) is the maximum amplitude. Of course, you will also recognize ω₀ as the natural frequency of the oscillator, and Δ as the phase factor, which takes into account our t = 0 point. In our case, for example, we have two oscillators with a phase difference equal to π/2 and, hence, Δ would be 0 for one oscillator, and −π/2 for the other. [The formula to apply here is sinθ = cos(θ − π/2).] Also note that we can equate our θ argument to ω₀·t. Now, if a = 1 (which is the case here), then these formulas simplify to:

1. K.E. = T = m·v²/2 = (m/2)·ω₀²·sin²(θ + Δ) = (m/2)·ω₀²·sin²(ω₀·t + Δ)
2. P.E. = U = k·x²/2 = (k/2)·cos²(θ + Δ)

The coefficient k in the potential energy formula characterizes the force: F = −k·x. The minus sign reminds us our oscillator wants to return to the center point, so the force pulls back. From the dynamics involved, it is obvious that k must be equal to m·ω₀², so that gives us the famous T + U = m·ω₀²/2 formula or, including a once again, T + U = m·a²·ω₀²/2.

Now, if we normalize our functions by equating k to one (k = 1), then the motion of our first oscillator is given by the cosθ function, and its kinetic energy will be proportional to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be equal to:

d(sin²θ)/dθ = 2·sinθ·d(sinθ)/dθ = 2·sinθ·cosθ

Let’s look at the second oscillator now. Just think of the second piston going up and down in our V-twin engine. Its motion is given by the sinθ function which, as mentioned above, is equal to cos(θ − π/2). Hence, its kinetic energy is equal to sin²(θ − π/2), and how it changes – as a function of θ – will be equal to:

2·sin(θ − π/2)·cos(θ − π/2) = −2·cosθ·sinθ = −2·sinθ·cosθ

We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the rotating shaft moves at constant speed. Linear motion becomes circular motion, and vice versa, in a frictionless Universe. We have the metaphor we were looking for!

Somehow, in this beautiful interplay between linear and circular motion, energy is being borrowed from one place to another, and then returned. From what place to what place? I am not sure. We may call it the real and imaginary energy space respectively, but what does that mean? One thing is for sure, however: the interplay between the real and imaginary part of the wavefunction describes how energy propagates through space!

How exactly? Again, I am not sure. Energy is, obviously, mass in motion – as evidenced by the E = m·c² equation – and it may not have any direction (when everything is said and done, it’s a scalar quantity without direction), but the energy in a linear motion is surely different from that in a circular motion, and our metaphor suggests we need to think somewhat more along those lines. Perhaps we will, one day, be able to square this circle. 🙂

Schrödinger’s equation

Let’s analyze the interplay between the real and imaginary part of the wavefunction through an analysis of Schrödinger’s equation, which we write as:

i·ħ·∂ψ/∂t = −(ħ²/2m)·∇²ψ + V·ψ

We can do a quick dimensional analysis of both sides:

• [i·ħ·∂ψ/∂t] = N·m·s/s = N·m
• [−(ħ²/2m)·∇²ψ] = N·m³/m² = N·m
• [V·ψ] = N·m

Note the dimension of the ‘diffusion’ constant ħ²/2m: [ħ²/2m] = N²·m²·s²/kg = N²·m²·s²/(N·s²/m) = N·m³. Also note that, in order for the dimensions to come out alright, the dimension of V – the potential – must be that of energy. Hence, Feynman’s description of it as the potential energy – rather than the potential tout court – is somewhat confusing but correct: V must equal the potential energy of the electron. Hence, V is not the conventional (potential) energy of the unit charge (1 coulomb). Instead, the natural unit of charge is used here, i.e. the charge of the electron itself.

Now, Schrödinger’s equation – without the V·ψ term – can be written as the following pair of equations:

1. Re(∂ψ/∂t) = −(1/2)·(ħ/m)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (1/2)·(ħ/m)·Re(∇²ψ)

This closely resembles the propagation mechanism of an electromagnetic wave as described by Maxwell’s equations for free space (i.e. a space with no charges), but E and B are vectors, not scalars. How do we get this result? Well… ψ is a complex function, which we can write as a + i·b. Likewise, ∂ψ/∂t is a complex function, which we can write as c + i·d, and ∇²ψ can then be written as e + i·f. If we temporarily forget about the coefficients (ħ, ħ²/m and V), then Schrödinger’s equation – including the V·ψ term – amounts to writing something like this:

i·(c + i·d) = −(e + i·f) + (a + i·b) ⇔ a + i·b = i·c − d + e + i·f ⇔ a = −d + e and b = c + f

Hence, we can now write:

1. VâRe(Ď) = âÄ§âIm(âĎ/ât) + (1/2)â( Ä§2/m)âRe(â2Ď)
2. VâIm(Ď) = Ä§âRe(âĎ/ât) + (1/2)â( Ä§2/m)âIm(â2Ď)

This simplifies to the two equations above for V = 0, i.e. when there is no potential (electron in free space). Now we can bring the Re and Im operators into the brackets to get:

1. V·Re(ψ) = −ħ·∂Im(ψ)/∂t + (1/2)·(ħ²/m)·∇²Re(ψ)
2. V·Im(ψ) = ħ·∂Re(ψ)/∂t + (1/2)·(ħ²/m)·∇²Im(ψ)

This is very interesting, because we can re-write this using the quantum-mechanical energy operator H = −(ħ²/2m)·∇² + V· (note the multiplication sign after the V, which we do not have – for obvious reasons – for the −(ħ²/2m)·∇² expression):

1. H[Re(ψ)] = −ħ·∂Im(ψ)/∂t
2. H[Im(ψ)] = ħ·∂Re(ψ)/∂t

A dimensional analysis shows us both sides are, once again, expressed in N·m. It’s a beautiful expression because – if we write the real and imaginary part of ψ as r·cosθ and r·sinθ – we get:

1. H[cosθ] = −ħ·∂sinθ/∂t = E·cosθ
2. H[sinθ] = ħ·∂cosθ/∂t = E·sinθ

Indeed, θ = (p·x − E·t)/ħ and, hence, ∂θ/∂t = −E/ħ, so −ħ·∂sinθ/∂t = −ħ·cosθ·(−E/ħ) = E·cosθ and ħ·∂cosθ/∂t = −ħ·sinθ·(−E/ħ) = E·sinθ. Now we can combine the two equations in one equation again and write:

H[r·(cosθ + i·sinθ)] = r·(E·cosθ + i·E·sinθ) ⇔ H[ψ] = E·ψ

The operator H – applied to the wavefunction – gives us the (scalar) product of the energy E and the wavefunction itself. Isn’t this strange?
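The two time-derivative identities behind this result are easy to check numerically. A sketch, using the θ = (p·x − E·t)/ħ convention (which makes the signs come out right) and arbitrary values for E, p and ħ:

```python
import math

E, p, hbar = 2.0, 0.8, 1.0   # arbitrary test values, not physical constants

def theta(x, t):
    """Phase θ = (p·x − E·t)/ħ – the sign convention that makes the identities work."""
    return (p * x - E * t) / hbar

# Central-difference time derivatives at a sample point (x0, t0):
x0, t0, dt = 0.3, 0.5, 1e-6
d_sin = (math.sin(theta(x0, t0 + dt)) - math.sin(theta(x0, t0 - dt))) / (2 * dt)
d_cos = (math.cos(theta(x0, t0 + dt)) - math.cos(theta(x0, t0 - dt))) / (2 * dt)

# −ħ·∂sinθ/∂t should equal E·cosθ, and ħ·∂cosθ/∂t should equal E·sinθ:
print(-hbar * d_sin, E * math.cos(theta(x0, t0)))  # two matching numbers
print(hbar * d_cos, E * math.sin(theta(x0, t0)))   # two matching numbers
```

The same check works at any sample point, since the identities hold for all x and t.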

Hmm… I need to further verify and explain this result… I’ll probably do so in yet another post on the same topic… 🙂

Post scriptum: The symmetry of our V-2 engine – or perpetuum mobile – is interesting: its cross-section has only one axis of symmetry. Hence, we may associate some angle with it, so as to define its orientation in the two-dimensional cross-sectional plane. Of course, the cross-sectional plane itself is at right angles to the crankshaft axis, which we may also associate with some angle in three-dimensional space. Hence, its geometry defines two orthogonal directions which, in turn, define a spherical coordinate system, as shown below.

We may, therefore, say that three-dimensional space is actually being implied by the geometry of our V-2 engine. Now that is interesting, isn’t it? 🙂

# Re-visiting the matter wave (II)

My previous post was, once again, littered with formulas – even if I had not intended it to be that way: I want to convey some kind of understanding of what an electron – or any particle at the atomic scale – actually is – with the minimum number of formulas necessary.

We know particles display wave behavior: when an electron beam encounters an obstacle or a slit that is somewhat comparable in size to its wavelength, we’ll observe diffraction, or interference. [I have to insert a quick note on terminology here: the terms diffraction and interference are often used interchangeably, but there is a tendency to use interference when we have more than one wave source and diffraction when there is only one wave source. However, I’ll immediately add that the distinction is somewhat artificial. Do we have one or two wave sources in a double-slit experiment? There is one beam but the two slits break it up into two and, hence, we would call it interference. If it’s only one slit, there is also an interference pattern, but the phenomenon will be referred to as diffraction.]

We also know that the wavelength we are talking about here is not the wavelength of some electromagnetic wave, like light. It’s the wavelength of a de Broglie wave, i.e. a matter wave: such a wave is represented by an (oscillating) complex number – so we need to keep track of a real and an imaginary part – representing a so-called probability amplitude Ψ(x, t) whose modulus squared (|Ψ(x, t)|²) is the probability of actually detecting the electron at point x at time t. [The purists will say that complex numbers can’t oscillate – but I am sure you get the idea.]

You should read the phrase above twice: we cannot know where the electron actually is. We can only calculate probabilities (and, of course, compare them with the probabilities we get from experiments). Hence, when the wave function tells us the probability is greatest at point x at time t, then we may be lucky when we actually probe point x at time t and find it there, but it may also not be there. In fact, the probability of finding it exactly at some point x at some definite time t is zero. That’s just a characteristic of such probability density functions: we need to probe some region Δx in some time interval Δt.

If you think that is not very satisfactory, there’s actually a very common-sense explanation that has nothing to do with quantum mechanics: our scientific instruments do not allow us to go beyond a certain scale anyway. Indeed, the resolution of the best electron microscopes, for example, is some 50 picometer (1 pm = 1×10⁻¹² m): that’s small (and resolutions get higher by the year), but it implies that we are not looking at points – as defined in math, that is: something with zero dimension – but at pixels of size Δx = 50×10⁻¹² m.

The same goes for time. Time is measured by atomic clocks nowadays but even these clocks do ‘tick’, and these ‘ticks’ are discrete. Atomic clocks take advantage of the property of atoms to resonate at extremely consistent frequencies. I’ll say something more about resonance soon – because it’s very relevant for what I am writing about in this post – but, for the moment, just note that, for example, Caesium-133 (which was used to build the first atomic clock) oscillates at 9,192,631,770 cycles per second. In fact, the International Bureau of Weights and Measures re-defined the (time) second in 1967 to correspond to “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the Caesium-133 atom at rest at a temperature of 0 K.”

Don’t worry about it: the point to note is that when it comes to measuring time, we also have an uncertainty. Now, when using this Caesium-133 atomic clock, this uncertainty would be in the range of ±1.1×10⁻¹⁰ seconds (so that’s about a tenth of a nanosecond: 1 ns = 1×10⁻⁹ s), because that’s the duration of one ‘tick’ of this clock. However, there are other (much more fine-grained) ways of measuring time: some of the unstable baryons have lifetimes in the range of a few picoseconds only (1 ps = 1×10⁻¹² s) and the really unstable ones – known as baryon resonances – have lifetimes in the 1×10⁻²² to 1×10⁻²⁴ s range. This we can only measure because they leave some trace after these particle collisions in particle accelerators and, because we have some idea about their speed, we can calculate their lifetime from the (limited) distance they travel before disintegrating. The thing to remember is that, for time also, we have to make do with time pixels instead of time points, so there is a Δt as well. [In case you wonder what baryons are: they are particles consisting of three quarks, and the proton and the neutron are the most prominent (and most stable) representatives of this family of particles.]
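The distance-to-lifetime arithmetic is straightforward. A sketch with purely illustrative numbers (a picosecond-lifetime particle at a made-up Lorentz factor – not data from any actual experiment):

```python
import math

c = 299792458.0  # speed of light (m/s)

def proper_lifetime(track_length_m, gamma):
    """Infer a particle's proper lifetime from its decay length.

    In the lab, the particle covers track_length = γ·v·τ before decaying
    (the γ accounts for time dilation), so τ = track_length / (γ·v)."""
    v = c * math.sqrt(1 - 1 / gamma**2)  # speed from the Lorentz factor
    return track_length_m / (gamma * v)

# Illustrative: a 4.5 mm track at γ = 10 corresponds to a lifetime of ~1.5 ps.
tau = proper_lifetime(4.5e-3, 10.0)
print(f"τ ≈ {tau:.1e} s")  # ≈ 1.5e-12 s
```

For the really short-lived resonances, the decay length shrinks to sub-femtometer scales, which is why their lifetimes are inferred rather than read off a visible track.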

So what’s the size of an electron? Well… It depends. We need to distinguish two very different things: (1) the size of the area where we are likely to find the electron, and (2) the size of the electron itself. Let’s start with the latter, because that’s the easiest question to answer: there is a so-called classical electron radius re, which is also known as the Thomson scattering length, which has been calculated as:

$r_\mathrm{e} = \frac{1}{4\pi\varepsilon_0}\frac{e^2}{m_{\mathrm{e}} c^2} = 2.817 940 3267(27) \times 10^{-15} \mathrm{m}$

As for the constants in this formula, you know these by now: the speed of light c, the electron charge e, its mass me, and the permittivity of free space ε0. For whatever it’s worth (because you should note that, in quantum mechanics, electrons do not have a size: they are treated as point-like particles, so they have a point charge and zero dimension), that’s small. It’s in the femtometer range (1 fm = 1×10⁻¹⁵ m). You may or may not remember that the size of a proton is in the femtometer range as well – 1.7 fm to be precise – and we had a femtometer size estimate for quarks as well: 0.7 fm. So we have the rather remarkable result that the much heavier proton (its rest mass is 938 MeV/c², as opposed to only 0.511 MeV/c² for the electron, so the proton is about 1,835 times heavier) is 1.65 times smaller than the electron. That’s something to be explored later: for the moment, we’ll just assume the electron wiggles around a bit more – exactly because it’s lighter. Here you just have to note that this ‘classical’ electron radius does measure something: it’s something ‘hard’ and ‘real’ because it scatters, absorbs or deflects photons (and/or other particles). In one of my previous posts, I explained how particle accelerators probe things at the femtometer scale, so I’ll refer you to that post (End of the Road to Reality?) and move on to the next question.
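Plugging standard values for the constants into the formula reproduces that number – a quick check:

```python
import math

# Standard SI values for the constants in the formula above
e = 1.602176634e-19        # elementary charge (C)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
m_e = 9.1093837015e-31     # electron mass (kg)
c = 299792458.0            # speed of light (m/s)

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"r_e ≈ {r_e:.4e} m")  # ≈ 2.8179e-15 m – the classical electron radius
```

So the classical electron radius really is a combination of nothing but fundamental constants.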

The question concerning the area where we are likely to detect the electron is more interesting in light of the topic of this post (the nature of these matter waves). It is given by that wave function and, from my previous post, you’ll remember that we’re talking the nanometer scale here (1 nm = 1×10⁻⁹ m), so that’s a million times larger than the femtometer scale. Indeed, we’ve calculated a de Broglie wavelength of 0.33 nanometer for relatively slow-moving electrons (electrons in orbit), and the slits used in single- or double-slit experiments with electrons are also nanotechnology. In fact, now that we are here, it’s probably good to look at those experiments in detail.

The illustration below relates the actual experimental set-up of a double-slit experiment performed in 2012 to Feynman’s 1965 thought experiment. Indeed, in 1965, the nanotechnology you need for this kind of experiment was not yet available, although the phenomenon of electron diffraction had been confirmed experimentally already in 1925 in the famous Davisson-Germer experiment. [It’s famous not only because electron diffraction was a weird thing to contemplate at the time but also because it confirmed the de Broglie hypothesis only two years after Louis de Broglie had advanced it!] But so here is the experiment which Feynman thought would never be possible because of technology constraints:

The insert in the upper-left corner shows the two slits: they are each 50 nanometer wide (50×10⁻⁹ m) and 4 micrometer tall (4×10⁻⁶ m). [The thing in the middle of the slits is just a little support. Please do take a few seconds to contemplate the technology behind this feat: 50 nm is 50 millionths of a millimeter. Try to imagine dividing one millimeter in ten, and then one of these tenths in ten again, and again, and once again, again, and again. You just can’t imagine that, because our mind is used to addition/subtraction and – to some extent – to multiplication/division: our mind can’t really deal with exponentiation – because it’s not an everyday phenomenon.] The second inset (in the upper-right corner) shows the mask that can be moved to close one or both slits partially or completely.

Now, 50 nanometer is 150 times larger than the 0.33 nanometer range we got for ‘our’ electron, but it’s small enough to show diffraction and/or interference. [In fact, in this experiment (done by Bach, Pope, Liou and Batelaan from the University of Nebraska-Lincoln less than two years ago indeed), the beam consisted of electrons with an (average) energy of 600 eV and a de Broglie wavelength of 50 picometer. So that’s like the electrons used in electron microscopes. 50 pm is 6.6 times smaller than the 0.33 nm wavelength we calculated for our low-energy (70 eV) electron – but then the energy and the fact these electrons are guided in electromagnetic fields explain the difference.] Let’s go to the results.
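That 50 pm figure follows directly from λ = h/p. A quick non-relativistic check (at 600 eV, relativistic corrections are negligible):

```python
import math

h = 6.62607015e-34       # Planck's constant (J·s)
m_e = 9.1093837015e-31   # electron mass (kg)
eV = 1.602176634e-19     # joule per electronvolt

def de_broglie(E_eV):
    """Non-relativistic de Broglie wavelength λ = h/p, with p = √(2·m·E)."""
    p = math.sqrt(2 * m_e * E_eV * eV)
    return h / p

print(f"600 eV -> λ ≈ {de_broglie(600):.1e} m")  # ≈ 5.0e-11 m = 50 pm
```

So the experimenters’ 50 pm is exactly what the de Broglie relation predicts for a 600 eV beam.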

The illustration below shows the predicted pattern next to the observed pattern for the two scenarios:

1. We first close slit 2, let a lot of electrons go through it, and so we get a pattern described by the probability density function P1 = |Φ1|². Here we see no interference but a typical diffraction pattern: the intensity follows a more or less normal (i.e. Gaussian) distribution. We then close slit 1 (and open slit 2 again), again let a lot of electrons through, and get a pattern described by the probability density function P2 = |Φ2|². So that’s how we get P1 and P2.
2. We then open both slits, let a whole lot of electrons through, and get the pattern described by the probability density function P12 = |Φ1 + Φ2|², which we get not from adding the probabilities P1 and P2 (hence, P12 ≠ P1 + P2) – as one would expect if electrons would behave like particles – but from adding the probability amplitudes. We have interference, rather than diffraction.
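The difference between adding probabilities and adding amplitudes is easy to demonstrate numerically. The sketch below uses idealized point-source amplitudes for each slit – the slit separation and screen distance are made-up illustrative numbers, not the actual geometry of the experiment:

```python
import math, cmath

wavelength = 50e-12   # 50 pm, the de Broglie wavelength in the experiment
d = 500e-9            # slit separation – illustrative, not the paper's geometry
L = 1.0               # slit-to-screen distance (m) – illustrative

def amplitudes(y):
    """Idealized unit-magnitude amplitudes Φ1, Φ2 at screen position y."""
    k = 2 * math.pi / wavelength
    r1 = math.hypot(y - d / 2, L)   # path length from slit 1
    r2 = math.hypot(y + d / 2, L)   # path length from slit 2
    return cmath.exp(1j * k * r1), cmath.exp(1j * k * r2)

phi1, phi2 = amplitudes(0.0)         # dead-center on the screen
p_sum = abs(phi1)**2 + abs(phi2)**2  # P1 + P2: what 'particles' would give
p12 = abs(phi1 + phi2)**2            # |Φ1 + Φ2|²: what we actually observe
print(p_sum, p12)  # ≈ 2 vs ≈ 4: constructive interference at the center
```

Scanning y across the screen makes P12 swing between 0 and 4 while P1 + P2 stays flat at 2 – that oscillation is the fringe pattern.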

But so what exactly is interfering? Well… The electrons. But that can’t be, can it?

The electrons are obviously particles, as evidenced from the impact they make – one by one – as they hit the screen as shown below. [If you want to know what screen, let me quote the researchers: “The resulting patterns were magnified by an electrostatic quadrupole lens and imaged on a two-dimensional microchannel plate and phosphorus screen, then recorded with a charge-coupled device camera. […] To study the build-up of the diffraction pattern, each electron was localized using a ‘blob’ detection scheme: each detection was replaced by a blob, whose size represents the error in the localization of the detection scheme. The blobs were compiled together to form the electron diffraction patterns.” So there you go.]

Look carefully at how this interference pattern becomes ‘reality’ as the electrons hit the screen one by one. And then say it: WOW!

Indeed, as predicted by Feynman (and any other physics professor at the time), even if the electrons go through the slits one by one, they will interfere – with themselves so to speak. [In case you wonder if these electrons really went through one by one, let me quote the researchers once again: “The electron source’s intensity was reduced so that the electron detection rate in the pattern was about 1 Hz. At this rate and kinetic energy, the average distance between consecutive electrons was 2.3 × 10⁶ meters. This ensures that only one electron is present in the 1 meter long system at any one time, thus eliminating electron-electron interactions.” You don’t need to be a scientist or engineer to understand that, do you?]

While this is very spooky, I have not seen any better way to describe the reality of the de Broglie wave: the particle is not some point-like thing but a matter wave, as evidenced from the fact that it does interfere with itself when forced to move through two slits – or through one slit, as evidenced by the diffraction patterns built up in this experiment when closing one of the two slits: the electrons went through one by one as well!

But so how does it relate to the characteristics of that wave packet which I described in my previous post? Let me sum up the salient conclusions from that discussion:

1. The wavelength λ of a wave packet is calculated directly from the momentum by using de Broglie’s second relation: λ = h/p. In this case, the wavelength of the electrons averaged 50 picometer. That’s relatively small as compared to the width of the slit (50 nm) – a thousand times smaller actually! – but, as evidenced by the experiment, it’s small enough to show the ‘reality’ of the de Broglie wave.
2. From a math point of view (but, of course, Nature does not care about our math), we can decompose the wave packet in a finite or infinite number of component waves. Such a decomposition is referred to, in the first case (finite number of component waves, or discrete calculus), as a Fourier analysis, or, in the second case, as a Fourier transform. A Fourier transform maps our (continuous) wave function, Ψ(x), to a (continuous) wave function in the momentum space, which we noted as φ(p). [In fact, we noted it as Φ(p) but I don’t want to create confusion with the Φ symbol used in the experiment, which is actually the wave function in space, so Ψ(x) is Φ(x) in the experiment – if you know what I mean.] The point to note is that uncertainty about momentum is related to uncertainty about position. In this case, we’ll have pretty standard electrons (so not much variation in momentum), and so the location of the wave packet in space should be fairly precise as well.
3. The group velocity of the wave packet (vg) – i.e. the envelope in which our Ψ wave oscillates – equals the speed of our electron (v), but the phase velocity (i.e. the speed of our Ψ wave itself) is superluminal: we showed it’s equal to vp = E/p = c²/v = c/β, with β = v/c, i.e. the ratio of the speed of our electron and the speed of light. Hence, the phase velocity will always be superluminal but will approach c as the speed of our particle approaches c. For slow-moving particles, we get astonishing values for the phase velocity, like more than a hundred times the speed of light for the electron we looked at in our previous post. That’s weird but it does not contradict relativity: if it helps, one can think of the wave packet as a modulation of an incredibly fast-moving ‘carrier wave’.

Is any of this relevant? Does it help you to imagine what the electron actually is? Or what that matter wave actually is? Probably not. You will still wonder: What does it look like? What is it in reality?

That’s hard to say. If the experiment above does not convey any ‘reality’ according to you, then perhaps the illustration below will help. It’s one I have used in another post too (An Easy Piece: Introducing Quantum Mechanics and the Wave Function). I took it from Wikipedia, and it represents “the (likely) space in which a single electron on the 5d atomic orbital of an atom would be found.” The solid body shows the places where the electron’s probability density (so that’s the squared modulus of the probability amplitude) is above a certain value – so it’s basically the area where the likelihood of finding the electron is higher than elsewhere. The hue on the colored surface shows the complex phase of the wave function.

So… Does this help?

You will wonder why the shape is so complicated (but it’s beautiful, isn’t it?). That has to do with quantum-mechanical calculations involving quantum-mechanical quantities such as spin and other machinery which I don’t master (yet). I think there’s always a bit of a gap between ‘first principles’ in physics and the ‘model’ of a real-life situation (like a real-life electron in this case), but it’s surely the case in quantum mechanics! That being said, when looking at the illustration above, you should be aware of the fact that you are actually looking at a 3D representation of the wave function of an electron in orbit.

Indeed, wave functions of electrons in orbit are somewhat less random than – let’s say – the wave function of one of those baryon resonances I mentioned above. As mentioned in my Not So Easy Piece, in which I introduced the Schrödinger equation (i.e. one of my previous posts), they are solutions of a second-order partial differential equation – known as the Schrödinger wave equation indeed – which basically incorporates one key condition: these solutions – which are (atomic or molecular) ‘orbitals’ indeed – have to correspond to so-called stationary states or standing waves. Now what’s the ‘reality’ of that?

The illustration below comes from Wikipedia once again (Wikipedia is an incredible resource for autodidacts like me indeed) and so you can check the article (on stationary states) for more details if needed. Let me just summarize the basics:

1. A stationary state is called stationary because the system remains in the same ‘state’ independent of time. That does not mean the wave function is stationary. On the contrary, the wave function changes as a function of both time and space – Ψ = Ψ(x, t), remember? – but it represents a so-called standing wave.
2. Each of these possible states corresponds to an energy state, which is given through the de Broglie relation: E = hf. So the energy of the state is proportional to the oscillation frequency of the (standing) wave, and Planck’s constant is the factor of proportionality. From a formal point of view, that’s actually the one and only condition we impose on the ‘system’, and so it immediately yields the so-called time-independent Schrödinger equation, which I briefly explained in the above-mentioned Not So Easy Piece (but I will not write it down here because it would only confuse you even more). Just look at these so-called harmonic oscillators below:

A and B represent a harmonic oscillator in classical mechanics: a ball with some mass m (mass is a measure for inertia, remember?) on a spring, oscillating back and forth. In case you’d wonder what the difference is between the two: both the amplitude and the frequency of the movement are different. 🙂 A spring and a ball?

It represents a simple system. A harmonic oscillation is basically a resonance phenomenon: springs, electric circuits,… anything that swings, moves or oscillates (including large-scale things such as bridges and what have you – Feynman even discusses resonance phenomena in the atmosphere in his 1965 Lectures (Vol. I-23)) has some natural frequency ω0, also referred to as the resonance frequency, at which it oscillates naturally indeed: that means it requires (relatively) little energy to keep it going. How much energy it takes exactly to keep them going depends on the frictional forces involved: because the springs in A and B keep going, there’s obviously no friction involved at all. [In physics, we say there is no damping.] However, both springs do have a different k (that’s the key characteristic of a spring in Hooke’s Law, which describes how springs work), and the mass m of the ball might be different as well. Now, one can show that the period of this ‘natural’ movement will be equal to t0 = 2π/ω0 = 2π·√(m/k), or that ω0 = √(k/m). So we’ve got an A and a B situation which differ in k and m. Let’s go to the so-called quantum oscillator, illustrations C to H.
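To make the t0 = 2π·√(m/k) formula concrete, here is a tiny Python sketch of the A-versus-B comparison. The numbers for k and m are made up (the illustration doesn’t give any), but the point stands: a stiffer spring or a lighter ball means a higher natural frequency.

```python
import math

# Two hypothetical spring-ball systems, like the A and B cases above:
# same physics, different spring constant k, hence a different natural frequency.
systems = {"A": (2.0, 0.5), "B": (8.0, 0.5)}  # (k in N/m, m in kg) - made-up values

for name, (k, m) in systems.items():
    omega0 = math.sqrt(k / m)                # natural (resonance) angular frequency
    t0 = 2 * math.pi * math.sqrt(m / k)      # period t0 = 2*pi / omega0
    print(f"{name}: omega0 = {omega0:.2f} rad/s, period t0 = {t0:.2f} s")
```

Quadrupling k (from A to B) doubles ω0 and halves the period, since both scale with the square root of k/m.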

C to H in the illustration are six possible solutions to the Schrödinger equation for this situation. The horizontal axis is position, and time runs through the animation – but we could switch the two independent variables easily: as I said a number of times already, time and space are interchangeable in the argument representing the phase (θ) of a wave, provided we use the right units (e.g. light-seconds for distance and seconds for time): θ = ωt – kx. Apart from the nice animation, the other great thing about these illustrations – and the main difference with resonance frequencies in the classical world – is that they show both the real part (blue) as well as the imaginary part (red) of the wave function as a function of space (plotted along the x-axis) and time (the animation).
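For the mathematically inclined: those stationary states can actually be written down in closed form, as ψn(x)·e^(−iEn·t), with ψn built from Hermite polynomials. The sketch below – my own illustration, in natural units ħ = m = ω = 1 (my choice, not the animation’s) – constructs the first few states and checks two things: the total probability is 1, and the probability density does not change with time, which is exactly what makes the state ‘stationary’.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

def psi_n(n, x):
    """Spatial part of the n-th oscillator eigenstate (hbar = m = omega = 1)."""
    coeffs = [0.0] * n + [1.0]  # selects the n-th physicists' Hermite polynomial
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
    return norm * np.exp(-x**2 / 2) * hermval(x, coeffs)

x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]
for n in (0, 1, 2):
    E_n = n + 0.5  # energy levels E_n = (n + 1/2), in units of hbar*omega
    Psi_t0 = psi_n(n, x) * np.exp(-1j * E_n * 0.0)
    Psi_t1 = psi_n(n, x) * np.exp(-1j * E_n * 1.3)  # some arbitrary later instant
    total = np.sum(np.abs(Psi_t1)**2) * dx           # should integrate to 1
    drift = np.max(np.abs(np.abs(Psi_t1)**2 - np.abs(Psi_t0)**2))
    print(f"n={n}: E={E_n}, total probability = {total:.4f}, density change = {drift:.1e}")
```

The real and imaginary parts (the blue and red curves in the animation) rotate into each other as time goes by, but the squared modulus – the probability density – stays put.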

Is this ‘real’ enough? If it isn’t, I know of no way to make it any more ‘real’. Indeed, that’s key to understanding the nature of matter waves: we have to come to terms with the idea that these strange fluctuating mathematical quantities actually represent something. What? Well… The spooky thing that leads to the above-mentioned experimental results: electron diffraction and interference.

Let’s explore this quantum oscillator some more. Another key difference between natural frequencies in atomic physics (so the atomic scale) and resonance phenomena in ‘the big world’ is that there is more than one possibility: each of the six possible states above corresponds to a solution and an energy state indeed, which is given through the de Broglie relation: E = hf. However, in order to be fully complete, I have to mention that, while G and H are also solutions to the wave equation, they are actually not stationary states. The illustration below – which I took from the same Wikipedia article on stationary states – shows why. For stationary states, all observable properties of the state (such as the probability that the particle is at location x) are constant. For non-stationary states, the probabilities themselves fluctuate as a function of time (and space, obviously), so the observable properties of the system are not constant. These solutions are solutions to the time-dependent Schrödinger equation and, hence, they are, obviously, time-dependent solutions.

We can find these time-dependent solutions by superimposing two stationary states, so we have a new wave function ΨN which is the sum of two others: ΨN = Ψ1 + Ψ2. [If you include the normalization factor (as you should, to make sure all probabilities add up to 1), it’s actually ΨN = (1/√2)(Ψ1 + Ψ2).] So G and H above still represent a state of a quantum harmonic oscillator, but they are not standing waves – and, strictly speaking, such a superposition no longer has one definite energy level: it mixes the energies of the two states it combines.
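A quick numerical check makes the difference tangible. The sketch below – again my own illustration, in units ħ = m = ω = 1 – superimposes the two lowest oscillator states. For either state on its own, the expectation value of the position sits at zero for all time; for the normalized sum ΨN = (1/√2)(Ψ0 + Ψ1), it sloshes back and forth at the beat frequency E1 − E0:

```python
import numpy as np

# The two lowest oscillator eigenfunctions (hbar = m = omega = 1), written out explicitly:
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                    # ground state, E0 = 1/2
psi1 = np.pi**-0.25 * np.sqrt(2) * x * np.exp(-x**2 / 2)   # first excited state, E1 = 3/2

for t in (0.0, np.pi / 2, np.pi):
    Psi0 = psi0 * np.exp(-0.5j * t)        # stationary phase factors e^(-i*E*t)
    Psi1 = psi1 * np.exp(-1.5j * t)
    PsiN = (Psi0 + Psi1) / np.sqrt(2)      # normalized superposition, like G and H
    mean_x = np.sum(x * np.abs(PsiN)**2) * dx
    print(f"t = {t:.2f}: <x> = {mean_x:+.3f}")
    # <x> swings between +1/sqrt(2) and -1/sqrt(2) as cos((E1 - E0)*t)
```

So the probability density of the superposition genuinely moves, which is why G and H are time-dependent solutions rather than standing waves.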

Let’s go back to our electron traveling in a more or less straight path. What’s the shape of the solution for that one? It could be anything. Well… Almost anything. As said, the only condition we can impose is that the envelope of the wave packet – its ‘general’ shape, so to say – should not change. That’s because we should not have dispersion – as illustrated below. [Note that this illustration only represents the real or the imaginary part – not both – but you get the idea.]

That being said, if we exclude dispersion (because a real-life electron traveling in a straight line doesn’t just disappear – as dispersive wave packets do), then, inside of that envelope, the weirdest things are possible – in theory, that is. Indeed, Nature does not care much about our Fourier transforms. So the example below, which shows a theoretical wave packet (again, the real or imaginary part only) based on some theoretical distribution of the wave numbers of the (infinite number of) component waves that make up the wave packet, may or may not represent our real-life electron. However, if our electron has any resemblance to real life, then I would expect it to not be as well-behaved as the theoretical one that’s shown below.

The shape above is usually referred to as a Gaussian wave packet, because of the nice normal (Gaussian) probability density functions that are associated with it. But we can also imagine a ‘square’ wave packet: a somewhat weird shape but – in terms of the math involved – as consistent as the smooth Gaussian wave packet, in the sense that we can demonstrate that the wave packet is made up of an infinite number of waves with an angular frequency ω that is linearly related to their wave number k, so the dispersion relation is ω = ak + b. [Remember we need to impose that condition to ensure that our wave packet will not dissipate (or disperse or disappear – whatever term you prefer).] That’s shown below: a Fourier analysis of a square wave.
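For what it’s worth, that Fourier analysis of a square wave is easy to reproduce yourself: a square wave is the sum of its odd harmonics sin(nx)/n, scaled by 4/π. The little Python sketch below shows the approximation getting better as we add terms – at least away from the jumps, where the well-known Gibbs overshoot never quite goes away:

```python
import numpy as np

# Fourier synthesis of a square wave: square(x) ~ (4/pi) * sum over odd n of sin(n*x)/n
x = np.linspace(-np.pi, np.pi, 2001)
target = np.sign(np.sin(x))              # the ideal square wave

for n_terms in (1, 5, 50):
    approx = np.zeros_like(x)
    for j in range(n_terms):
        n = 2 * j + 1                    # odd harmonics only: 1, 3, 5, ...
        approx += (4 / np.pi) * np.sin(n * x) / n
    # measure the error away from the discontinuities (where Gibbs ringing lives)
    mask = np.abs(np.sin(x)) > 0.1
    err = np.max(np.abs(approx - target)[mask])
    print(f"{n_terms} term(s): max error away from the jumps = {err:.3f}")
```

With one term we just have a sine; with fifty, the sum hugs the flat tops of the square wave quite closely.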

While we can construct many theoretical shapes of wave packets that respect the ‘no dispersion!’ condition, we cannot know which one will actually represent that electron we’re trying to visualize. Worse, if push comes to shove, we don’t know if these matter waves (so these wave packets) actually consist of component waves (or time-independent stationary states or whatever).

[…] OK. Let me finally admit it: while I am trying to explain the ‘reality’ of these matter waves to you, we actually don’t know how real these matter waves are. We cannot ‘see’ or ‘touch’ them indeed. All that we know is that (i) assuming their existence, and (ii) assuming these matter waves are more or less well-behaved (e.g. that actual particles will be represented by a composite wave characterized by a linear dispersion relation between the angular frequencies and the wave numbers of its (theoretical) component waves), allows us to do all that arithmetic with these (complex-valued) probability amplitudes. More importantly, all that arithmetic with these complex numbers actually yields (real-valued) probabilities that are consistent with the probabilities we obtain through repeated experiments. So that’s what’s real and ‘not so real’, I’d say.

Indeed, the bottom line is that we do not know what goes on inside that envelope. Worse, according to the commonly accepted Copenhagen interpretation of the Uncertainty Principle (and tons of experiments have been done to try to overthrow that interpretation – all to no avail), we never will.