
# Quantum math: garbage in, garbage out?

This post is basically a continuation of my previous one but – as you can see from its title – it is much more aggressive in its language, as I was inspired by a very thoughtful comment on my previous post. Another advantage is that it avoids all of the math. 🙂 It's… Well… I admit it: it's just a rant. 🙂 [Those who wouldn't appreciate the casual style of what follows can download my paper on it – but that's much longer and also has a lot more math in it, so it's a much harder read than this 'rant'.]

My previous post was actually triggered by an attempt to re-read Feynman's Lectures on Quantum Mechanics, but in reverse order this time: from the last chapter to the first. [In case you doubt it: I did follow the correct logical order when working my way through them for the first time because… Well… There is no other way to get through them. 🙂 ] But then I was looking at Chapter 20. It's a Lecture on quantum-mechanical operators – a topic which, in other textbooks, is usually tackled earlier on. When re-reading it, I realized why people quickly turn away from the topic of physics: it's a lot of mathematical formulas which are supposed to reflect reality but, in practice, few – if any – of the mathematical concepts are actually explained. Not in the first chapters of a textbook, not in its middle ones, and… Well… Nowhere, really. Why? Well… To be blunt: I think most physicists themselves don't really understand what they're talking about. In fact, as I have pointed out a couple of times already, Feynman himself admits as much:

"Atomic behavior appears peculiar and mysterious to everyone – both to the novice and to the experienced physicist. *Even the experts do not understand it the way they would like to*."

So… Well… If you're in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, here you have it: if you don't understand what physicists are trying to tell you, don't worry about it, because they don't really understand it themselves. 🙂

Take the example of a *physical state*, which is represented by a *state vector* that we can combine and re-combine using the properties of an abstract *Hilbert space*. Frankly, I think the term is very misleading, because it doesn't describe an *actual* physical state. Why? Well… If we look at this so-called physical state from another angle, then we need to *transform* it using a complicated set of transformation matrices. You'll say: that's what we need to do when going from one reference frame to another in classical mechanics as well, isn't it?

Well… No. In classical mechanics, we describe the physics using geometric vectors in three dimensions and, therefore, the *base* of our reference frame doesn't matter: because we're using *real* vectors (such as the electric or magnetic field vectors **E** and **B**), our orientation *vis-à-vis* the object – the *line of sight*, so to speak – doesn't matter.

In contrast, in quantum mechanics, it does: Schrödinger's equation – and the wavefunction – has only two degrees of freedom, so to speak: its so-called real and its imaginary dimension. Worse, physicists refuse to give those two dimensions any *geometric* interpretation. Why? I don't know. As I showed in my previous posts, it would be easy enough, right? We know the two dimensions must be perpendicular to each other, so we just need to decide if *both* of them are going to be perpendicular to our line of sight. That's it. We've only got two possibilities here which – in my humble view – explain why the matter-wave is different from an electromagnetic wave.

I actually can't quite believe the craziness when it comes to interpreting the wavefunction: we get everything we'd want to know about our particle through these operators (momentum, energy, position, and whatever else you'd need to know), but mainstream physicists still tell us that the wavefunction is, somehow, not representing anything real. It might be because of that weird 720° symmetry – which, as far as I am concerned, confirms that those state vectors are not the right approach: you can't represent a complex, asymmetrical shape by a 'flat' mathematical object!

*Huh?* Yes. The wavefunction is a 'flat' concept: it has two dimensions only, unlike the *real* vectors physicists use to describe electromagnetic waves (which we may interpret as the wavefunction of the photon). Those have three dimensions, just like the mathematical space we project on events. Because the wavefunction is flat (think of a rotating disk), we have those cumbersome transformation matrices: each time we shift position *vis-à-vis* the object we're looking at (*das Ding an sich*, as Kant would call it), we need to change our description of it. And our description of it – the wavefunction – is all we have, so that's *our* reality. However, because that reality changes with our line of sight, physicists keep saying the wavefunction (or *das Ding an sich* itself) is, somehow, not real.

Frankly, I do think physicists should take a basic philosophy course: you can't describe what goes on in three-dimensional space if you're going to use flat (two-dimensional) concepts, because the objects we're trying to describe (e.g. non-symmetrical electron orbitals) aren't flat. Let me quote one of Feynman's famous lines on philosophers: "These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depth of the problem." (Feynman's Lectures, Vol. I, Chapter 16)

Now, I *love* Feynman's Lectures but… Well… I've gone through them a couple of times now, so I do think I have an appreciation of the subtleties and depth of the problem. And I tend to agree with some of the smarter philosophers: if you're going to use 'flat' mathematical objects to describe three- or four-dimensional reality, then such an approach will only get you where we are right now, and that's a lot of mathematical *mumbo-jumbo* for the poor uninitiated. *Consistent* mumbo-jumbo, for sure, but mumbo-jumbo nevertheless. 🙂 So, yes, I do think we need to re-invent quantum math. 🙂 The description may look more complicated, but it would make more sense.

I mean… If physicists themselves have been having discussions on the reality of the wavefunction for almost a hundred years now (Schrödinger published his equation in 1926), then… Well… Then the physicists have a problem. Not the philosophers. 🙂 As to what that new description might look like, see my papers on viXra.org. I firmly believe it can be done. This is just a hobby of mine, but… Well… That's where my attention will go over the coming years. 🙂 Perhaps quaternions are the answer but… Well… I don't think so – for reasons I'll explain later. 🙂

**Post scriptum**: There are many nice videos on Dirac's belt trick or, more generally, on 720° symmetries, but this links to one I particularly like. It clearly shows that the 720° symmetry requires, in effect, a special relation between the observer and the object that is being observed. It is, effectively, as if there is a leather belt between them or, in this case, an arm between the glass and the person who is holding the glass. So it's not like we are walking around the object (think of the glass of water) and making a full turn around it, so as to get back to where we were. No. *We are turning it around by 360°!* That's a very different thing than just looking at it, walking around it, and then looking at it again. That explains the 720° symmetry: we need to turn it around twice to get it back to its original state. So… Well… The description is more about us and what we do with the object than about the object itself. That's why I think the quantum-mechanical description is defective.

# Wavefunctions, perspectives, reference frames, representations and symmetries

Ouff! This title is quite a mouthful, isn't it? 🙂 So… What's the topic of the day? Well… In our previous posts, we developed a few key ideas in regard to a possible physical interpretation of the (elementary) wavefunction. It's been an interesting excursion, and I summarized it in another pre-publication paper on the open arXiv.org site.

In my humble view, one of the toughest issues to deal with when thinking about geometric (or *physical*) interpretations of the wavefunction is the fact that a wavefunction does not seem to obey the classical 360° symmetry in space. In this post, I want to muse a bit about this and show that… Well… It does and it doesn't. It's got to do with what happens when you change from one representational base (or representation, *tout court*) to another, which is… Well… Like changing the reference frame but, at the same time, also *more* than just a change of the reference frame – and that explains the weird stuff (like that 720° symmetry of the amplitudes for spin-1/2 particles, for example).
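That 720° periodicity can be made concrete with a few lines of code. This is just a minimal numerical sketch, assuming the standard *e*^{iθ/2} phase factor that a spin-1/2 amplitude picks up under a rotation by θ (the function name is mine, not standard):

```python
import cmath
import math

def spin_half_phase(rotation_deg):
    """Phase factor picked up by a spin-1/2 amplitude under a
    rotation by rotation_deg degrees about the z-axis: exp(i*angle/2)."""
    return cmath.exp(1j * math.radians(rotation_deg) / 2)

# A 360-degree rotation flips the sign of the amplitude...
print(spin_half_phase(360))   # ≈ -1
# ...and only a full 720-degree rotation brings it back to +1.
print(spin_half_phase(720))   # ≈ +1
# Probabilities (squared magnitudes) are blind to that sign flip.
print(abs(spin_half_phase(360)) ** 2)   # ≈ 1.0
```

So the probabilities respect the usual 360° symmetry, while the amplitudes do not: exactly the tension this post muses about.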

I should warn you before you start reading: I'll basically just pick up some statements from my paper (and previous posts) and develop some more thoughts on them. As a result, this post may not be very well structured. Hence, you may want to read the mentioned paper first.

### The reality of directions

*Huh?* The *reality* of directions? Yes. I warned you. This post may cause brain damage. 🙂 The whole argument revolves around a *thought* experiment – but one whose results have been verified in zillions of experiments in university student labs so… Well… We do *not* doubt the results and, therefore, we do not doubt the basic mathematical results: we just want to try to *understand* them better.

So what is the set-up? Well… In the illustration below (Feynman, III, 6-3), Feynman compares the physics of two situations involving rather special beam splitters. Feynman calls them modified or 'improved' Stern-Gerlach apparatuses. The apparatus basically splits and then re-combines the two new beams along the *z*-axis. It is also possible to block one of the beams, so we filter out only particles with their spin *up* or, alternatively, with their spin *down*. Spin (or angular momentum, or the magnetic moment) as measured along the *z*-axis, of course – and I should immediately add: we're talking **the *z*-axis of the apparatus** here.

The two situations involve a different *relative* orientation of the apparatuses: in (a), the angle is 0°, while in (b) we have a (right-handed) rotation of 90° about the *z*-axis. He then proves – using geometry and logic only – that the probabilities and, therefore, **the magnitudes of the amplitudes** (denoted by *C*₊ and *C*₋ and *C′*₊ and *C′*₋ in the *S* and *T* representation respectively) **must be the same, but the amplitudes *must* have different phases**, noting – in his typical style, mixing academic and colloquial language – that "there must be some way for a particle to tell that it has turned a corner in (b)."

The various interpretations of what actually *happens* here may shed some light on the heated discussions on the *reality* of the wavefunction – and of quantum states. In fact, I should note that Feynman's argument revolves around quantum states. To be precise, the analysis is focused on two-state systems only, and the wavefunction – which captures a continuum of possible states, so to speak – is introduced only later. However, we may look at the amplitude for a particle to be in the *up*- or *down*-state as a wavefunction and, therefore (but do note that's my humble opinion once more), the analysis is actually *not* all that different.

We *know*, from theory *and* experiment, that the amplitudes *are* different. For example, for the given difference in the *relative* orientation of the two apparatuses (90°, so θ = π/2), we *know* that the amplitudes are given by *C′*₊ = *e*^{i·θ/2}·*C*₊ = *e*^{i·π/4}·*C*₊ and *C′*₋ = *e*^{−i·θ/2}·*C*₋ = *e*^{−i·π/4}·*C*₋ respectively (the amplitude to go from the down to the up state, or vice versa, is zero). Hence, yes, "*we* **know** – *not* the particle, Mr. Feynman! – *that, in (b), the electron has, effectively, turned a corner*."

The more subtle question here is the following: is the *reality* of the particle in the two set-ups the same? Feynman, of course, stays away from such philosophical questions. He just notes that, while "(a) and (b) are different", "the probabilities are the same". He refrains from making any statement on the particle itself: is it or is it *not* the same? The common-sense answer is obvious: of course, it is! The particle is the same, right? In (b), it just took a turn – so it is just going in some other direction. That's all.

However, common sense is seldom a good guide when thinking about quantum-mechanical realities. Also, from a more philosophical point of view, one may argue that the reality of the particle is *not* the same: something might – or *must* – have *happened* to the electron because, when everything is said and done, the particle *did* take a turn in (b). It did *not* in (a). [Note that the difference between 'might' and 'must' in the previous phrase may well sum up the difference between a deterministic and a non-deterministic world view but… Well… This discussion is going to be way too philosophical already, so let's refrain from inserting new language here.]
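As an aside for the numerically minded: the transformation rule quoted earlier – same magnitudes, shifted phases – can be checked in a few lines. The amplitude values below are hypothetical, chosen only to exercise the relation:

```python
import cmath
import math

theta = math.pi / 2   # 90-degree relative rotation of T with respect to S

# Hypothetical amplitudes in the S representation (illustrative values only).
C_up, C_down = 0.6 + 0.8j, 0.8 - 0.6j

# The transformation for a rotation about the z-axis:
Cp_up = cmath.exp(+1j * theta / 2) * C_up      # e^{+i·π/4}·C+
Cp_down = cmath.exp(-1j * theta / 2) * C_down  # e^{−i·π/4}·C−

# The magnitudes (and hence the probabilities) do not change...
print(abs(Cp_up) - abs(C_up))       # ≈ 0
print(abs(Cp_down) - abs(C_down))   # ≈ 0

# ...but the relative phase between the up- and down-amplitudes shifts by θ.
shift = cmath.phase(Cp_up / Cp_down) - cmath.phase(C_up / C_down)
print(shift)   # ≈ π/2
```

So the particle has, indeed, a way "to tell that it has turned a corner": the information sits in the phases, not in the probabilities.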

Let us think this through. The (a) and (b) set-ups are, *obviously*, different but… *Wait a minute…* Nothing is obvious in quantum mechanics, right? How can we *experimentally confirm* that they are different?

*Huh?* I must be joking, right? You can *see* they are different, right? No. I am not joking. In physics, two things are different if we get different *measurement* results. [That's a bit of a simplified version of the ontological point of view of mainstream physicists, but you will have to admit I am not far off.] So… Well… We can't see those amplitudes and so… Well… If we *measure* the same thing – same *probabilities*, remember? – why are they different? Think of this: if we look at the two beam splitters as one single tube (an *ST* tube, we might say), then all we did in (b) was bend the tube. Pursuing the logic that says our particle is still the same *even when it takes a turn*, we could say the tube is still the same, despite us having wrenched it over a 90° corner.

Now, I am sure you think I've just gone nuts, but just try to stick with me a little bit longer. Feynman actually acknowledges the same: we need to *experimentally prove* that (a) and (b) are different. He does so by getting **a third apparatus** (*U*) in, as shown below, **whose *relative* orientation to *T* is the same in both (a) and (b)**, so there is no difference there. Now, the axis of *U* is not the *z*-axis: it is the *x*-axis in (a), and the *y*-axis in (b). So what? Well… I will quote Feynman here – not (only) because his words are more important than mine but also because every word matters here:

"The two apparatuses in (a) and (b) are, in fact, different, as we can see in the following way. Suppose that we put an apparatus in front of *S* which produces a pure +*x* state. Such particles would be split into +*z* and −*z* beams in *S*, but the two beams would be recombined to give a +*x* state again at P₁ – the exit of *S*. The same thing happens again in *T*. If we follow *T* by a third apparatus *U*, whose axis is in the +*x* direction, as shown in (a), all the particles would go into the + beam of *U*. Now imagine what happens if *T* and *U* are swung around *together* by 90° to the positions shown in (b). Again, the *T* apparatus puts out just what it takes in, so the particles that enter *U* are in a +*x* state *with respect to S*. But *U* now analyzes for a +*y* state (with respect to *S*), which is different. By symmetry, we would now expect only one-half of the particles to get through."

I should note that (b) shows the *U* apparatus wide open so… Well… I must assume that's a mistake (and should alert the current editors of the *Lectures* to it): Feynman's narrative tells us we should also imagine it with the *minus* channel shut. In *that* case, it should, effectively, filter approximately half of the particles out, while they all get through in (a). So that's a *measurement* result which shows that the direction, as *we* see it, makes a difference.
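Feynman's "one-half of the particles" claim is easy to verify with the standard textbook spin-1/2 states. The state vectors below are the usual ones (written in the *z*-basis of the *S* apparatus) and are my own illustrative sketch, not taken from the post:

```python
from math import sqrt

# Spin-1/2 states as (amplitude-up, amplitude-down) pairs in the z-basis.
plus_x = (1 / sqrt(2), 1 / sqrt(2))    # +x eigenstate
plus_y = (1 / sqrt(2), 1j / sqrt(2))   # +y eigenstate

def prob(bra, ket):
    """|<bra|ket>|^2, with the complex conjugate taken on the bra side."""
    amp = bra[0].conjugate() * ket[0] + bra[1].conjugate() * ket[1]
    return abs(amp) ** 2

# In (a), U filters for +x and the beam is +x: everything passes.
print(prob(plus_x, plus_x))   # ≈ 1.0
# In (b), U has been swung around: relative to S it now filters for +y,
# and only half of the +x beam gets through.
print(prob(plus_y, plus_x))   # ≈ 0.5
```

So the wrenched tube does produce a different measurement result, exactly as the narrative above says.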

Now, Feynman would be very angry with me – because, as mentioned, he hates philosophers – but I'd say: this experiment proves that a direction is something real. Of course, the next philosophical question then is: what *is* a direction? I could answer this by pointing to the experiment above: a direction is something that alters the probabilities between the *STU* tube as set up in (a) versus the *STU* tube in (b). In fact – but, I admit, that would be pretty ridiculous – we could use the varying probabilities as we wrench this tube over varying angles to *define* an angle! But… Well… While that's a perfectly logical argument, I agree it doesn't sound very sensible.

OK. Next step. What follows may cause brain damage. 🙂 Please abandon all pre-conceived notions and definitions for a while and think through the following logic.

You know this stuff is about transformations of amplitudes (or wavefunctions), right? [And you also want to hear about that special 720° symmetry, right? No worries. We'll get there.] So the questions all revolve around this: what happens to amplitudes (or the wavefunction) when we go from one reference frame – or *representation*, as it's referred to in quantum mechanics – to another?

Well… I should immediately correct myself here: a reference frame and a representation are two different things. They are *related* but… Well… Different… *Quite* different. Not same-same but different. 🙂 I'll explain why later. Let's go for it.

Before talking representations, let us first think about what we really *mean* by changing the *reference frame*. To change it, we first need to answer the question: what *is* our reference frame? It is a mathematical notion, of course, but then it is also more than that: it is *our* reference frame. We use it to make measurements. That's obvious, you'll say, but let me make a more formal statement here:

**The reference frame is given by (1) the geometry** (or the *shape*, if that sounds easier to you) **of the measurement apparatus** (so that's the experimental set-up here) **and (2) our perspective of it.**

If we wanted to sound academic, we might refer to Kant and other philosophers here, who told us – 230 years ago – that the mathematical idea of a three-dimensional reference frame is grounded in our intuitive notions of up and down, and left and right. [If you doubt this, think about the necessity of the various right-hand rules and conventions that we cannot do without in math and in physics.] But we do not want to sound academic here. Let us be practical. Just think about the following. The apparatus gives us two *directions*:

(1) The *up* direction, which *we associate* with the positive direction of the *z*-axis, and

(2) the direction of travel of our particle, which *we associate* with the positive direction of the *y*-axis.

Now, if we have two axes, then the third axis (the *x*-axis) will be given by the right-hand rule, right? So we may say the apparatus gives us the reference frame. Full stop. So… Well… Everything is relative? Is this reference frame relative? Are directions relative? That's what you've been told, but think about this: relative *to what?* Here is where the object meets the subject. What's relative? What's absolute? Frankly, I've started to think that, in this particular situation, we should, perhaps, not use these two terms. I am *not* saying that our *observation* of what *physically* happens here gives these two directions any *absolute* character but… Well… You will have to admit they are more than just some mathematical construct: when everything is said and done, we will have to admit that these two directions are *real*, because… Well… They're part of the *reality* that we are observing, right? And the third one… Well… That's given by our perspective – by our right-hand rule, which is… Well… *Our* right-hand rule.

Of course, now you'll say: if you think that 'relative' and 'absolute' are ambiguous terms and that we, therefore, may want to avoid them a bit more, then 'real' and its opposite (unreal?) are ambiguous terms too, right? Well… Maybe. What language would *you* suggest? 🙂 Just stick to the story for a while. I am not done yet. So… Yes… What *is* their *reality*? Let's think about that in the next section.

### Perspectives, reference frames and symmetries

You've done some mental exercises already as you've been working your way through the previous section, but you'll need to do plenty more. In fact, they may become physical exercises too: when I first thought about these things (symmetries and, more importantly, *a*symmetries in space), I found myself walking around the table with some asymmetrical everyday objects and papers with arrows and clocks and other stuff on them – effectively analyzing what right-hand screw, thumb or grip rules actually *mean*. 🙂

So… Well… **I want you to distinguish – just for a while – between the notion of a reference frame (think of the *x*-*y*-*z* reference frame that comes with the apparatus) and your *perspective* on it.** What's our perspective on it? Well… You may be looking from the top, or from the side and, if from the side, from the left-hand side or the right-hand side – which, if you think about it, you can only *define* in terms of the various positive and negative directions of the various axes. 🙂 If you think this is getting ridiculous… Well… Don't. Feynman himself doesn't think this is ridiculous, because he starts his own "long and abstract side tour" on transformations with a very simple explanation of how the top and side *views* of the apparatus are related to the *axes* (i.e. the reference frame) that come with it. You don't believe me? This is the *very* first illustration of his *Lecture* on this:

He uses it to explain the apparatus (which we don't do here because you're supposed to already know how these (modified or improved) Stern-Gerlach apparatuses work). So let's continue our story. Suppose that we are looking in the *positive* *y*-direction – so that's the direction in which our particle is moving – then we might imagine what it would look like when *we* make a 180° turn and look at the situation from the other side, so to speak. We do not change the reference frame (i.e. the *orientation*) of the apparatus here: we just change our *perspective* on it. Instead of seeing particles going *away from us*, into the apparatus, we now see particles coming *towards* us, out of the apparatus.

What happens – but that's not scientific language, of course – is that left becomes right, and right becomes left. Top is still top, and bottom is bottom. We are looking now in the *negative* *y*-direction, and the positive direction of the *x*-axis – which pointed right when we were looking in the positive *y*-direction – now points left. I see you nodding your head now – because you've heard about parity inversions, mirror symmetries and what have you – and I hear you say: "That's the mirror world, right?"

No. It is not. I wrote about this in another post: the world in the mirror is the world in the mirror. We don't get a mirror image of an object by going around it and looking at its back side. I can't dwell too much on this (just check that post, and another one that talks about the same), but don't try to connect it to the discussions on symmetry-breaking and what have you. Just stick to *this* story, which is about transformations of amplitudes (or wavefunctions). [If you really want to know – but I know this sounds counterintuitive – the mirror world doesn't really switch left for right. Your reflection doesn't do a 180-degree turn: it is just reversed front to back, with no rotation at all. It's only your brain which *mentally* adds (or subtracts) the 180-degree turn that you assume must have happened from the observed front-to-back reversal. So the left-to-right reversal is only *apparent*. It's a common misconception, and… Well… I'll let you figure this out yourself. I need to move on.] Just note the following:

- The *xyz* reference frame remains a valid right-handed reference frame. Of course it does: it comes with our beam splitter, and we can't change its *reality*, right? We're just looking at it from another angle. Our *perspective* on it has changed.
- However, if we think of the real and imaginary part of the wavefunction describing the electrons that are going through our apparatus as perpendicular oscillations (as shown below) – a cosine and sine function respectively – then our change in perspective *might*, effectively, mess up our convention for measuring angles.

I am not saying it *does*. Not now, at least. I am just saying it *might*. It depends on the plane of the oscillation, as I'll explain in a few moments. Think of this: we measure angles *counter*clockwise, right? As shown below… But… Well… If the thing below were some funny clock going backwards – you've surely seen them in a bar or so, right? – then… Well… If it were transparent, and you went around it, you'd see it going… Yes… Clockwise. 🙂 [This should remind you of a discussion on real versus pseudo-vectors, or polar versus axial vectors, but… Well… We don't want to complicate the story here.]

Now, *if* we assume this clock represents something real – and, of course, **I am thinking of the elementary wavefunction *e*^{iθ} = cosθ + *i*·sinθ now** – then… Well… Then it will look different when we go around it. When going around our backwards clock above and looking at it from… Well… The back, we'd describe it, naively, as… Well…

*Think! What's your answer? Give me the formula!* 🙂

[…]

We'd see it as *e*^{−iθ} = cos(−θ) + *i*·sin(−θ) = cosθ − *i*·sinθ, right? The hand of our clock now goes clockwise, so that's the *opposite* direction of our convention for measuring angles. Hence, instead of *e*^{iθ}, we write *e*^{−iθ}, right? So that's the complex conjugate. So we've got a different *image* of the same thing here. *Not* good. *Not good at all.*
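A quick check of that claim, assuming nothing more than the standard complex exponential:

```python
import cmath

theta = 0.7   # some arbitrary angle

z = cmath.exp(1j * theta)           # e^{iθ} = cosθ + i·sinθ
z_flipped = cmath.exp(-1j * theta)  # e^{−iθ}: the hand now turns clockwise

# Seen from the back, the clock reads as the complex conjugate:
print(abs(z_flipped - z.conjugate()))   # ≈ 0
# The real (cosine) part is unchanged...
print(z.real - z_flipped.real)          # ≈ 0
# ...while the imaginary (sine) part flips sign.
print(z.imag + z_flipped.imag)          # ≈ 0
```

Same cosine, opposite sine: exactly the "different image of the same thing" described above.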

You'll say: *so what?* We can fix this thing easily, right? You don't need the convention for measuring angles or for the imaginary unit (*i*) here. This particle is moving, right? So if you'd want to look at the elementary wavefunction as some sort of circularly polarized beam (which, I admit, is very much what I would like to do, but its polarization is rather particular, as I'll explain in a minute), then you just need to define *left- and right-handed angles* as per the standard right-hand screw rule (illustrated below). *To hell with the counterclockwise convention for measuring angles!*

You are right. We *could* use the right-hand rule more consistently. We could, in fact, use it as an *alternative* convention for measuring angles: we could, effectively, measure them clockwise *or* counterclockwise depending on the direction of our particle. But… Well… The fact is: *we don't*. We do *not* use that alternative convention when we talk about the wavefunction. Physicists do use the *counterclockwise* convention **all of the time** and just jot down these complex exponential functions without realizing that, *if they are to represent something real*, our *perspective* on the reference frame matters. To put it differently, the *direction* in which we are looking at things matters! Hence, the direction is *not*… Well… I am tempted to say… *Not* relative at all but then… Well… We wanted to avoid that term, right? 🙂

[…]

I guess that, by now, your brain may have suffered from various short-circuits. If not, stick with me a while longer. Let us analyze how our wavefunction model might be impacted by this symmetry – or *a*symmetry, I should say.

### The flywheel model of an electron

In our previous posts, we offered a model that interprets the real and the imaginary part of the wavefunction as oscillations which each carry half of the total energy of the particle. These oscillations are perpendicular to each other, and the interplay between both is how energy propagates through spacetime. Let us recap the fundamental premises:

- The dimension of the matter-wave field vector is force per unit *mass* (N/kg), as opposed to the force per unit *charge* (N/C) dimension of the electric field vector. This dimension is that of an acceleration (m/s²), which is the dimension of the gravitational field.
- We assume this gravitational disturbance causes our electron (or a charged *mass* in general) to move about some center, combining linear and circular motion. This interpretation reconciles the wave-particle duality: fields interfere but if, at the same time, they do drive a pointlike particle, then we understand why, as Feynman puts it, "when you do find the electron some place, the entire charge is there." Of course, we cannot prove anything here, but our elegant yet simple derivation of the Compton radius of an electron is… Well… Just nice. 🙂
- Finally, and most importantly *in the context of this discussion*, we noted that, in light of the direction of the magnetic moment of an electron in an inhomogeneous magnetic field, **the plane which circumscribes the circulatory motion of the electron should also *comprise* the direction of its linear motion.** Hence, unlike an electromagnetic wave, the *plane* of the two-dimensional oscillation (so that's the polarization plane, really) can *not* be perpendicular to the direction of motion of our electron.
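For what it's worth, the Compton radius mentioned in the second bullet is easy to re-compute as the reduced Compton wavelength ħ/(mₑ·c). The constants below are assumed CODATA-style values, not taken from the post:

```python
# Sanity check on the Compton radius of the electron.
hbar = 1.054571817e-34    # reduced Planck constant, J·s
m_e = 9.1093837015e-31    # electron rest mass, kg
c = 2.99792458e8          # speed of light, m/s

r_compton = hbar / (m_e * c)   # reduced Compton wavelength, in meters
print(r_compton)               # ≈ 3.86e-13 m
```

That ~0.386 picometer scale is the radius the flywheel picture assigns to the circulatory motion.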

Let's say some more about the latter point here. The illustrations below (one from Feynman, and the other just open-source) show what we're thinking of. The direction of the angular momentum (and the magnetic moment) of an electron – or, to be precise, its component as measured in the direction of the (inhomogeneous) magnetic field through which our electron is traveling – can*not* be parallel to the direction of motion. On the contrary, it must be *perpendicular* to the direction of motion. In other words, if we imagine our electron as spinning around some center (see the illustration on the left-hand side), then the disk it circumscribes (i.e. the *plane* of the polarization) has to *comprise* the direction of motion.

Of course, we need to add another detail here. As my readers will know, we do not really have a precise direction of angular momentum in quantum physics. While there is no fully satisfactory explanation of this, the classical explanation – combined with the quantization hypothesis – goes a long way in explaining it: an object with an angular momentum **J** and a magnetic moment **μ** that is *not exactly* parallel to some magnetic field **B** will *not* line up: it will *precess* – and, as mentioned, the quantization of angular momentum may well explain the rest. [Well… Maybe… We have detailed our attempts in this regard in various posts on this (just search for *spin* or *angular momentum* on this blog, and you'll get a dozen posts or so), but these attempts are, admittedly, not *fully satisfactory*. Having said that, they do go a long way in relating angles to spin numbers.]

The thing is: we do assume our electron is spinning around. If we look from the *up*-direction *only*, then it will be spinning *clockwise* if its angular momentum is down (so its *magnetic moment* is *up*). Conversely, it will be spinning *counter*clockwise if its angular momentum is *up*. Let us take the *up*-state. So we have a top view of the apparatus, and we see something like this:

I know you are laughing aloud now, but think of your amusement as a nice reward for having stuck to the story so far. Thank you. 🙂 And, yes, do check it yourself by doing some drawings on your table or so, and then look at them from various directions as you walk around the table as – I am not ashamed to admit this – I did when thinking about this. So what do we get when we change the perspective? Let us walk around it, *counterclockwise*, let's say, so we're measuring our angle of rotation as some *positive* angle. Walking around it – in whatever direction, clockwise or counterclockwise – doesn't change the counterclockwise direction of our… Well… That weird object that might – just *might* – represent an electron that has its spin up and that is traveling in the positive *y*-direction.

When we look in the direction of propagation (so that's from left to right as you're looking at this page), and we abstract away from its linear motion, then we could, vaguely, describe this by some wrenched *e*^{iθ} = *cos*θ + *i*·*sin*θ function, right? The *x*- and *y*-axes *of the apparatus* may be used to measure the cosine and sine components respectively.

Let us keep looking from the top, but let us walk around it, rotating ourselves over a 180° angle so we're looking in the *negative* y-direction now. As I explained in one of those posts on symmetries, our mind will want to switch to a new reference frame: we'll keep the *z*-axis (up is up, and down is down), but we'll want the positive direction of the *x*-axis to… Well… Point right. And we'll want the *y*-axis to point away, rather than towards us. In short, we have a transformation of the reference frame here: *z'* = *z*, *y'* = −*y*, and *x'* = −*x*. Mind you, this is still a regular right-handed reference frame. [That's the difference with a *mirror* image: a *mirrored* right-handed reference frame is no longer right-handed.] So, in our new reference frame, which we choose to coincide with our *perspective*, we will now describe the same thing as some −*cos*θ − *i*·*sin*θ = −*e*^{iθ} function. Of course, −*cos*θ = *cos*(θ + π) and −*sin*θ = *sin*(θ + π), so we can write this as:

−*cos*θ − *i*·*sin*θ = *cos*(θ + π) + *i*·*sin*(θ + π) = *e*^{i·(θ+π)} = *e*^{iπ}·*e*^{iθ} = −*e*^{iθ}.
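We can check this little identity numerically. The sketch below (plain Python, no assumptions beyond an arbitrary test angle) just confirms that flipping both components is the same as adding π to the phase:

```python
import cmath
import math

theta = 0.7  # an arbitrary test angle, in radians

# Left-hand side: what the observer in the rotated frame writes down.
lhs = -math.cos(theta) - 1j * math.sin(theta)

# Right-hand side: the same thing as a phase-shifted exponential.
rhs = cmath.exp(1j * (theta + math.pi))

assert abs(lhs - rhs) < 1e-12                          # -cos - i·sin = e^(i(θ+π))
assert abs(rhs + cmath.exp(1j * theta)) < 1e-12        # ... = -e^(iθ)
```

Nothing deep, of course – it is just Euler's formula at work – but it is reassuring to see the three expressions agree to machine precision.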

Sweet! But… Well… First note this is *not* the complex conjugate: *e*^{−iθ} = *cos*θ − *i*·*sin*θ ≠ −*cos*θ − *i*·*sin*θ = −*e*^{iθ}. Why is that? Aren't we looking at the same clock, but from the back? No. The plane of polarization is different. Our clock is more like those in Dalí's painting: it's flat. 🙂 And, yes, let me lighten up the discussion with that painting here. 🙂 We need to have *some* fun while torturing our brain, right?

So, because we assume the plane of polarization is different, we get an −*e*^{iθ} function instead of an *e*^{−iθ} function.
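The distinction matters, so here is a quick numerical sketch of it: the conjugate flips the sine component only, while the opposite flips both components.

```python
import cmath

theta = 0.7
z = cmath.exp(1j * theta)      # e^(i·theta)
conjugate = z.conjugate()      # e^(-i·theta): flips the sine component only
opposite = -z                  # -e^(i·theta): flips both components

# The conjugate keeps the cosine (real) part; the opposite negates it.
assert abs(conjugate.real - z.real) < 1e-12
assert abs(opposite.real + z.real) < 1e-12
assert abs(conjugate - opposite) > 0.1   # clearly different numbers (for this theta)
```

So, no: looking at the clock from the back is not the same as flipping the whole thing over.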

Let us now think about that *e*^{i·(θ+π)} function. It's the same as −*e*^{iθ} but… Well… We walked around the *z*-axis taking a full 180° turn, right? So that's π in radians. So that's the *phase shift* here.

*Hey!* Try the following now. Go back and walk around the apparatus once more, but let the reference frame *rotate with us*, as shown below. So we start left and look in the direction of propagation, and then we start moving about the *z*-axis (which points out of this page, *toward* you, as you are looking at this), let's say by some small angle α. So we rotate the reference frame about the *z*-axis by α and… Well… Of course, our *e*^{i·θ} now becomes an *e*^{i·(θ+α)} function, right? We've just derived the transformation coefficient for a rotation about the *z*-axis, didn't we? It's equal to *e*^{i·α}, right? We get the transformed wavefunction in the new reference frame by multiplying the old one by *e*^{i·α}, right? It's equal to *e*^{i·α}·*e*^{i·θ} = *e*^{i·(θ+α)}, right?

Well…

[…]

No. The answer is: no. The transformation coefficient is not *e*^{i·α} but *e*^{i·α/2}. So we get an additional 1/2 factor in the *phase shift*.

*Huh?* Yes. That's what it is: when we change the representation, by rotating our apparatus over some angle α about the *z*-axis, then we will, effectively, get a new wavefunction, which will differ from the old one by a phase shift that is equal to only *half* of the rotation angle.

*Huh?* Yes. It's even weirder than that. For a spin-*down* electron, the transformation coefficient is *e*^{−i·α/2}, so we get an additional minus sign in the argument.
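To make the contrast concrete, here is a small sketch comparing the naive coefficient *e*^{i·α} with the half-angle coefficients for spin up and spin down, for a 180° rotation. [The sign convention – plus for up, minus for down – follows the discussion above; some texts flip it.]

```python
import cmath
import math

alpha = math.pi  # rotate the apparatus by 180°

naive = cmath.exp(1j * alpha)           # what our reference-frame reasoning suggested
spin_up = cmath.exp(1j * alpha / 2)     # the actual coefficient for spin up
spin_down = cmath.exp(-1j * alpha / 2)  # ... and for spin down

assert abs(naive + 1) < 1e-12        # e^(i·π) = -1
assert abs(spin_up - 1j) < 1e-12     # e^(i·π/2) = i, not -1
assert abs(spin_down + 1j) < 1e-12   # e^(-i·π/2) = -i
```

So a 180° rotation of the apparatus multiplies the amplitudes by ±*i*, not by −1.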

*Huh?* Yes.

I know you are terribly disappointed, but that's how it is. That's what hampers an easy geometric interpretation of the wavefunction. Paraphrasing Feynman, I'd say that, somehow, our electron not only knows whether or not it has taken a turn, but it also knows whether it is moving away from us or, conversely, towards us.

[…]

But… *Hey! Wait a minute! That's it, right?*

What? Well… That’s it! The electron doesn’t know whether it’s moving away or towards us. That’s nonsense. But… Well… It’s like this:

**Our e^{i·α} coefficient describes a rotation of the reference frame. In contrast, the e^{i·α/2} and e^{−i·α/2} coefficients describe what happens when we rotate the T apparatus! Now that *is* a very different proposition.**

Right! You got it! *Representations* and reference frames are different things. *Quite* different, I'd say: representations are *real*, reference frames aren't – but then you don't like philosophical language, do you? 🙂 But think of it. When we just go about the *z*-axis, a full 180°, but we don't touch that *T*-apparatus, we don't change *reality*. When we were looking at the electron while standing left of the apparatus, we watched the electrons going in and moving away from us, and when we go about the *z*-axis, a full 180°, looking at it from the right-hand side, we see the electrons coming out, moving towards us. But it's still the same reality. We simply change the reference frame – from *xyz* to *x'y'z'*, to be precise: we do *not* change the representation.

In contrast, **when we rotate the T apparatus over a full 180°, our electron now goes in the opposite direction.** And whether that's away from us or towards us doesn't matter: it was going in one direction while traveling through *S*, and now it goes in the opposite direction – *relative to the direction it was going in S*, that is.

So what happens, *really*, when we change the *representation*, rather than the reference frame? Well… Let's think about that. 🙂

### Quantum-mechanical weirdness?

The transformation matrix for the amplitude of a system to be in an *up* or *down* state (and, hence, presumably, for a wavefunction) for a rotation about the *z*-axis is the following one:

Feynman derives this matrix in a rather remarkable intellectual *tour de force* in the 6th of his *Lectures on Quantum Mechanics*. So that's pretty early on. He's actually worried about that himself, apparently, and warns his students that "This chapter is a rather long and abstract side tour, and it does not introduce any idea which we will not also come to by a different route in later chapters. You can, therefore, skip over it, and come back later if you are interested."

Well… That's how *I* approached it. I skipped it, and didn't worry about those transformations for quite a while. But… Well… You can't avoid them. In some weird way, they are at the heart of the weirdness of quantum mechanics itself. Let us re-visit his argument. Feynman immediately gets that the whole transformation issue here is just a matter of finding an easy formula for that phase shift. Why? He doesn't tell us. Lesser mortals like us must just assume that's how the instinct of a genius works, right? 🙂 So… Well… Because he *knows* – from experiment – that the coefficient is *e*^{i·α/2} instead of *e*^{i·α}, he just says the phase shift – which he denotes by λ – must be *proportional* to the angle of rotation – which he denotes by φ rather than α (so as to avoid confusion with the *Euler* angle α). So he writes:

λ = m·φ

Initially, he also tries the obvious thing: m should be one, right? So λ = φ, right? Well… No. It can't be. Feynman shows why that can't be the case by adding a third apparatus once again, as shown below.

Let me quote him here, as I can’t explain it any better:

"Suppose *T* is rotated by 360°; then, clearly, it is right back at zero degrees, and we should have *C′*_{+} = *C*_{+} and *C′*_{−} = *C*_{−} or, what is the same thing, *e*^{i·m·2π} = 1. We get m = 1. [But no!] *This argument is wrong!* To see that it is, consider that *T* is rotated by 180°. If m were equal to 1, we would have *C′*_{+} = *e*^{i·π}*C*_{+} = −*C*_{+} and *C′*_{−} = *e*^{−i·π}*C*_{−} = −*C*_{−}. [Feynman works with *states* here, instead of the wavefunction of the particle as a whole. I'll come back to this.] However, this is just the *original* state all over again. **Both amplitudes are just multiplied by −1, which gives back the original physical system. (It is again a case of a common phase change.) This means that if the angle between *T* and *S* is increased to 180°, the system would be indistinguishable from the zero-degree situation, and the particles would again go through the (+) state of the *U* apparatus. At 180°, though, the (+) state of the *U* apparatus is the (−*x*) state of the original *S* apparatus. So a (+*x*) state would become a (−*x*) state. But we have done nothing to *change* the original state; the answer is wrong. We cannot have m = 1. We must have the situation that a rotation by 360°, and *no smaller angle*, reproduces the same physical state. This will happen if m = 1/2.**

The result, of course, is this weird 720° symmetry. While we get the same *physics* after a 360° rotation of the *T* apparatus, we do *not* get the same amplitudes. We get the opposite (complex) number: *C′*_{+} = *e*^{i·2π/2}*C*_{+} = −*C*_{+} and *C′*_{−} = *e*^{−i·2π/2}*C*_{−} = −*C*_{−}. That's OK, because… Well… It's a *common* phase shift, so it's just like changing the origin of time. Nothing more. Nothing less. Same physics. Same *reality*. But… Well… *C′*_{+} ≠ *C*_{+} and *C′*_{−} ≠ *C*_{−}, right? We only get our original amplitudes back if we rotate the *T* apparatus two times, so that's by a full 720 degrees – as opposed to the 360° we'd expect.
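A small numerical sketch of that 720° business, using an arbitrary amplitude with unit modulus: after one full turn the amplitude flips sign but the probability is untouched; only after two full turns does the amplitude itself return.

```python
import cmath
import math

c_plus = 0.6 + 0.8j  # some amplitude with |c|^2 = 1 (arbitrary choice)

full_turn = cmath.exp(1j * 2 * math.pi / 2) * c_plus   # T rotated by 360°: e^(i·2π/2)
two_turns = cmath.exp(1j * 4 * math.pi / 2) * c_plus   # T rotated by 720°: e^(i·4π/2)

assert abs(full_turn + c_plus) < 1e-12                      # the opposite complex number
assert abs(abs(full_turn)**2 - abs(c_plus)**2) < 1e-12      # but the same probability
assert abs(two_turns - c_plus) < 1e-12                      # only now the amplitude returns
```

Same physics after 360°, same amplitude only after 720°. That is the whole point.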

Now, space is isotropic, right? So this 720° business doesn't make sense, right?

Well… It does and it doesn't. We shouldn't dramatize the situation. What's the *actual* difference between a complex number and its opposite? It's like *x* or −*x*, or *t* and −*t*. I've said this a couple of times already, and I'll keep saying it many times more: *Nature* surely can't be bothered by how we measure stuff, right? In the positive or the negative direction – that's just our choice, right? *Our* convention. So… Well… It's just like that −*e*^{iθ} function we got when looking at the *same* experimental set-up from the other side: our *e*^{iθ} and −*e*^{iθ} functions did *not* describe a different reality. We just changed our perspective. The *reference frame*. As such, the reference frame isn't *real*. The experimental set-up is. And – I know I will anger mainstream physicists with this – the *representation* is. Yes. Let me say it loud and clear here:

**A different representation describes a different reality.**

In contrast, a different perspective – or a different reference frame – does not.

### Conventions

While you might have had a lot of trouble going through all of the weird stuff above, the point is: it is *not* all that weird. We *can* understand quantum mechanics. And in a fairly intuitive way, really. It's just that… Well… I think some of the conventions in physics hamper such understanding. Well… Let me be precise: one convention in particular, really. It's the convention for measuring angles. Indeed, Mr. Leonhard Euler, back in the 18th century, might well be "the master of us all" (as Laplace is supposed to have said) but… Well… He couldn't foresee how his omnipresent formula – *e*^{iθ} = *cos*θ + *i*·*sin*θ – would, one day, be used to represent *something real*: an electron, or any elementary particle, really. If he had known, I am sure he would have noted what I am noting here: *Nature* can't be bothered by our conventions. Hence, if *e*^{iθ} represents something real, then *e*^{−iθ} must also represent something real. [Coz I admire this genius so much, I can't resist the temptation. Here's his portrait. He looks kinda funny here, doesn't he? :-)]

Frankly, he would probably have understood quantum-mechanical theory as easily and instinctively as Dirac, I think, and I am pretty sure he would have *noted* – and, if he had known about circularly polarized waves, probably *agreed* to – that *alternative* convention for measuring angles: we could, effectively, measure angles clockwise *or* counterclockwise depending on the direction of our particle – as opposed to Euler's 'one-size-fits-all' counterclockwise convention. But so we did *not* adopt that alternative convention because… Well… We want to keep honoring Euler, I guess. 🙂

So… Well… If we're going to keep honoring Euler by sticking to that 'one-size-fits-all' counterclockwise convention, then **I do believe that e^{iθ} and e^{−iθ} represent two *different* realities: spin up versus spin down.**

Yes. In our geometric interpretation of the wavefunction, these are, effectively, two different spin directions. And… Well… These are *real* directions: we *see* something different when they go through a Stern-Gerlach apparatus. So it's *not* just some convention to *count* things like 0, 1, 2, etcetera versus 0, −1, −2 etcetera. It's the same story again: different but related *mathematical* notions are (often) related to different but related *physical* possibilities. So… Well… I think that's what we've got here. Think of it. Mainstream quantum math treats all wavefunctions as right-handed but… Well… A particle with *up* spin is a different particle than one with *down* spin, right? And, again, *Nature* surely can*not* be bothered about our convention of measuring phase angles clockwise or counterclockwise, right? So… Well… Kinda obvious, right? 🙂

Let me spell out my conclusions here:

**1.** The angular momentum can be positive or, alternatively, negative: *J* = +ħ/2 or −ħ/2. [Let me note that this is *not* obvious. Or less obvious than it seems, at first. In classical theory, you would expect an electron, or an atomic magnet, to line up with the field. Well… The Stern-Gerlach experiment shows they don't: they keep their original orientation. Well… If the field is weak enough.]

**2.** Therefore, we would probably like to think that an *actual* particle – think of an electron, or whatever other particle you'd think of – comes in two *variants*: right-handed and left-handed. They will, therefore, *either* consist of (elementary) right-handed waves or, *else*, (elementary) left-handed waves. An elementary right-handed wave would be written as: ψ(θ_{i}) = a_{i}·*e*^{iθi} = a_{i}·(*cos*θ_{i} + *i*·*sin*θ_{i}). In contrast, an elementary left-handed wave would be written as: ψ(θ_{i}) = a_{i}·*e*^{−iθi} = a_{i}·(*cos*θ_{i} − *i*·*sin*θ_{i}). So that's the complex conjugate.

So… Well… Yes, I think complex conjugates are not just some *mathematical* notion: I believe they represent something real. It's the usual thing: *Nature* has shown us that (most) mathematical possibilities correspond to *real* physical situations so… Well… Here you go. It is really just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! [As for the differences – different polarization plane and dimensions and what have you – I've already summed those up, so I won't repeat myself here.] The point is: if we have two different *physical* situations, we'll want two different functions to describe them. Think of it like this: why would we have *two* – yes, I admit, two *related* – amplitudes to describe the *up* or *down* state of the same system, but only one wavefunction for it? You tell me.
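Those two elementary waves are easy to sketch in code. The little functions below are, of course, just an illustration of the notation above (with a real coefficient *a*), not a claim about any particular formalism:

```python
import cmath

def psi_right(a, theta):
    """Elementary right-handed wave: a·e^(i·theta)."""
    return a * cmath.exp(1j * theta)

def psi_left(a, theta):
    """Elementary left-handed wave: a·e^(-i·theta)."""
    return a * cmath.exp(-1j * theta)

a, theta = 0.5, 1.2  # arbitrary test values

# The left-handed wave is the complex conjugate of the right-handed one...
assert abs(psi_left(a, theta) - psi_right(a, theta).conjugate()) < 1e-12
# ... and they have the same modulus, hence the same probability density.
assert abs(abs(psi_left(a, theta)) - abs(psi_right(a, theta))) < 1e-12
```

Two different functions, two different spin directions – but the same probabilities, which is why the convention hides the difference so easily.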

[…]

Authors like me are looked down upon by the so-called *professional* class of physicists. The few who bothered to react to my attempts to make sense of Einstein's basic intuition in regard to the nature of the wavefunction all said pretty much the same thing: "Whatever your geometric (or *physical*) interpretation of the wavefunction might be, it won't be compatible with the *isotropy* of space. You cannot *imagine* an object with a 720° symmetry. That's *geometrically* impossible."

Well… Almost three years ago, I wrote the following on this blog: "As strange as it sounds, a spin-1/2 particle needs *two* full rotations (2×360° = 720°) until it is again in the same state. Now, in regard to that particularity, you'll often read something like: '*There is nothing in our macroscopic world which has a symmetry like that.*' Or, worse, '*Common sense tells us that something like that cannot exist, that it simply is impossible.*' [I won't quote the site from which I took these quotes, because it is, in fact, the site of a very respectable research center!] *Bollocks!* The Wikipedia article on spin has this wonderful animation: look at how the spirals flip between clockwise and counterclockwise orientations, and note that it's only after spinning a full 720 degrees that this 'point' returns to its original configuration."

So… Well… I am still pursuing my original dream which is… Well… Let me re-phrase what I wrote back in January 2015:

**Yes, we *can* actually imagine spin-1/2 particles**, and we actually do not need all that much imagination!

In fact, I am tempted to think that I've found a pretty good representation or… Well… A pretty good *image*, I should say, because… Well… A representation is something real, remember? 🙂

**Post scriptum** (10 December 2017): Our flywheel model of an electron makes sense, but also leaves many unanswered questions. The most obvious question, perhaps, is: why the *up* and *down* states only?

I am not so worried about that question, even if I can't answer it right away, because… Well… Our apparatus – the way we *measure* reality – is set up to measure the angular momentum (or the *magnetic moment*, to be precise) in one direction only. If our electron is *captured* by some *harmonic* (or non-harmonic?) oscillation in multiple dimensions, then it should not be all that difficult to show its magnetic moment is going to align, somehow, in the same *or*, alternatively, the opposite direction of the magnetic field it is forced to travel through.

Of course, the analysis for the spin-*up* situation (magnetic moment *down*) is quite peculiar: if our electron is a *mini*-magnet, why would it *not* line up with the magnetic field? We understand the precession of a spinning top in a gravitational field, but… *Hey… It's actually not that different.* Try to imagine some spinning top on the ceiling. 🙂 I am sure we can work out the math. 🙂 The electron must be some gyroscope, really: it won't change direction. In other words, its magnetic moment won't line up. It will precess, and it can do so in two directions, depending on its *state*. 🙂 […] At least, that's what my instinct tells me. I admit I need to work out the math to convince you. 🙂

The second question is more important. If we just rotate the reference frame over 360°, we see the same thing: some rotating object which we, vaguely, describe by some *e*^{+i·θ} function – to be precise, I should say: by some *Fourier* sum of such functions – or, if the rotation is in the other direction, by some *e*^{−i·θ} function (again, you should read: a *Fourier* sum of such functions). Now, the weird thing, as I tried to explain above, is the following: if we rotate the object itself, over the same 360°, we get a *different* object: our *e*^{i·θ} and *e*^{−i·θ} function (again: think of a *Fourier* sum, so that's a wave *packet*, really) becomes a −*e*^{±i·θ} thing. We get a *minus* sign in front of it. So what happened here? What's the difference, *really*?

Well… I don't know. It's very deep. If I do nothing, and you keep watching me while turning around me, for a full 360°, then you'll end up where you were when you started and, importantly, you'll see the same thing. *Exactly* the same thing: if I was an *e*^{+i·θ} wave packet, I am still an *e*^{+i·θ} wave packet now. Or if I was an *e*^{−i·θ} wave packet, then I am still an *e*^{−i·θ} wave packet now. Easy. Logical. *Obvious*, right?

But so now we try something different: *I* turn around, over a full 360° turn, and *you* stay where you are. When I am back where I was – looking at you again, so to speak – then… Well… I am not quite the same any more. Or… Well… Perhaps I am, but you *see* me differently. If I was an *e*^{+i·θ} wave packet, then I've become a −*e*^{+i·θ} wave packet now. Not *hugely* different but… Well… That *minus* sign matters, right? Or if I was a wave packet built up from elementary *a*·*e*^{−i·θ} waves, then I've become a −*e*^{−i·θ} wave packet now. What happened?

It makes me think of the twin paradox in special relativity. We know it’s aÂ *paradox*âso that’s anÂ *apparentÂ *contradiction only: we know which twin stayed on Earth and which one traveled because of the gravitational forces on the traveling twin. The one who stays on Earth does not experience any acceleration or deceleration. Is it the same here? I mean… The one who’s turning around must experience someÂ *force*.

Can we relate this to the twin paradox? Maybe. Note that a *minus* sign in front of the *e*^{±i·θ} functions amounts to a minus sign in front of both the sine and cosine components. So… Well… The negative of a sine and cosine is the sine and cosine but with a phase shift of 180°: −*cos*θ = *cos*(θ ± π) and −*sin*θ = *sin*(θ ± π). Now, adding or subtracting a *common* phase factor to/from the argument of the wavefunction amounts to *changing* the origin of time. So… Well… I do think the twin paradox and this rather weird business of 360° and 720° symmetries are, effectively, related. 🙂

# Re-visiting the Uncertainty Principle

Let me, just like Feynman did in his last lecture on quantum electrodynamics for *Alix Mautner*, discuss some loose ends. Unlike Feynman, I will not be able to tie them up. However, just *describing* them might be interesting and perhaps *you, my imaginary reader*, could actually *help me* with tying them up! Let's first re-visit the *wave function for a photon* by way of introduction.

**The wave function for a photon**

Let's not complicate things from the start and, hence, let's first analyze a nice **Gaussian wave packet**, such as the *right-hand* graph below: Ψ(x, t). It could be a *de Broglie* wave representing an electron, but here we'll assume the wave packet might actually represent a **photon**. [Of course, do remember we should actually show both the real as well as the imaginary part of this complex-valued wave function, but we don't want to clutter the illustration and so it's only one of the two (cosine or sine). The 'other' part (sine or cosine) is just the same but with a *phase shift*. Indeed, remember that a complex number r·*e*^{iθ} is equal to r(cos θ + *i*·sin θ), and the shape of the sine function is the same as the cosine function but shifted to the left by π/2. So if we have one, we have the other. End of digression.]

The assumptions associated with this wonderful mathematical shape include the idea that the wave packet is a composite wave consisting of a large number of *harmonic* waves with wave numbers k_{1}, k_{2}, k_{3},… all lying around some mean value **μ_{k}**. That is what is shown in the *left-hand* graph. The mean value is actually noted as k-*bar* in the illustration above but, because I can't find a k-bar symbol among the 'special characters' in the text editor toolbar here, I'll use the statistical symbols μ and σ to represent a mean value (μ) and some spread around it (σ). In any case, we have a pretty normal shape here, resembling the Gaussian *distribution* illustrated below.

These Gaussian distributions (also known as *density functions*) have outliers, but you will catch 95.4% of the observations within the μ ± 2σ interval, and 99.7% within the μ ± 3σ interval (the so-called two- and three-sigma rules). Now, the *shape* of the left-hand graph of the first illustration, mapping the relation between k and A(k), is the same as this Gaussian density function, and if you would take a little ruler and measure the spread of k on the horizontal axis, you would find that the values for k are effectively spread over an interval that's somewhat bigger than k-*bar* plus or minus 2Δk. So let's say 95.4% of the values of k lie in the interval [μ_{k} − 2Δk, μ_{k} + 2Δk]. Hence, for all practical purposes, we can write that **μ_{k} − 2Δk < k_{n} < μ_{k} + 2Δk**. In any case, we do not care too much about the rest because their *contribution* to the amplitude of the wave packet is minimal anyway, as we can see from that graph. Indeed, note that the A(k) values on the vertical axis of that graph do *not* represent the *density* of the k variable: there is only *one* wave number for *each* component wave, and so there's no distribution or density function of k. These A(k) numbers represent the (maximum) amplitude of the component waves of our wave packet Ψ(x, t). In short, they are the values A(k) appearing in the summation formula for our composite wave, i.e. the wave packet:

I don't want to dwell much more on the math here (I've done that in my other posts already): I just want you to get a general understanding of that 'ideal' wave packet possibly representing a photon above so you can follow the rest of my story. So we have a (theoretical) bunch of (component) waves with different wave numbers k_{n}, and **the spread in these wave numbers** – i.e. 2Δk, or let's take 4Δk to make sure we catch (almost) all of them – **determines the length of the wave packet Ψ**, which is written here as 2Δx, or 4Δx if we'd want to include (most of) the tail ends as well. What else can we say about Ψ? Well… Maybe something about velocities and all that? OK.
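The summation just described can be sketched numerically. The snippet below builds a toy wave packet as a finite sum of component waves with a Gaussian amplitude profile A(k), assuming – for simplicity – that all components travel at the same speed c (natural units, and my own choice of grid and spread, so purely illustrative):

```python
import cmath
import math

c = 1.0                       # common propagation speed (natural units)
mu_k, sigma_k = 10.0, 1.0     # mean wave number and its spread
ks = [mu_k + sigma_k * (n - 20) / 5 for n in range(41)]  # k-values around mu_k

def A(k):
    """Gaussian amplitude profile around the mean wave number."""
    return math.exp(-0.5 * ((k - mu_k) / sigma_k) ** 2)

def psi(x, t):
    """Wave packet: sum of component waves, each with omega = c·k."""
    return sum(A(k) * cmath.exp(1j * (k * x - c * k * t)) for k in ks)

# With a common speed, the envelope just translates: |psi(x, 0)| = |psi(x + c·t, t)|.
assert abs(abs(psi(0.3, 0.0)) - abs(psi(0.3 + c * 2.0, 2.0))) < 1e-9
```

The packet keeps its shape and simply slides along at speed c – which is exactly the 'ideal' photon picture described above.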

To calculate velocities, we need both ω and k. Indeed, the *phase* velocity of a wave (*v*_{p}) is equal to *v*_{p} = ω/k. Now, the wave number k of the wave packet *itself* – i.e. the wave number of the oscillating 'carrier wave', so to say – should be equal to μ_{k} according to the article I took this illustration from. I should check that but, looking at that relationship between A(k) and k, I would not be surprised if the math behind it is right. So we have the k for the wave packet itself (as opposed to the k's of its components). However, I also need the *angular frequency* ω.

So what is that ω? Well… That will depend on all the ω's associated with all the k's, doesn't it? It does. But, as I explained in a previous post, the *component* waves do not necessarily have to travel all at the same speed, and so the relationship between ω and k may not be simple. We would *love* that, of course, but Nature does what it wants. The only reasonable constraint we can impose on all those ω's is that they should be some linear function of k. Indeed, if we do *not* want our wave packet to dissipate (or disperse or, to put it even more plainly, to disappear), then the so-called dispersion relation ω = ω(k) should be linear, so ω_{n} should be equal to ω_{n} = *a*k_{n} + *b*. What *a* and *b*? We don't know. Random constants. But if the relationship is not linear, then the wave packet will disperse and it cannot possibly represent a particle – be it an electron or a photon.
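We can check that claim for the linear case. The sketch below (arbitrary constants a and b of my own choosing) builds a packet with ω_{n} = a·k_{n} + b and verifies that the envelope translates at the group velocity dω/dk = a, with the constant b only contributing a *common* phase factor e^{−i·b·t}:

```python
import cmath
import math

a, b = 0.8, 2.5   # arbitrary constants in the linear dispersion omega = a·k + b
mu_k = 10.0
ks = [mu_k + 0.1 * n for n in range(-20, 21)]  # k-values around mu_k

def A(k):
    """Gaussian amplitude profile around the mean wave number."""
    return math.exp(-0.5 * (k - mu_k) ** 2)

def psi(x, t):
    """Wave packet with linear dispersion: omega_n = a·k_n + b."""
    return sum(A(k) * cmath.exp(1j * (k * x - (a * k + b) * t)) for k in ks)

t = 3.0
# The envelope has moved by a·t; what is left over is the common phase e^(-i·b·t).
lhs = psi(0.5 + a * t, t)
rhs = cmath.exp(-1j * b * t) * psi(0.5, 0.0)
assert abs(lhs - rhs) < 1e-9
```

So a linear dispersion relation really does give a non-dispersing packet: same shape, translated at speed a, times an irrelevant common phase. Any nonlinearity in ω(k) would break that equality and smear the packet out.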

I won't go through the math all over again but, in my *Re-visiting the Matter Wave (I)* post, I used *the other* de Broglie relationship (E = ħω) to show that – for matter waves that do *not* disperse – the *phase velocity* will equal c/β, with β = *v*/*c*, i.e. the ratio of the speed of our particle (*v*) and the speed of light (*c*). But, of course, photons travel at the speed of light and, therefore, everything becomes very simple and the phase velocity of the wave packet of our photon would equal the group velocity. In short, we have:

*v*_{p} = ω/k = *v*_{g} = ∂ω/∂k = *c*

Of course, I should add that the angular frequency of all component waves will also be equal to ω = *c*k, so *all component waves* of the wave packet representing a photon are supposed to travel at the speed of light! **What an amazingly simple result!**

It is. In order to illustrate what we have here – especially the elegance and simplicity of that wave packet for a photon – I've uploaded two *gif* files (see below). The first one could represent our 'ideal' photon: **group and phase velocity** (represented by the speed of the green and red dot respectively) **are the same**. Of course, our 'ideal' photon would only be one wave packet – not a bunch of them like here – but then you may want to think that the 'beam' below might represent a number of photons following each other in a regular procession.

The second animated *gif* below shows how **phase and group velocity can differ**. So that would be a (bunch of) wave packets representing a particle *not* traveling at the speed of light. The phase velocity here is *faster* than the group velocity (the red dot travels faster than the green dot). [One can actually also have a wave with positive group velocity and negative phase velocity – quite interesting! – but that would *not* represent a particle wave.] Again, a particle would be represented by one wave packet only (so that's the space *between two green dots* only) but, again, you may want to think of this as representing electrons following each other in a very regular procession.


These illustrations (which I took, once again, from the online encyclopedia Wikipedia) are a wonderful pedagogic tool. I don't know if it's by coincidence but the group velocity of the *second* wave is actually somewhat slower than the first – so the photon versus electron comparison holds (electrons are supposed to move (much) slower). However, as for the phase velocities, they are the same for both waves and that would *not* reflect the results we found for matter waves. Indeed, you may or may not remember that we calculated *superluminal speeds* for the phase velocity of matter waves in that post I mentioned above (*Re-visiting the Matter Wave*): an electron traveling at a speed of 0.01*c* (1% of the speed of light) would be represented by a wave packet with a group velocity of 0.01*c* indeed, but its *phase* velocity would be *100 times* the speed of light, i.e. 100*c*. [That being said, the second illustration may be interpreted as *a little bit correct* as the red dot does travel faster than the green dot, which – as I explained – is not necessarily always the case when looking at such composite waves (we can have slower or even negative speeds).]
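That superluminal phase velocity follows directly from the v_{p} = c/β relation mentioned above, and it is trivial to put a number on it:

```python
# Sketch of the phase-velocity claim for a matter wave: v_p = c/beta,
# so a slow particle has a (superluminal) phase velocity above c.
c = 299_792_458.0  # speed of light, m/s

def phase_velocity(beta):
    """Phase velocity of the de Broglie wave of a particle moving at v = beta·c."""
    return c / beta

v_p = phase_velocity(0.01)          # electron at 1% of the speed of light
assert abs(v_p - 100 * c) < 1e-3    # phase velocity = 100·c, as stated above
```

Nothing travels faster than light here, of course: the phase velocity carries no energy or information, which is why this does not contradict relativity.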

Of course, I should once again repeat that **we should not think that a photon or an electron is actually wriggling through space like this**: the oscillation only represents the real *or* imaginary part of the complex-valued probability *amplitude* associated with our 'ideal' photon or our 'ideal' electron. That's all. So this wave is an 'oscillating complex number', so to say, whose modulus we have to square to get the probability to actually *find* the photon (or electron) at some point x and some time t. However, the photons (or electrons) themselves are just moving *straight* from left to right, with a speed matching the group velocity of their wave function.

*Are they?Â *

Well… No. Or, to be more precise: *maybe*. **WHAT?** Yes, that's surely one 'loose end' worth mentioning! According to QED, photons also have an amplitude to travel faster or slower than light, and they are not necessarily moving in a straight line either.

**Yes. That’s the complicated business I discussed in my previous post. As for the amplitudes to travel faster or slower than light, Feynman dealt with them very summarily. Indeed, you’ll remember the illustration below, which shows that theÂ**

*WHAT?**contributionsÂ*of the amplitudes associated with slower or faster speed than light tend to nil because (a) their magnitude (or modulus) is smaller and (b) they point in the ‘wrong’ direction, i.e.

*not*the direction of travel.

Still, these amplitudes are there and – *shock, horror!* – photons also have an amplitude to *not* travel in a straight line, especially when they are forced to travel through a narrow slit, or right next to some obstacle. That's diffraction, described as "the apparent bending of waves around small obstacles and the spreading out of waves past small openings" in Wikipedia.

Diffraction is one of the many phenomena that Feynman deals with in his 1985 *Alix G. Mautner Memorial Lectures*. His explanation is easy: *"not enough arrows"* – read: not enough *amplitudes* to add. With few arrows, there are also few that cancel out, and so the final arrow for the event is quite random, as shown in the illustrations below.

So… Not enough arrows… Feynman adds the following on this: "[For short distances] The nearby, nearly straight paths also make important contributions. So light doesn't *really* travel only in a straight line; it "smells" the neighboring paths around it, and uses a small core of nearby space. In the same way, a mirror has to have enough size to reflect normally; if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror." (QED, 1985, p. 54-56)

*Not enough arrows…* What does he mean by that? Not enough photons? No. Diffraction for photons works just the same as for electrons: even if the photons go through the slit **one by one**, we have diffraction (see my *Revisiting the Matter Wave (II)* post for a detailed discussion of the experiment). So *even one photon* is likely to take some random direction left or right after going through a slit, rather than to go straight.

**Not enough arrows means not enough amplitudes.** But what amplitudes is he talking about?

These amplitudes have nothing to do with the wave function of our ideal photon we were discussing above: that's the amplitude Ψ(x, t) of a photon to be *at point x at time t*. The amplitude Feynman is talking about is **the amplitude of a photon to go from point A to B along one of the infinitely many possible paths it could take**. As I explained in my previous post, we have to add all of these amplitudes to arrive at one big final arrow which, over longer distances, will usually be associated with a rather large probability that the photon travels in a straight line and at the speed of light – which is what light seems to do at the macro-scale.

But back to that very succinct statement: **not enough arrows**. That's obviously a very *relative* statement. Not enough *as compared to what*?

**What measurement scale are we talking about here?** It's obvious that the 'scale' of these arrows for electrons is different than for photons, because the 2012 diffraction experiment with electrons that I referred to used 50 *nano*meter slits (50×10^{−9} m), while one of the many experiments demonstrating light diffraction using pretty standard (red) laser light used slits of some 100 *micro*meter (that's 100×10^{−6} m or – in units you are used to – **0.1 millimeter**).

The key to the 'scale' here is the wavelength of these *de Broglie* waves: the slit needs to be 'small enough' *as compared to these de Broglie wavelengths*. For example, the width of the slit in the laser experiment corresponded to (roughly) 100 times the wavelength of the laser light, and the (de Broglie) wavelength of the electrons in that 2012 diffraction experiment was 50 picometer – so the slit was actually a *thousand times* the electron wavelength – but it was OK enough to demonstrate diffraction. *Much* larger slits would not have done the trick. So, when it comes to light, we have diffraction at scales that do not involve nanotechnology, but when it comes to matter particles, we're not talking *micro* but *nano*: **that's a thousand times smaller**.
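For those who like to check such numbers: the slit-to-wavelength ratios above can be worked out in a few lines of Python. The values are just the rough figures quoted in this post (50 nm and 50 pm for the electron experiment, 100 μm slits and ~660 nm red laser light), so this is an order-of-magnitude sketch, nothing more.

```python
# Ratio of slit width to (de Broglie) wavelength for the two
# diffraction experiments mentioned above. All numbers are the
# rough values quoted in the text, not precise experimental data.

electron_slit = 50e-9         # 50 nm slit (2012 electron experiment)
electron_wavelength = 50e-12  # 50 pm de Broglie wavelength

laser_slit = 100e-6           # 100 micrometer slit
laser_wavelength = 660e-9     # ~660 nm red laser light

print(electron_slit / electron_wavelength)  # slit is ~1000x the wavelength
print(laser_slit / laser_wavelength)        # slit is ~150x the wavelength
```

Both slits are 'large' compared to the wavelength, but only by two or three orders of magnitude – which is why diffraction still shows up.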

**The weird relation between energy and size**

Let's re-visit the Uncertainty Principle, even if Feynman says we don't need it (we just need to do the amplitude math and we have it all). We wrote the uncertainty principle using the more scientific *Kennard* formulation: σ_{x}σ_{p} ≥ ħ/2, in which the *sigma* symbol represents the standard deviation of position x and momentum p respectively. Now that's confusing, you'll say, because we were talking wave numbers, not momentum, in the introduction above. Well… The wave number k of a *de Broglie* wave is, of course, related to the momentum p of the particle we're looking at: **p = ħk**. Hence, a spread in the wave numbers amounts to a spread in the momentum really and, as I wanted to talk scales, let's now check the dimensions.

The value for ħ is about **1×10^{−34} Joule·seconds** (J·s) (it's about 1.054571726(47)×10^{−34} J·s, but let's go with the gross approximation for now). One J·s is the same as one kg·m^{2}/s because 1 Joule is shorthand for 1 kg·m^{2}/s^{2}. It's a rather large unit and you probably know that physicists prefer electronvolt·seconds (eV·s) because of that. However, even expressed in eV·s, the value for ħ comes out *astronomically small*: 6.58211928(15)×10^{−16} eV·s. In any case, because the J·s makes dimensions come out right, I'll stick to it for a while. What does this incredibly *small* factor of proportionality, both in the *de Broglie* relations as well as in that Kennard formulation of the uncertainty principle, imply? How does it work out from a math point of view?

Well… It's literally a *quantum* of measurement: even if Feynman says the uncertainty principle should just be seen "in its historical context", and that "we don't need it for adding arrows", it is a *consequence* of the (related) position-space and momentum-space wave functions for a particle. In case you would doubt that, check it on Wikipedia: the author of the article on the uncertainty principle *derives* it from these two wave functions, which form a so-called Fourier transform pair. But so *what does it say really?*

Look at it. First, it says that we cannot know either of the two values *exactly* (exactly means 100%), because then we would have a zero standard deviation for one or the other variable, and the inequality would make no sense anymore: zero is obviously not greater than or equal to 0.527286×10^{−34} J·s. However, the inequality with the value for ħ plugged in shows *how close to zero* we can get with our measurements. Let's check it out.

Let's use the assumption that *two* times the standard deviation (written as 2Δk or 2Δx on or above the two graphs in the very first illustration of this post) sort of captures the whole 'range' of the variable. It's not a bad assumption: indeed, if Nature would follow normal distributions – and in our macro-world, that seems to be the case – then we'd capture 95.4% of the values. Then we can re-write the uncertainty principle as:

Δx·σ_{p} ≥ ħ *or* σ_{x}·Δp ≥ ħ

So that means we know x within some *interval* (or 'range' if you prefer that term) Δx *or, else,* we know p within some interval Δp. But we want to know *both* within some range, you'll say. Of course. In that case, the uncertainty principle can be written as:

Δx·Δp ≥ 2ħ

*Huh? Why the factor 2?* Well… **Each** of the two Δ ranges corresponds to *2σ* (hence, σ_{x} = Δx/2 and σ_{p} = Δp/2), and so we have (1/2)Δx·(1/2)Δp ≥ ħ/2. Note that if we would equate our Δ with 3σ to get 99.7% of the values, instead of 95.4% only, once again assuming that Nature distributes all relevant properties normally (not sure – especially in this case, because we are talking discrete *quanta of action* here – so Nature may want to cut off the 'tail ends'!), then we'd get Δx·Δp ≥ 4.5×ħ: the cost of extra precision soars! Also note that, if we would equate Δ with σ (the one-sigma rule corresponds to 68.3% of a normally distributed range of values), then we get yet another 'version' of the uncertainty principle: Δx·Δp ≥ ħ/2. Pick and choose! And if we want to be purists, we should note that ħ is used when we express things in radians (such as the *angular* frequency, for example: E = ħω), so we should actually use h when we are talking distance and (linear) momentum. The equation above then becomes Δx·Δp ≥ h/π.
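As an aside, the standard coverage percentages of a normal distribution (68.3%, 95.4% and 99.7% for ±1σ, ±2σ and ±3σ respectively) are easy to verify with the error function – here's a quick Python check, assuming, as above, that Nature distributes things normally:

```python
import math

def coverage(k):
    """Fraction of a normal distribution lying within ±k standard deviations."""
    return math.erf(k / math.sqrt(2))

print(coverage(1))  # ~0.683 (the one-sigma rule)
print(coverage(2))  # ~0.954 (the 95.4% quoted above)
print(coverage(3))  # ~0.997
```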

It doesn't matter all that much. The point to note is that, if we express x and p in regular distance and momentum units (m and kg·m/s), then the unit for ħ (or h) is 1×10^{−34}. Now, we can sort of choose how to spread the uncertainty over x and p. If we spread it evenly, then we'll measure both Δx and Δp in units of 1×10^{−17} m and 1×10^{−17} kg·m/s. That's small… but not *that* small. In fact, it is (more or less) *imaginably* small, I'd say.

For example, a photon of red laser light (say a wavelength of around 660 nanometer) would have a momentum p = h/λ equal to some 1×10^{−27} kg·m/s (just work it out using the values for h and λ). You would usually see this value measured in a unit that's more appropriate to the atomic scale: about 1.9 eV/c. [Converting momentum into energy using E = pc, and using the Joule-electronvolt conversion (1 eV ≈ 1.6×10^{−19} J), will get you there.] Hence, units of 1×10^{−17} kg·m/s for momentum are some *ten billion times* the rather average momentum of our light photon. We can't have that, so let's *reduce* the uncertainty related to the momentum. The electrons in an electron microscope have momenta of the order of 1×10^{−22} kg·m/s, and at that scale the uncertainty about position will be measured in units of 1×10^{−12} m. That's the *pico*meter scale, in-between the nanometer (1×10^{−9} m) and the femtometer (1×10^{−15} m) scale. You'll remember that this scale corresponds to the resolution of a (modern) electron microscope (50 pm). So can we *see* "uncertainty effects"? Yes. I'll come back to that.
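To make the arithmetic concrete, here's a little Python sketch working out the photon momentum from p = h/λ and the minimum position spread from the Kennard relation σ_{x}σ_{p} ≥ ħ/2. The constants are rounded, and the 1×10^{−22} kg·m/s figure for σ_p is just an electron-microscope-scale momentum used as an illustration:

```python
h = 6.626e-34      # Planck constant, J·s (rounded)
hbar = 1.055e-34   # reduced Planck constant, J·s (rounded)
c = 3.0e8          # speed of light, m/s (rounded)
eV = 1.602e-19     # 1 electronvolt in Joule

# Momentum of a 660 nm (red) photon: p = h/lambda
p_photon = h / 660e-9                 # ~1.0e-27 kg·m/s
print(p_photon * c / eV)              # photon energy E = pc, ~1.9 eV

# Kennard relation: minimum position spread for a given momentum spread
sigma_p = 1e-22                       # kg·m/s (electron-microscope-scale momentum)
sigma_x = hbar / (2 * sigma_p)        # ~5e-13 m, i.e. the picometer scale
print(sigma_x)
```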

However, before I discuss these, I need to make a little digression. Despite the sub-title I am using above, the uncertainties in distance and momentum we are discussing here are nowhere near what is referred to as *the Planck scale* in physics: the Planck scale is at the other side of that *Great Desert* I mentioned. The *Large Hadron Collider*, which smashes particles with (average) energies of 4 *tera*-electronvolt (i.e. 4 *trillion* eV – all packed into *one* particle!), is probing stuff measuring at a scale of a thousandth of a femtometer (1×10^{−18} m), but we're obviously at the limits of what's technically possible, and so that's where the Great Desert starts. The 'other side' of that Great Desert is the Planck scale: 10^{−35} m. Now, why is that some kind of theoretical limit? Why can't we just continue to cut these scales down further, just like *Dedekind* did when defining irrational numbers? We can surely get *infinitely close to zero*, can we not? Well… No. The reasoning is quite complex (and I am not sure if I actually understand it the way I should), but it is quite relevant to the topic here (the relation between energy and size), and it goes something like this:

- In quantum mechanics, particles are considered to be point-like, but they do take space, as evidenced from our discussion on slit widths: light will show diffraction at the *micro*-scale (10^{−6} m) but electrons will do that only at the *nano*-scale (10^{−9} m), so that's *a thousand times smaller*. That's related to their respective *de Broglie* wavelengths: the electron wavelength is also a thousand times smaller than that of the photons. Now, the *de Broglie* wavelength is related to the *energy* and/or the *momentum* of these particles: E = h*f* and p = h/λ.
- Higher energies correspond to smaller *de Broglie* wavelengths and, hence, are associated with particles *of smaller size*. To continue the example, the energy formula to be used in the E = h*f* relation for an electron – or any particle with rest mass – is the (relativistic) mass-energy equivalence relation: E = γm_{0}*c*^{2}, with γ the Lorentz factor, which depends on the velocity *v* of the particle. For example, electrons moving at more or less normal speeds (like in the 2012 experiment, or those used in an electron microscope) have typical energy levels of some 600 eV, and don't think that's a lot: the electrons from that cathode ray tube in the back of an old-fashioned TV, which lighted up the screen so you could watch it, had energies in the 20,000 eV range. So, for electrons, we are talking energy levels a hundred to ten thousand times higher than for your typical 2 to 10 eV photon.
- Of course, I am not talking X or gamma rays here: *hard* X rays also have energies of 10 to 100 *kilo*-electronvolt, and *gamma* ray energies range from 1 million to 10 million eV (1-10 MeV). In any case, the point to note is that 'small' particles must have high energies, and I am *not only* talking massless particles such as photons. Indeed, in my post *End of the Road to Reality?*, I discussed the scale of a proton and the scale of quarks: 1.7 and 0.7 *femto*meter respectively, which is *smaller* than the so-called classical electron radius. So we have (much) heavier particles here that are *smaller*? Indeed, the *rest* mass of the *u* and *d* quarks that make up a proton (*uud*) is 2.4 and 4.8 MeV/*c*^{2} respectively, while the (theoretical) rest mass of an electron is 0.511 MeV/*c*^{2} only, so the added quark masses are almost 20 times that: (2.4+2.4+4.8)/0.511 ≈ 19. In fact, the rest mass of a proton is actually 1836 times the rest mass of an electron: the difference between the added rest masses of the quarks that make it up and the rest mass of the proton itself (938 MeV/*c*^{2}) is the equivalent mass of the *strong force* that keeps the quarks together.
- But let me not complicate things. Just note that there seems to be a strange relationship between the energy and the size of a particle: high-energy particles are supposed to be smaller, and vice versa: smaller particles are associated with higher energy levels. If we accept this as some kind of 'factual reality', then we *may* understand what the Planck scale is all about: the energy level associated with a theoretical 'particle' of the above-mentioned Planck size (i.e. a particle measuring some 10^{−35} m) would be in the 10^{19} GeV range. *So what?* Well… This amount of energy, packed into such a tiny space, corresponds to the mass density of a black hole. So any 'particle' we'd associate with the Planck length would not make sense as a physical entity: it's the scale where gravity takes over – everything.

*Again: so what?* Well… I don't know. It's just that this is entirely new territory, and it's also not the topic of my post here. So let me just quote Wikipedia on this and then move on: "The fundamental limit for a photon's energy is the Planck energy [that's the 10^{19} GeV which I mentioned above: to be precise, that 'limit energy' is said to be 1.22×10^{19} GeV], *for the reasons cited above* [that 'photon' would not be a 'photon' but a black hole, sucking up everything around it]. This makes the Planck scale a fascinating realm for speculation by theoretical physicists from various schools of thought. Is the Planck scale domain a seething mass of virtual black holes? Is it a fabric of unimaginably fine loops or a spin foam network? Is it interpenetrated by innumerable Calabi-Yau manifolds which connect our 3-dimensional universe with a higher-dimensional space? [That's what string theory is about.] Perhaps our 3-D universe is 'sitting' on a 'brane' which separates it from a 2, 5, or 10-dimensional universe and this accounts for the apparent 'weakness' of gravity in ours. These approaches, among several others, are being considered to gain insight into Planck scale dynamics. This would allow physicists to create a unified description of all the fundamental forces." [That's what these Grand Unification Theories (GUTs) are about.]

Hmm… I wish I could find some easy explanation of why higher energy means smaller size. I do note there's *an easy relationship between energy and momentum* for massless particles traveling at the velocity of light (like photons): E = p*c* (or p = E/*c*), but – from what I write above – it is obvious that **it's the spread in momentum (and, therefore, in wave numbers) which determines how short or how long our wave train is, not the energy level as such**. I guess I'll just have to do some more research here and, hopefully, get back to you when I understand things better.


**Re-visiting the Uncertainty Principle**

You will probably have read countless accounts of the double-slit experiment, and so you will probably remember that these thought or actual experiments also try to *watch* the electrons as they pass the slits – with disastrous results: the interference pattern disappears. I copy Feynman's own drawing from his 1965 *Lecture* on *Quantum Behavior* below: a light source is placed behind the 'wall', right between the two slits. Now, light (i.e. photons) gets scattered when it hits electrons, and so now we should 'see' through which slit the electron is coming. Indeed, remember that we sent them through these slits *one by one*, and we still had interference – suggesting the 'electron wave' somehow goes through both slits at the same time, which can't be true – because an electron is a *particle*.

However, let's re-examine what happens *exactly*.

- We can only detect *all* electrons if the light is high intensity, and high intensity does *not* mean *higher-energy* photons but *more* photons. Indeed, if the light source is dim, then electrons might get through without being seen. So a high-intensity light source allows us to see all electrons but – as demonstrated not only in thought experiments but also in the laboratory – it destroys the interference pattern.
- What if we use lower-energy photons, like infrared light with wavelengths of 10 to 100 microns instead of visible light? We can then use thermal imaging night vision goggles to 'see' the electrons. And if that doesn't work, we can use radio waves (or perhaps radar!). The problem – as Feynman explains it – is that such low-frequency light (associated with long wavelengths) only gives a 'big fuzzy flash' when the light is scattered: "We can no longer tell which hole the electron went through! We just know it went somewhere!" At the same time, "the jolts given to the electron are now small enough so that we begin to see some interference effect again." Indeed: "For wavelengths much longer than the separation between the two slits (when we have no chance at all of telling where the electron went), we find that the disturbance due to the light gets sufficiently small that we again get the interference curve P_{12}." [P_{12} is the curve describing the original interference effect.]

Now, that would suggest that, when push comes to shove, the Uncertainty Principle only describes some indeterminacy in the so-called *Compton scattering* of a photon by an electron. This Compton scattering is illustrated below: it's a more or less *elastic collision* between a photon and an electron, in which momentum gets exchanged (especially the *direction* of the momentum) and – quite important – *the wavelength of the scattered light is different from the incident radiation*. Hence, the photon *loses* some energy to the electron and, because it will still travel at speed *c*, that means its wavelength must *increase*, as prescribed by the λ = h/p *de Broglie* relation (with p = E/c for a photon). The change in the wavelength is called the **Compton shift**, and its formula is given in the illustration: it depends on the (rest) mass of the electron obviously, and on the *change* in the direction of the momentum (of the photon – but that change in direction will obviously also be related to the recoil direction of the electron).
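For reference, the standard form of the Compton shift formula is Δλ = (h/m_{e}c)·(1 − cos θ), with θ the scattering angle of the photon. The h/m_{e}c factor is the so-called Compton wavelength of the electron – about 2.4 picometer – as a few lines of Python confirm (rounded constants again):

```python
import math

h = 6.626e-34     # Planck constant, J·s (rounded)
m_e = 9.109e-31   # electron rest mass, kg (rounded)
c = 3.0e8         # speed of light, m/s (rounded)

compton_wavelength = h / (m_e * c)   # ~2.43e-12 m, i.e. ~2.4 pm

def compton_shift(theta):
    """Wavelength increase of a photon scattered through angle theta (radians)."""
    return compton_wavelength * (1 - math.cos(theta))

print(compton_wavelength)
print(compton_shift(math.pi))  # maximal shift (backscattering): twice the Compton wavelength
```

Note that the shift vanishes for θ = 0 (no deflection, no energy transfer) and is maximal when the photon bounces straight back.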

This is a very *physical* interpretation of the Uncertainty Principle, but it's the one which the great Richard P. Feynman himself stuck to in 1965, i.e. when he wrote his famous *Lectures on Physics* at the height of his career. Let me quote his interpretation of the Uncertainty Principle *in full* indeed:

"It is impossible to design an apparatus to determine which hole the electron passes through, that will not at the same time disturb the electrons enough to destroy the interference pattern. If an apparatus is capable of determining which hole the electron goes through, it *cannot* be so delicate that it does not disturb the pattern in an essential way. No one has ever found (or even thought of) a way around this. So we must assume that it describes a basic characteristic of nature."

That's *very* mechanistic indeed, and it points to indeterminacy rather than *ontological uncertainty*. However, there's weirder stuff than electrons being 'disturbed' in some kind of random way by the photons we use to detect them, with the randomness only being related to us not knowing at what time photons leave our light source, and what energy or momentum they have *exactly*. That's just 'indeterminacy' indeed; not some fundamental 'uncertainty' about Nature.

We see such 'weirder stuff' in those *mega-* and now *tera*-electronvolt experiments in particle accelerators. Feynman followed the high-energy collisions being studied at the 3 km long *Stanford Linear Accelerator* (SLAC) closely, but stuff like quarks and all that was discovered only in the late 1960s and early 1970s, so that's *after* his *Lectures on Physics*. So let me just mention a rather remarkable example of the Uncertainty Principle at work which Feynman quotes in his **1985** *Alix G. Mautner Memorial Lectures on Quantum Electrodynamics*.

In the Feynman diagram below, we see *a photon disintegrating*, at time t = T_{3}, *into a positron and an electron*. The positron (a positron is an electron with positive charge, basically: it's the electron's *anti-matter* counterpart) meets another electron that 'happens' to be nearby, and the annihilation results in (another) high-energy photon being emitted. While, as Feynman underlines, "this is a sequence of events which has been observed in the laboratory", how is all this possible? We create *matter* – an electron and a positron *both* have considerable *mass* – **out of nothing** here! [Well… OK – there's a photon, so that's some energy to work with…]

Feynman explains this weird observation without reference to the Uncertainty Principle. He just notes that "Every particle in Nature has an amplitude to move backwards in time, and therefore has an anti-particle." And so that's what this electron coming from the bottom-left corner does: it emits a photon and then *the electron moves backwards in time*. So, while **we** see a (very short-lived) positron moving forward, it's actually *an electron quickly traveling back in time according to Feynman!* And, after a short while, it has had enough of going back in time, so then it *absorbs* a photon and continues in a slightly different direction. Hmm… If this does *not* sound fishy to you, it does to me.

The more standard explanation is in terms of the Uncertainty Principle *applied to energy and time*. Indeed, I mentioned that we have several pairs of conjugate variables in quantum mechanics: position and momentum are one such pair (related through the *de Broglie* relation p = ħk), but energy and time are another (related through the *other de Broglie* relation E = h*f* = ħω). While the 'energy-time uncertainty principle' – **ΔE·Δt ≥ ħ/2** – resembles the position-momentum relationship above, it is apparently used for 'very short-lived products' produced in high-energy collisions in accelerators only. I must assume the short-lived positron in the Feynman diagram is such an example: **there is some kind of borrowing of energy (remember mass is equivalent to energy) against time, and then normalcy soon gets restored.** Now *THAT* is something else than indeterminacy, I'd say.
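Just to get a feel for the time scales involved in such 'borrowing': if we take ΔE to be the energy needed to create the electron-positron pair – at least twice the 0.511 MeV electron rest energy, which is an illustrative assumption on my part, not a rigorous calculation – then Δt ≈ ħ/2ΔE comes out at a few times 10^{−22} seconds:

```python
hbar = 1.055e-34   # reduced Planck constant, J·s (rounded)
eV = 1.602e-19     # 1 electronvolt in Joule

# Energy 'borrowed' to create an electron-positron pair: at least twice
# the electron rest energy of 0.511 MeV. Illustrative estimate only.
delta_E = 2 * 0.511e6 * eV       # ~1.6e-13 J

# Energy-time uncertainty: the borrowed energy must be 'paid back' within
delta_t = hbar / (2 * delta_E)   # ~3e-22 seconds
print(delta_t)
```

So whatever is going on in that diagram, it is over almost unimaginably fast.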

But so Feynman would say both interpretations are equivalent, because Nature doesn’t care about our interpretations.

What to say in conclusion? I don't know. I obviously have some more work to do before I'll be able to claim to understand the uncertainty principle – or quantum mechanics in general – *somewhat*. I think the next step is to solve my problem with the summary **'not enough arrows' explanation**, which is – evidently – linked to the relation between *energy* and *size* of particles. That's the one loose end I really need to tie up, I feel! I'll keep you posted!