# Wavefunctions and the twin paradox

My previous post was awfully long, so I must assume many of my readers started to read it but… Well… Gave up halfway or even sooner. 🙂 I added a footnote, though, which is interesting to reflect upon. Also, I know many of my readers aren't interested in the math—even if they understand one cannot really appreciate quantum theory without the math. But… Yes. I may have left some readers behind. Let me, therefore, pick up the most interesting bit of all of the stories in my last posts in as easy a language as I can find.

We have that weird 360/720° symmetry in quantum physics or—to be precise—we have it for elementary matter-particles (think of electrons, for example). In order to, hopefully, help you understand what it's all about, I had to explain the often-confused but substantially different concepts of a *reference frame* and a *representational base* (or *representation* tout court). I won't repeat that explanation, but think of the following.

If we just rotate the reference frame over 360°, we're just using the same reference frame, and so we see the same thing: some object which we, vaguely, describe by some e^(i·θ) function. Think of some spinning object. In its own reference frame, it will just spin around some center or axis; seen from ours, it will be moving in some direction while it's spinning—as illustrated below.

To be precise, I should say that we describe it by some *Fourier* sum of such functions. Now, if its spin direction is… Well… In the other direction, then we'll describe it by some e^(−i·θ) function (again, you should read: a *Fourier* sum of such functions). Now, the weird thing is the following: if we rotate the object itself, over the same 360°, we get a *different* object: our e^(i·θ) or e^(−i·θ) function (again: think of a *Fourier* sum, so that's a wave *packet*, really) becomes a −e^(±i·θ) thing. We get a *minus* sign in front of it. So what happened here? What's the difference, really?

Well… I don't know. It's very deep. Think of you and me as two electrons who are watching each other. If I do nothing, and you keep watching me while turning around me, for a full 360° (so that's a rotation of your reference frame over 360°), then you'll end up where you were when you started and, importantly, you'll see the same thing: me. 🙂 I mean… You'll see *exactly* the same thing: if I was an e^(+i·θ) wave packet, I am still an e^(+i·θ) wave packet now. Or if I was an e^(−i·θ) wave packet, then I am still an e^(−i·θ) wave packet now. Easy. Logical. Obvious, right?

But so now we try something different: *I* turn around, over a full 360° turn, and *you* stay where you are and watch me while I am turning around. What happens? Classically, nothing should happen but… Well… This is the weird world of quantum mechanics: when I am back where I was—looking at you again, so to speak—then… Well… I am not quite the same any more. Or… Well… Perhaps I am, but you *see* me differently. If I was an e^(+i·θ) wave packet, then I've become a −e^(+i·θ) wave packet now.

Not hugely different but… Well… That *minus* sign matters, right? Or if I was a wave packet built up from elementary a·e^(−i·θ) waves, then I've become a −a·e^(−i·θ) wave packet now. What happened?

It makes me think of the twin paradox in special relativity. We know it's a *paradox*—so that's an *apparent* contradiction only: we know which twin stayed on Earth and which one traveled, because of the forces of acceleration and deceleration on the traveling twin. The one who stays on Earth does not experience any acceleration or deceleration. Is it the same here? I mean… The one who's turning around must experience some *force*.

Can we relate this to the twin paradox? Maybe. Note that a *minus* sign in front of the e^(±i·θ) functions amounts to a minus sign in front of both the sine and cosine components. So… Well… The negative of a sine and cosine is the sine and cosine but with a phase shift of 180°: −cos θ = cos(θ ± π) and −sin θ = sin(θ ± π). Now, adding or subtracting a *common* phase factor to/from the argument of the wavefunction amounts to *changing* the origin of time. So… Well… I do think the twin paradox and this rather weird business of 360° and 720° symmetries are, effectively, related. 🙂
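For the skeptics, this little identity is easy to check numerically. The snippet below is just an illustration of the point above—that putting a minus sign in front of e^(i·θ) is the very same thing as shifting its phase by 180°:

```python
import cmath
import math

# A minus sign in front of e^(i·θ) equals a 180° (π radian) phase shift:
#   −e^(i·θ) = e^(i·(θ + π))
for theta in [0.3, 1.0, 2.5]:
    lhs = -cmath.exp(1j * theta)
    rhs = cmath.exp(1j * (theta + math.pi))
    assert abs(lhs - rhs) < 1e-12

    # Component-wise, that is: −cos θ = cos(θ + π) and −sin θ = sin(θ + π)
    assert abs(-math.cos(theta) - math.cos(theta + math.pi)) < 1e-12
    assert abs(-math.sin(theta) - math.sin(theta + math.pi)) < 1e-12
```

Every assertion passes, for any θ you care to try.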

Post scriptum: Google honors Max Born's 135th birthday today. 🙂 I think that's a great coincidence in light of the stuff I've been writing about lately (possible interpretations of the wavefunction). 🙂

# Quantum Mechanics: The Other Introduction

About three weeks ago, I brought my most substantial posts together in one document: it's the Deep Blue page of this site. I also published it on Amazon/Kindle. It's nice. It crowns many years of self-study, and many nights of short and bad sleep—as I was mulling over yet another paradox haunting me in my dreams. It's been an extraordinary climb but, frankly, the view from the top is magnificent. 🙂

The offer is there: anyone who is willing to go through it and offer constructive and/or substantial comments will be included in the book's acknowledgements section when I go for a second edition (which it needs, I think). The first person to be acknowledged here is my wife, though: Maria Elena Barron, as she has given me the spacetime and, more importantly, the freedom to take this bull by its horns. Below I just copy the foreword, to give you a taste of it. 🙂

# Foreword

Another introduction to quantum mechanics? Yep. I am not hoping to sell many copies, but I do hope my unusual background—I graduated as an economist, not as a physicist—will encourage you to take on the challenge and grind through this.

Iâve always wanted to thoroughly understand, rather than just vaguely know, those quintessential equations: the Lorentz transformations, the wavefunction and, above all, SchrĂśdingerâs wave equation. In my bookcase, Iâve always had what is probably the most famous physics course in the history of physics: Richard Feynmanâs Lectures on Physics, which have been used for decades, not only at Caltech but at many of the best universities in the world. Plus a few dozen other books. Popular booksâwhich I now regret I ever read, because they were an utter waste of time: the language of physics is math and, hence, one should read physics in mathânot in any other language.

But Feynmanâs Lectures on Physicsâthree volumes of about fifty chapters eachâare not easy to read. However, the experimental verification of the existence of the Higgs particle in CERNâs LHC accelerator a couple of years ago, and the award of the Nobel prize to the scientists who had predicted its existence (including Peter Higgs and FranĂ§ois Englert), convinced me it was about time I take the bull by its horns. While, I consider myself to be of average intelligence only, I do feel thereâs value in the ideal of the âRenaissance manâ and, hence, I think stuff like this is something we all should try to understandâsomehow. So I started to read, and I also started a blog (www.readingfeynman.org) to externalize my frustration as I tried to cope with the difficulties involved. The site attracted hundreds of visitors every week and, hence, it encouraged me to publish this booklet.

So what is it about? What makes it special? In essence, it is a common-sense introduction to the key concepts in quantum physics. However, while common-sense, it does not shy away from the math, which is complicated, but not impossible. So this little book is surely not a Guide to the Universe for Dummies. I do hope it will guide some Not-So-Dummies. It basically recycles what I consider to be my more interesting posts, but combines them in a comprehensive structure.

It is a bit of a philosophical analysis of quantum mechanics as well, as I will – hopefully – do a better job than others in distinguishing the *mathematical* concepts from what they are supposed to *describe*, i.e. *physical* reality.

Last but not least, it does offer some new didactic perspectives. For those who know the subject already, let me briefly point these out:

I. Few, if any, of the popular writers seem to have noted that the argument of the wavefunction (θ = E·t − p·x)—using natural units (hence, the numerical value of ħ and c is one), and for an object moving at constant velocity (hence, x = v·t)—can be written as the product of the proper time of the object and its rest mass:

θ = E·t − p·x = mv·t − mv·v·x = mv·(t − v·x)

⇔ θ = m0·(t − v·x)/√(1 − v²) = m0·t′

Hence, the argument of the wavefunction is just the proper time of the object, with the rest mass acting as a scaling factor for the time: the internal clock of the object ticks much faster if it's heavier. This symmetry between the argument of the wavefunction of the object as measured in its own (inertial) reference frame, and its argument as measured by us, in our own reference frame, is remarkable, and allows us to understand the nature of the wavefunction in a more intuitive way.
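The little derivation above is easy to verify with numbers. The snippet below picks an (arbitrary, illustrative) rest mass and velocity, works in natural units (ħ = c = 1), and checks that E·t − p·x is indeed m0 times the proper time:

```python
import math

# Natural units (ħ = c = 1). Illustrative numbers only.
m0, v = 2.0, 0.6
gamma = 1 / math.sqrt(1 - v**2)
E = gamma * m0          # relativistic energy
p = gamma * m0 * v      # relativistic momentum

t = 5.0                 # time in our (lab) reference frame
x = v * t               # object moves at constant velocity, so x = v·t

theta = E * t - p * x                       # argument of the wavefunction
tau = (t - v * x) / math.sqrt(1 - v**2)     # proper time t′ of the object

assert abs(theta - m0 * tau) < 1e-12        # θ = m0·t′, as claimed
```

With these numbers, θ = 12.5 − 4.5 = 8, and m0·t′ = 2·4 = 8. The rest mass really does act as a scaling factor for the proper time.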

While this approach reflects Feynman's idea of the photon stopwatch, the presentation in this booklet generalizes the concept for all wavefunctions, first and foremost the wavefunction of the matter-particles that we're used to (e.g. electrons).

II. Few, if any, have thought of looking at Schrödinger's wave equation as an energy propagation mechanism. In fact, when helping my daughter out as she was trying to understand non-linear regression (logit and Poisson regressions), I suddenly realized we can analyze the wavefunction as a link function that connects two physical spaces: the physical space of our moving object, and a physical energy space.

Re-inserting Planck's quantum of action in the argument of the wavefunction—so we write θ as θ = (E/ħ)·t − (p/ħ)·x = [E·t − p·x]/ħ—we may assign a physical dimension to it: when interpreting ħ as a scaling factor only (and, hence, when we only consider its numerical value, not its physical dimension), θ becomes a quantity expressed in newton·meter·second, i.e. the (physical) dimension of action. It is only natural, then, that we would associate the real and imaginary part of the wavefunction with some physical dimension too, and a dimensional analysis of Schrödinger's equation tells us this dimension must be energy.

This perspective allows us to look at the wavefunction as an energy propagation mechanism, with the real and imaginary part of the probability amplitude interacting in very much the same way as the electric and magnetic field vectors E and B. This leads me to the next point, which I make rather emphatically in this booklet: the propagation mechanism for electromagnetic energy—as described by Maxwell's equations—is mathematically equivalent to the propagation mechanism that's implicit in the Schrödinger equation.

I am, therefore, able to present the Schrödinger equation in a much more coherent way, describing not only how this famous equation works for electrons, or matter-particles in general (i.e. fermions or spin-1/2 particles), which is probably the only use of the Schrödinger equation you are familiar with, but also how it works for bosons, including the photon, of course, but also the theoretical zero-spin boson!

In fact, I am personally rather proud of this. Not because I am doing something that hasn't been done before (I am sure many have come to the same conclusions before me), but because one always has to trust one's intuition. So let me say something about that third innovation: the photon wavefunction.

III. Let me tell you the little story behind my photon wavefunction. One of my acquaintances is a retired nuclear scientist. While he knew I was delving into it all, I knew he had little time to answer any of my queries. However, when I asked him about the wavefunction for photons, he bluntly told me photons didn't have a wavefunction. I should just study Maxwell's equations and that's it: there's no wavefunction for photons, just this traveling electric and magnetic field vector. Look at Feynman's Lectures, or any textbook, he said. None of them talk about photon wavefunctions. That's true, but I knew he had to be wrong. I mulled over it for several months, and then just sat down and started to fiddle with Maxwell's equations, assuming the oscillations of the E and B vector could be described by regular sinusoids. And—lo and behold!—I derived a wavefunction for the photon. It's fully equivalent to the classical description, but the new expression solves the Schrödinger equation, if we modify it in a rather logical way: we have to double the diffusion constant, which makes sense, because E and B give you two waves for the price of one!

[âŚ]

In any case, I am getting ahead of myself here, and so I should wrap up this rather long introduction. Let me just say that, through my rather long journey in search of understanding—rather than knowledge alone—I have learned there are so many wrong answers out there: wrong answers that hamper rather than promote a better understanding. Moreover, I was most shocked to find out that such wrong answers are not the preserve of amateurs alone! This emboldened me to write what I write here, and to publish it. Quantum mechanics is a logical and coherent framework, and it is not all that difficult to understand. One just needs good pointers, and that's what I want to provide here.

As of now, it focuses on the *mechanics* in particular, i.e. the concept of the wavefunction and wave equation (better known as Schrödinger's equation). The other aspect of quantum mechanics – i.e. the idea of *uncertainty* as implied by the quantum idea – will receive more attention in a later version of this document. I should also say I will limit myself to quantum electrodynamics (QED) only, so I won't discuss quarks (i.e. quantum chromodynamics, which is an entirely different realm), nor will I delve into any of the other more recent advances of physics.

In the end, you'll still be left with lots of unanswered questions. However, that's quite OK, as Richard Feynman himself was of the opinion that he did not *understand* the topic the way he would like to understand it. But then that's exactly what draws all of us to quantum physics: a common search for a deep and full *understanding* of reality, rather than just some superficial description of it, i.e. knowledge alone.

So letâs get on with it. I amÂ notÂ saying this is going to be easy reading. In fact, I blogged about much easier stuff than this in my blogâtreating onlyÂ aspectsÂ of the whole theory. This is theÂ whole thing, and it’s not easy to swallow. In fact, it may well too big to swallow as a whole. But please do give it a try. I wanted this to be an intuitive but formally correct introduction to quantum math. However, when everything is said and done, you are the only who can judge if I reached that goal.

Of course, I should not forget the acknowledgements but… Well… It was a rather lonely venture, so I am only going to acknowledge my wife here, Maria, who gave me all of the spacetime and all of the freedom I needed, as I would get up early, or work late after coming home from my regular job. I sacrificed weekends, which we could have spent together, and – when mulling over yet another paradox – the nights were often short and bad. Frankly, it's been an extraordinary climb, but the view from the top is magnificent.

I just need to insert one caution: my site (www.readingfeynman.org) includes animations, which make it much easier to grasp some of the mathematical concepts that I will be explaining. Hence, I warmly recommend you also have a look at that site, and its Deep Blue page in particular – as that page has the same contents, more or less, but the animations make it a much easier read.

Have fun with it!

Jean Louis Van Belle, BA, MA, BPhil, Drs.

# Re-visiting relativity and four-vectors: the proper time, the tensor and the four-force

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. In addition, I note the dark force has amused himself by removing some material. So there is no use reading this. Read my recent papers instead. 🙂

Original post:

My previous post explained how four-vectors transform from one reference frame to the other. Indeed, a four-vector is *not* just some one-dimensional array of four numbers: it represents something—a physical vector that… Well… Transforms like a vector. 🙂 So what *vectors* are we talking about? Let's see what we have:

1. We knew the position four-vector already, which we'll write as xμ = (ct, x, y, z) = (ct, x).
2. We also proved that Aμ = (Φ, Ax, Ay, Az) = (Φ, A) is a four-vector: it's referred to as the four-potential.
3. We also know the momentum four-vector from the *Lectures* on special relativity. We write it as pμ = (E, px, py, pz) = (E, p), with E = γm0, p = γm0v, and γ = (1−v²/c²)^(−1/2) or, for c = 1, γ = (1−v²)^(−1/2).
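As a quick numerical sanity check on that third item (with made-up numbers, and c = 1): the 'length' of the momentum four-vector, E² − p², is the same in every reference frame, and equal to the rest mass squared:

```python
import math

# E² − p² = m0² for every velocity v (natural units, c = 1).
m0 = 1.0
for v in [0.0, 0.3, 0.6, 0.9]:
    gamma = 1 / math.sqrt(1 - v**2)
    E = gamma * m0          # E = γ·m0
    p = gamma * m0 * v      # p = γ·m0·v
    # E² − p² = γ²·m0²·(1 − v²) = m0², whatever v is:
    assert abs((E**2 - p**2) - m0**2) < 1e-9
```

That invariance is exactly what makes it a genuine four-vector, as the next paragraphs explain.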

To show that it's *not* just a matter of adding some fourth t-component to a three-vector, Feynman gives the example of the four-velocity vector. We have vx = dx/dt, vy = dy/dt and vz = dz/dt, but a vμ = (d(ct)/dt, dx/dt, dy/dt, dz/dt) = (c, dx/dt, dy/dt, dz/dt) 'vector' is, obviously, not a four-vector. [Why obviously? The inner product vμvμ is not invariant.] In fact, Feynman 'fixes' the problem by noting that ct, x, y and z have the 'right behavior', but the d/dt operator doesn't: the d/dt operator is *not* an invariant operator. So how does he fix it then? He tries the (1−v²/c²)^(−1/2)·d/dt operator and, yes, it turns out we do get a four-vector then. In fact, we get that four-velocity vector uμ that we were looking for. [Note we assume we're using equivalent time and distance units now, so c = 1 and v/c reduces to a new variable v.]

Now how do we know this is a four-vector? How can we prove this one? It's simple. We can get it from our pμ = (E, p) by dividing it by m0, which is an *invariant* scalar in four dimensions too. Now, it is easy to see that a division by an invariant scalar does *not* change the transformation properties. So just write it all out, and you'll see that pμ/m0 = uμ and, hence, that uμ is a four-vector too. 🙂

We've got an interesting thing here, actually: division by an invariant scalar, or applying that (1−v²/c²)^(−1/2)·d/dt operator, which is referred to as an invariant operator, to a four-vector will give us another four-vector. Why is that? Let's switch to compatible time and distance units, so c = 1, to simplify the analysis that follows.

#### The invariant (1−v²)^(−1/2)·d/dt operator and the proper time s

Why is the (1−v²)^(−1/2)·d/dt operator invariant? Why does it 'fix' things? Well… Think about the invariant spacetime interval (Δs)² = Δt² − Δx² − Δy² − Δz² going to the limit (ds)² = dt² − dx² − dy² − dz². Of course, we can and should relate this to an invariant quantity s = ∫ ds. Just like Δs, this quantity also 'mixes' time and distance. Now, we could try to associate some derivative d/ds with it because, as Feynman puts it, "it should be a nice four-dimensional operation because it is invariant with respect to a Lorentz transformation." Yes. It should be. So let's relate ds to dt and see what we get. That's easy enough: dx = vx·dt, dy = vy·dt, dz = vz·dt, so we write:

(ds)² = dt² − vx²·dt² − vy²·dt² − vz²·dt² ⇔ (ds)² = dt²·(1 − vx² − vy² − vz²) = dt²·(1 − v²)

and, therefore, ds = dt·(1−v²)^(1/2). So our operator d/ds is equal to (1−v²)^(−1/2)·d/dt, and we can apply it to *any* four-vector, as we are sure that, as an invariant operator, it's going to give us another four-vector. I'll highlight the result, because it's important:

The d/ds = (1−v²)^(−1/2)·d/dt operator is an invariant operator for four-vectors.

For example, if we apply it to xμ = (t, x, y, z), we get the very same four-velocity vector uμ:

dxμ/ds = uμ = pμ/m0
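You can check this chain of equalities numerically. For uniform motion along x, dxμ/dt = (1, v, 0, 0), so applying the invariant operator gives uμ = γ·(1, v, 0, 0). The sketch below (illustrative numbers) verifies that this uμ has an invariant 'length' of exactly 1, and that it equals pμ/m0:

```python
import math

# Uniform motion along x, natural units (c = 1).
v = 0.8
gamma = 1 / math.sqrt(1 - v**2)

# dxμ/dt = (1, v, 0, 0), so uμ = dxμ/ds = γ·(1, v, 0, 0):
u = [gamma, gamma * v, 0.0, 0.0]

# Its invariant length uμ·uμ (with the +−−− signature) is exactly 1:
length = u[0]**2 - u[1]**2 - u[2]**2 - u[3]**2
assert abs(length - 1.0) < 1e-9

# And uμ = pμ/m0: with, say, m0 = 2, pμ = (γ·m0, γ·m0·v, 0, 0)
m0 = 2.0
p = [gamma * m0, gamma * m0 * v, 0.0, 0.0]
assert all(abs(pi / m0 - ui) < 1e-9 for pi, ui in zip(p, u))
```

Note that the invariant length being 1 (rather than some v-dependent number) is precisely what the naive d/dt 'vector' failed to achieve.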

Now, if you're somewhat awake, you should ask yourself: what is this s, really, and what is this operator all about? Our new function s = ∫ ds is *not* the distance function, as it's got both time and distance in it. Likewise, the invariant operator d/ds = (1−v²)^(−1/2)·d/dt has both time and distance in it (the distance is implicit in the v² factor). Still, it is referred to as the *proper time* along the path of a particle. Now why is that? If it's got distance *and* time in it, why don't we call it the 'proper distance-time' or something?

Well… The invariant quantity s actually is the time that would be measured by a clock that's moving along, in spacetime, with the particle. Just think of it: in the reference frame of the moving particle itself, Δx, Δy and Δz must be zero, because it's not moving in its own reference frame. So (Δs)² = Δt² − Δx² − Δy² − Δz² reduces to (Δs)² = Δt², and so we're only adding time to s. Of course, this view of things implies that the proper time itself is fixed only up to some arbitrary additive constant, namely the setting of the clock at some event along the 'world line' of our particle, which is its path in four-dimensional spacetime. But… Well… In a way, s is the 'genuine' or 'proper' time coming with the particle's reference frame, and so that's why Einstein called it that. You'll see (later) that it plays a very important role in general relativity theory (which is a topic we haven't discussed yet: we've only touched special relativity, so no gravity effects).

OK. I know this is simple and complicated at the same time: the math is (fairly) easy but, yes, it may be difficult to 'understand' this in some kind of intuitive way. But let's move on.

#### The four-force vector fμ

We know the relativistically correct equation for the motion of some charge q. It's just Newton's Law F = dp/dt = d(mv)/dt. The only difference is that we are *not* assuming that m is some constant. Instead, we use the p = γm0v formula to get:

How can we get a four-vector for the force? It turns out that we get it when applying our new invariant operator to the momentum four-vector pμ = (E, p), so we write: fμ = dpμ/ds. But pμ = m0uμ = m0·dxμ/ds, so we can re-write this as fμ = d(m0·dxμ/ds)/ds, which gives us a formula which is reminiscent of the Newtonian F = ma equation:

What *is* this thing? Well… It's not so difficult to verify that the x-, y- and z-components are just our old-fashioned Fx, Fy and Fz, so these are the components of F. The t-component is (1−v²)^(−1/2)·dE/dt. Now, dE/dt is the time rate of change of energy and, hence, it's equal to the rate of doing work on our charge, which is equal to F·v. So we can write fμ as:

#### The force and the tensor

We will now derive that formula which we ended the previous post with. We start with calculating the spacelike components of fμ from the Lorentz formula F = q(E + v×B). [The terminology is nice, isn't it? The spacelike components of the four-force vector! Now *that* sounds impressive, doesn't it? But so… Well… It's really just the old stuff we know already.] So we start with fx = Fx, and write it all out:

What a monster! But, hey! We can 'simplify' this by substituting stuff by (1) the t-, x-, y- and z-components of the four-velocity vector uμ and (2) the components of our tensor Fμν = [Fij] = [∂iAj − ∂jAi] with i, j = t, x, y, z. We'll also pop in the diagonal Fxx = 0 element, just to make sure it's all there. We get:

Looks better, doesn't it? 🙂 Of course, it's just the same, really. This is just an exercise in symbolism. Let me insert the electromagnetic tensor we defined in our previous post, just as a reminder of what that Fμν matrix actually is:

If you read my previous post, this matrix—or the concept of a tensor—has no secrets for you. Let me briefly summarize it, because it's an important result as well. The tensor is (a generalization of) the cross-product in four-dimensional space. We take two vectors: aμ = (at, ax, ay, az) and bμ = (bt, bx, by, bz), and then we take cross-products of their components just like we did in three-dimensional space, so we write Tij = ai·bj − aj·bi. Now, it's easy to see that this combination implies that Tij = −Tji and that Tii = 0, which is why we only have *six* independent numbers out of the 16 possible combinations, and which is why we'll get a so-called anti-symmetric matrix when we organize them in a matrix. In three dimensions, the very same definition of the cross-product Tij gives us 9 combinations, and only 3 independent numbers, which is why we represented our 'tensor' as a vector too! In four-dimensional space we can't do that: six things cannot be represented by a four-vector, so we need to use this matrix, which is referred to as a tensor of the second rank in four dimensions. [When you start using words like that, you've come a long way, really. :-)]
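The counting argument above is easy to make concrete. The few lines below build Tij = ai·bj − aj·bi from two made-up four-vectors and check the antisymmetry, the zero diagonal, and the count of six independent components:

```python
# Cross-product-style tensor T_ij = a_i·b_j − a_j·b_i in four dimensions.
# Components are made-up numbers, purely for illustration.
a = [1.0, 2.0, 3.0, 4.0]   # aμ = (at, ax, ay, az)
b = [5.0, 6.0, 7.0, 8.0]   # bμ = (bt, bx, by, bz)

T = [[a[i] * b[j] - a[j] * b[i] for j in range(4)] for i in range(4)]

# Antisymmetry: T_ij = −T_ji, so the diagonal T_ii is zero…
for i in range(4):
    assert T[i][i] == 0.0
    for j in range(4):
        assert T[i][j] == -T[j][i]

# …and only the 6 components above the diagonal are independent (out of 16):
independent = [(i, j) for i in range(4) for j in range(4) if i < j]
assert len(independent) == 6
```

In three dimensions the same construction gives 3 independent numbers, which is why the ordinary cross-product can masquerade as a vector; with 6, it can't.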

[…] OK. Back to our four-force. It's easy to get a similar one-liner for fy and fz too, of course, as well as for ft. But… Yes, ft… Is it the same thing, really? Let me quickly copy Feynman's calculation for ft:

It does: remember that v×B and v are orthogonal, and so their dot product is zero indeed. So, to make a long story short, the four equations—one for each component of the four-force vector fμ—can be summarized in the following elegant equation:

Writing this all requires a few conventions, however. For example, Fμν is a 4×4 matrix, and so uν has to be written as a 1×4 vector. And the formulas for the fx and ft components also make it clear that we want to use the +−−− signature here, so the convention for the signs in the uνFμν product is the same as that for the scalar product aμbμ. So, in short, you really need to interpret what's being written here.

A more important question, perhaps, is: what can we do with it? Well… Feynman's evaluation of the usefulness of this formula is rather succinct: "Although it is nice to see that the equations can be written that way, this form is not particularly useful. It's usually more convenient to solve for particle motions by using the F = q(E + v×B) = d[(1−v²)^(−1/2)·m0v]/dt equations, and that's what we will usually do."

Having said that, this formula really makes good on the promise I started my previous post with: we wanted a formula, some *mathematical* construct, that effectively presents the electromagnetic force as *one* force, as one physical reality. So… Well… Here it is! 🙂

Well… That's it for today. Tomorrow we'll talk about energy and about a *very* mysterious concept—the electromagnetic mass. That should be fun! So I'll c u tomorrow! 🙂

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:


Some content on this page was disabled on June 20, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

# On (special) relativity: what’s relative?

Pre-scriptum (dated 26 June 2020): These posts on elementary math and physics have not suffered much from the attack by the dark force—which is good, because I still like them. While my views on the true nature of light, matter and the force or forces that act on them have evolved significantly as part of my explorations of a more realist (classical) explanation of quantum mechanics, I think most (if not all) of the analysis in this post remains valid and fun to read. In fact, I find the simplest stuff is often the best. 🙂

Original post:

This is my third and final post about special relativity. In the previous posts, I introduced the general idea and the Lorentz transformations. I present these Lorentz transformations once again below, next to their Galilean counterparts. [Note that I continue to assume, for simplicity, that the two reference frames move with respect to each other along the x-axis only, so the y- and z-components of u are zero. It is not all that difficult to generalize to three dimensions (especially not when using vectors), but it makes an intuitive understanding of what relativity is all about more difficult.]

As you can see, under a Lorentz transformation, the new 'primed' space and time coordinates are a mixture of the 'unprimed' ones. Indeed, the new x' is a mixture of x and t, and the new t' is a mixture as well. You don't have that under a Galilean transformation: in the Newtonian world, space and time are neatly separated, and time is absolute, i.e. it is the same regardless of the reference frame. In Einstein's world—our world—that's not the case: time is relative, or *local* as Hendrik Lorentz termed it, and so it's space-time—i.e. 'some kind of union of space and time' as Minkowski termed it—that transforms. In practice, physicists will use so-called four-vectors, i.e. vectors with four coordinates, to keep track of things. These four-vectors incorporate both the three-dimensional space vector as well as the time dimension. However, we won't go into the mathematical details of that here.

What else is relative? Everything, except the speed of light. Of course, velocity is relative, just like in the Newtonian world, but the equation to go from a velocity as measured in one reference frame to a velocity as measured in the other is different: it's not a matter of just adding or subtracting speeds. In addition, besides time, mass becomes a relative concept as well in Einstein's world, and that was definitely *not* the case in the Newtonian world.

What about energy? Well… We mentioned that velocities are relative in the Newtonian world too, so momentum and kinetic energy were relative in that world as well: what you would measure for those two quantities would depend on your reference frame. However, here also, we get a different formula now. In addition, we have this weird equivalence between mass and energy in Einstein's world, about which I should also say something more.

But let’s tackle these topics one by one. We’ll start with velocities.

#### Relativistic velocity

In the Newtonian world, it was easy. From the Galilean transformation equations above, it’s easy to see that

v' = dx'/dt' = d(x − ut)/dt = dx/dt − d(ut)/dt = v − u

So, in the Newtonian world, it's just a matter of adding/subtracting speeds indeed: if my car goes 100 km/h (v), and yours goes 120 km/h, then you will see my car falling behind at a speed of (minus) 20 km/h. That's it. In Einstein's world, it is not so simple. Let's take the spaceship example once again. So we have a man on the ground (the inertial or 'unprimed' reference frame) and a man in the spaceship (the primed reference frame), which is moving away from us with velocity u.

Now, suppose an object is moving inside the spaceship (along the x-axis as well) with a (uniform) velocity vx', as measured from the point of view of the man inside the spaceship. Then the displacement x' will be equal to x' = vx'·t'. To know how that looks from the man on the ground, we just need to use the opposite Lorentz transformations: just replace u by −u everywhere (to the man in the spaceship, it's like the man on the ground moves away with velocity −u), and note that the Lorentz factor does not change because we're squaring and (−u)² = u². So we get:

x = γ(x' + ut') and t = γ(t' + ux'/c²)

Hence, x' = vx'·t' can be written as x = γ(vx'·t' + ut'). Now we should also substitute t', because we want to measure everything from the point of view of the man on the ground. Now, t = γ(t' + u·vx'·t'/c²). Because we're talking uniform velocities, vx (i.e. the velocity of the object as measured by the man on the ground) will be equal to x divided by t (so we don't need to take the time derivative of x), and then, after some simplifying and re-arranging (note, for instance, how the t' factor miraculously disappears), we get:

vx = (vx' + u)/(1 + u·vx'/c²)

What does this rather complicated formula say? Just put in some numbers:

• Suppose the object is moving at half the speed of light, so 0.5c, and that the spaceship is moving itself also at 0.5c, then we get the rather remarkable result that, from the point of view of the observer on the ground, that object is not going as fast as light, but only at vx = (0.5c + 0.5c)/(1 + 0.5·0.5) = 0.8c.
• Or suppose we're looking at a light beam inside the spaceship, so something that's traveling at speed c itself in the spaceship. How does that look to the man on the ground? Just put in the numbers: vx = (0.5c + c)/(1 + 0.5·1) = c! So the speed of light is not dependent on the reference frame: it looks the same – both to the man in the ship as well as to the man on the ground. As Feynman puts it: "This is good, for it is, in fact, what the Einstein theory of relativity was designed to do in the first place – so it had better work!"
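For those who like to check such numbers, the addition formula is easy to put in a short script (a sketch in Python, with all speeds expressed as fractions of c, so c = 1):

```python
# Relativistic velocity addition: v = (v' + u) / (1 + u·v'), with c = 1,
# so all speeds below are fractions of the speed of light.
def add_velocities(v_prime: float, u: float) -> float:
    """Velocity of an object as seen from the ground, given its velocity
    v_prime inside the spaceship and the ship's own velocity u."""
    return (v_prime + u) / (1 + u * v_prime)

# The two examples from the text:
object_speed = add_velocities(0.5, 0.5)  # 0.5c inside a ship doing 0.5c -> 0.8c
light_speed = add_velocities(1.0, 0.5)   # a light beam stays at c in every frame
print(object_speed, light_speed)
```

Whatever v' and u you feed it (below c), the result never exceeds 1, i.e. the speed of light.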

It's interesting to note that, even if u has no y- or z-component, velocity in the y direction will be affected too. Indeed, if an object is moving upward in the spaceship, then the distance of travel of that object to the man on the ground will appear to be larger. See the triangle below: if that object travels a distance Δs' = Δy' = Δy = v'Δt' with respect to the man in the spaceship, then it will have traveled a distance Δs = vΔt to the man on the ground, and that distance is longer.

I won’t go through the process of substituting and combining the Lorentz equations (you can do that yourself) but the grand result is the following:

vy = (1/γ)vy'

1/γ is the reciprocal of the Lorentz factor, and I'll leave it to you to work out a few numeric examples. When you do that, you'll find the rather remarkable result that vy is actually less than vy'. For example, for u = 0.6c, 1/γ will be equal to 0.8, so vy will be 20% less than vy'. How is that possible? The vertical distance is what it is (Δy' = Δy), and that distance is not affected by the 'length contraction' effect (y' = y). So how can the vertical velocity be smaller? The answer is easy to state, but not so easy to understand: it's the time dilation effect: time in the spaceship goes slower. Hence, the object will cover the same vertical distance indeed – for both observers – but, from the point of view of the observer on the ground, the object will apparently need more time to cover that distance than the time measured by the man in the spaceship: Δt > Δt'. Hence, the logical conclusion is that the vertical velocity of that object will appear to be less to the observer on the ground.

How much less? The time dilation factor is the Lorentz factor. Hence, Δt = γΔt'. Now, if u = 0.6c, then γ will be equal to 1.25 and Δt = 1.25Δt'. Hence, if that object would need, say, one second to cover that vertical distance, then, from the point of view of the observer on the ground, it would need 1.25 seconds to cover the same distance. Hence, its speed as observed from the ground is indeed only 1/(5/4) = 4/5 = 0.8 of its speed as observed by the man in the spaceship.
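Again, these numbers are easy to verify (a Python sketch; u = 0.6c, and the vertical speed inside the ship is a hypothetical 1 unit of distance per second):

```python
import math

# Numeric check of the transverse-velocity result vy = vy' / gamma,
# with the ship's speed u expressed as a fraction of c.
def lorentz_factor(u: float) -> float:
    return 1.0 / math.sqrt(1.0 - u * u)

u = 0.6                    # ship speed: 0.6c
gamma = lorentz_factor(u)  # the time dilation factor: 1.25
vy_prime = 1.0             # vertical speed measured inside the ship
vy = vy_prime / gamma      # vertical speed seen from the ground: 20% less
dt = gamma * 1.0           # one ship-second lasts 1.25 ground-seconds
print(gamma, vy, dt)
```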

Is that hard to understand? Maybe. You have to think through it. One common mistake is that people think that length contraction and/or time dilation are, somehow, related to the fact that we are looking at things from a distance and that light needs time to reach us. Indeed, on the Web, you can find complicated calculations using the angle of view and/or the line of sight (and tons of trigonometric formulas) as, for example, shown in the drawing below. These have nothing to do with relativity theory and you'll never get the Lorentz transformation out of them. They are plain nonsense: they are rooted in an inability of these youthful authors to go beyond Galilean relativity. Length contraction and/or time dilation are not some kind of visual trick or illusion. If you want to see how one can derive the Lorentz factor geometrically, you should look for a good description of the Michelson-Morley experiment in a good physics handbook such as, yes :-), Feynman's Lectures.

So, I repeat: illustrations that try to explain length contraction and time dilation in terms of line of sight and/or angle of view are useless and will not help you to understand relativity. On the contrary, they will only confuse you. I will let you think through this and move on to the next topic.

Relativistic mass and relativistic momentum

Einstein actually stated two principles in his (special) relativity theory:

1. The first is the Principle of Relativity itself, which is basically just the same as Newton's principle of relativity. So that was nothing new actually: "If a system of coordinates K is chosen such that, in relation to it, physical laws hold good in their simplest form, then the same laws must hold good in relation to any other system of coordinates K' moving in uniform translation relatively to K." Hence, Einstein did not change the principle of relativity – quite on the contrary: he re-confirmed it – but he did change Newton's Laws, as well as the Galilean transformation equations that came with them. He also introduced a new 'law', which is stated in the second 'principle', and that is the more revolutionary one, really:
2. The Principle of Invariant Light Speed: "Light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body."

As mentioned above, the most notable change in Newton's Laws – the only change, in fact – is Einstein's relativistic formula for mass:

mv = γm0

This formula implies that the inertia of an object, i.e. its mass, also depends on the reference frame of the observer. If the object moves (but velocity is relative as we know: an object will not be moving if we move with it), then its mass increases. This affects its momentum. As you may or may not remember, the momentum of an object is the product of its mass and its velocity. It’s a vector quantity and, hence, momentum has not only a magnitude but also a direction:

pv = mv·v = γm0·v

As evidenced from the formula above, the momentum formula is a relativistic formula as well, as it’s dependent on the Lorentz factor too. So where do I want to go from here? Well… In this section (relativistic mass and momentum), I just want to show that Einstein’s mass formula is not some separate law or postulate: it just comes with the Lorentz transformation equations (and the above-mentioned consequences in terms of measuring horizontal and vertical velocities).

Indeed, Einstein's relativistic mass formula can be derived from the momentum conservation principle, which is one of the 'physical laws' that Einstein refers to. Look at the elastic collision between two billiard balls below. These balls are equal – same mass and same speed from the point of view of an inertial observer – but not identical: one is red and one is blue. The two diagrams show the collision from two different points of view: on the left, a reference frame moving with the horizontal component of the velocity of the blue ball and, on the right, a reference frame moving with the horizontal component of the velocity of the red ball.

The points to note are the following:

1. The total momentum of such an elastic collision before and after the collision must be the same.
2. Because the two balls have equal mass (in the inertial reference frame at least), the collision will be perfectly symmetrical. Indeed, we may just turn the diagram 'upside down' and change the colors of the balls, as we do below, and the values w, u and v (as well as the angle α) are the same.

As mentioned above, the velocity of the blue and red ball and, hence, their momentum, will depend on the frame of reference. In the diagram on the left, we're moving with a velocity equal to the horizontal component of the velocity of the blue ball and, therefore, in this particular frame of reference, the velocity (and the momentum) of the blue ball consists of a vertical component only, which we refer to as w.

From this point of view (i.e. the reference frame moving with the horizontal component of the blue ball's velocity), the velocity (and, hence, the momentum) of the red ball will have both a horizontal as well as a vertical component. If we denote the horizontal component by u, then it's easy to show that the vertical velocity of the red ball must be equal to sin(α)v. Now, because u = cos(α)v, this vertical component will be equal to tan(α)u. But so what is tan(α)u? Now, you'll say, that is quite evident: tan(α)u must be equal to w, right?

No. That's Newtonian physics. The red ball is moving horizontally with speed u with respect to the blue ball and, hence, its vertical velocity will not be quite equal to w. Its vertical velocity will be given by the formula which we derived above: vy = (1/γ)vy', so it will be a little bit slower than the w we see in the diagram on the right which is, of course, the same w as in the diagram on the left. [If you look carefully at my drawing above, then you'll notice that the w vector is a bit longer indeed.]

Huh? Yes. Just think about it: tan(α)u = (1/γ)w. But then… How can momentum be conserved if these speeds are not the same? Isn't the momentum conservation principle supposed to conserve both horizontal as well as vertical momentum? It is, and momentum is being conserved. Why? Because of the relativistic mass factor.

Indeed, the change in vertical momentum (Δp) of the blue ball in the diagram on the left or – which amounts to the same – the red ball in the diagram on the right (i.e. the vertically moving ball) is equal to Δpblue = 2mw·w. [The factor 2 is there because the ball goes down and then up (or vice versa) and, hence, the total change in momentum must be twice the mw·w amount.] Now, that amount must be equal to the change in vertical momentum of the other ball, Δpred = 2mv·(1/γ)w. Equating both yields the following grand result:

mv/mw = γ ⇔ mv = γmw

What does this mean? It means that the mass of the red ball in the diagram on the left is larger than the mass of the blue ball. So here we have actually derived Einstein's relativistic mass formula from the momentum conservation principle!
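If you want a numeric check of this argument, note that, in units where c = 1, a ball moving with horizontal component u and vertical component w/γ(u) has a Lorentz factor equal to γ(u)·γ(w), so its vertical momentum m0·γ(u)·γ(w)·(w/γ(u)) is exactly m0·γ(w)·w, i.e. the vertical momentum of the vertically moving ball. The sketch below (with hypothetical speeds u = 0.6 and w = 0.3) confirms it:

```python
import math

def gamma(speed: float) -> float:
    # Lorentz factor, with speeds expressed as fractions of c
    return 1.0 / math.sqrt(1.0 - speed * speed)

m0 = 1.0         # rest mass (arbitrary unit)
u, w = 0.6, 0.3  # hypothetical horizontal and vertical speeds

# Vertical relativistic momentum of the vertically moving ball:
p_vertical_ball = m0 * gamma(w) * w

# The other ball moves with components (u, w/gamma(u)); total speed v:
vy = w / gamma(u)
v = math.sqrt(u * u + vy * vy)
p_diagonal_ball = m0 * gamma(v) * vy   # equals p_vertical_ball exactly

print(p_vertical_ball, p_diagonal_ball)
```

The two printed momenta agree to within floating-point error, which is just the mv = γmw result in disguise.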

Of course you'll say: not quite. This formula is not the mu = γm0 formula that we're used to! Indeed, it's not. The blue ball has some velocity w itself, and so the formula links two velocities v and w. However, we can derive the mu = γm0 formula as a limit of mv = γmw for w going to zero. How can w become infinitesimally small? If the angle α becomes infinitesimally small. It's obvious, then, that v and u will be practically equal. In fact, if w goes to zero, then mw will be equal to m0 in the limiting case, and mv will be equal to mu. So, then, indeed, we get the familiar formula as a limiting case:

mu = γm0

Hmm… You'll probably find all of this quite fishy. I'd suggest you just think about it. What I presented above is actually Feynman's presentation of the subject, but with a bit more verbosity. Let's move on to the final topic.

Relativistic energy

From what I wrote above (and from what I wrote in my two previous posts on this topic), it should be obvious, by now, that energy also depends on the reference frame. Indeed, mass and velocity depend on the reference frame (moving or not), and both appear in the formula for kinetic energy which, as you’ll remember, is

K.E. = mc² − m0c² = (m − m0)c² = γm0c² − m0c² = m0c²(γ − 1).
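As a quick sanity check (a Python sketch in SI units, using the electron's rest mass and a hypothetical but non-relativistic speed), the formula above reduces to the familiar Newtonian (1/2)m0v² for speeds that are small compared to c:

```python
import math

c = 2.998e8    # speed of light in m/s
m0 = 9.1e-31   # electron rest mass in kg

def kinetic_energy(v: float) -> float:
    # Relativistic kinetic energy K.E. = m0·c²·(gamma - 1)
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return m0 * c**2 * (gamma - 1.0)

v = 2.2e6                     # a modest 2,200 km/s
K_rel = kinetic_energy(v)
K_newton = 0.5 * m0 * v**2    # the Newtonian formula
ratio = K_rel / K_newton      # very close to 1 at this speed
print(K_rel, K_newton, ratio)
```

At speeds approaching c, the ratio blows up: the relativistic kinetic energy grows without bound while the Newtonian formula does not.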

Now, if you go back to the post where I presented that formula, you'll see that we're actually talking the change in kinetic energy here: if the mass is at rest, its kinetic energy is zero (because m = m0), and it's only when the mass is moving that we can observe the increase in mass. [If you wonder how, think about the example of the fast-moving electrons in an electron beam: we see it as an increase in the inertia: applying the same force no longer yields the same acceleration.]

Now, in that same post, I also noted that Einstein added an equivalent rest mass energy (E0 = m0c²) to the kinetic energy above, to arrive at the total energy of an object:

E = E0 + K.E. = mc²

Now, what does this equivalence actually mean? Is mass energy? Can we equate them really? The short answer to that is: yes.

Indeed, in one of my older posts (Loose Ends), I explained that protons and neutrons are made of quarks and, hence, that quarks are the actual matter particles, not protons and neutrons. However, the mass of a proton – which consists of two up quarks and one down quark – is 938 MeV/c² (don't worry about the units I am using here: because protons are so tiny, we don't measure their mass in grams), but the mass figure you get when you add the rest mass of two u's and one d is 9.6 MeV/c² only: about one percent of 938! So where's the difference?

The difference is the equivalent mass (or inertia) of the binding energy between the quarks. Indeed, the so-called 'mass' that gets converted into energy when a nuclear bomb explodes is not the mass of quarks. Quarks survive: nuclear power is binding energy between quarks that gets converted into heat and radiation and kinetic energy and whatever else a nuclear explosion unleashes.

In short, 99% of the 'mass' of a proton or a neutron is due to the strong force. So that's 'potential' energy that gets unleashed in a nuclear chain reaction. In other words, the rest mass of the proton is actually the inertia of the system of moving quarks and gluons that make up the particle. In such an atomic system, even the energy of massless particles (e.g. the virtual photons that are being exchanged between the nucleus and its electron shells) is measured as part of the rest mass of the system. So, yes, mass is energy. As Feynman put it, long before the quark model was confirmed and generally accepted:

“We do not have to know what things are made of inside; we cannot and need not justify, inside a particle, which of the energy is rest energy of the parts into which it is going to disintegrate. It is not convenient and often not possible to separate the total mc² energy of an object into (1) rest energy of the inside pieces, (2) kinetic energy of the pieces, and (3) potential energy of the pieces; instead we simply speak of the total energy of the particle. We ‘shift the origin’ of energy by adding a constant m0c² to everything, and say that the total energy of a particle is the mass in motion times c², and when the object is standing still, the energy is the mass at rest times c².” (Richard Feynman’s Lectures on Physics, Vol. I, p. 16-9)

So that says it all, I guess, and, hence, that concludes my little 'series' on (special) relativity. I hope you enjoyed it.

Post scriptum:

Feynman describes the concept of space-time with a nice analogy: “When we move to a new position, our brain immediately recalculates the true width and depth of an object from the ‘apparent’ width and depth. But our brain does not immediately recalculate coordinates and time when we move at high speed, because we have had no effective experience of going nearly as fast as light to appreciate the fact that time and space are also of the same nature. It is as though we were always stuck in the position of having to look at just the width of something, not being able to move our heads appreciably one way or the other; if we could, we understand now, we would see some of the other man’s time – we would see ‘behind’, so to speak, a little bit. Thus, we shall try to think of objects in a new kind of world, of space and time mixed together, in the same sense that the objects in our ordinary space-world are real, and can be looked at from different directions. We shall then consider that objects occupying space and lasting for a certain length of time occupy a kind of a ‘blob’ in a new kind of world, and that we look at this ‘blob’ from different points of view when we are moving at different velocities. This new world, this geometrical entity in which the ‘blobs’ exist by occupying position and taking up a certain amount of time, is called space-time.”

If none of what I wrote could convey the general idea, then I hope the above quote will. :-) Apart from that, I should also note that physicists will prefer to re-write the Lorentz transformation equations by measuring time and distance in so-called equivalent units: velocities will be expressed not in km/h but as a fraction of c and, hence, c = 1 (a pure number) and so u will also be a pure number between 0 and 1. That can be done by expressing distance in light-seconds (a light-second is the distance traveled by light in one second) or, alternatively, by expressing time in 'meter'. Both are equivalent but, in most textbooks, it will be time that will be measured in the 'new' units. So how do we express time in meter?

It's quite simple: we multiply the old seconds with c and then we get: time expressed in meters = time expressed in seconds multiplied by 3×10⁸ meters per second. Hence, as the 'second' in the first factor and the 'per second' in the second factor cancel out, the dimension of the new time unit will effectively be the meter. Now, if both time and distance are expressed in meter, then velocity becomes a pure number without any dimension, because we are dividing distance expressed in meter by time expressed in meter, and it should be noted that it will be a pure number between 0 and 1 (0 ≤ u ≤ 1), because 1 'time second' equals 3×10⁸ 'time meters', and nothing covers more than one meter of distance per meter of time. Also, c itself becomes the pure number 1. The Lorentz transformation equations then become:

x' = (x − ut)/√(1 − u²) and t' = (t − ux)/√(1 − u²)

They are easy to remember in this form (cf. the symmetry between x − ut and t − ux) and, if needed, we can always convert back to the old units to recover the original formulas.
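In these equivalent units, the transformation is a one-liner, and we can also check numerically that the quantity t² − x² (the so-called spacetime interval) is the same in both frames (a sketch; the values of x, t and u below are arbitrary):

```python
import math

# Lorentz transformation with time measured in meters, so c = 1 and
# u is a pure number between 0 and 1.
def lorentz(x: float, t: float, u: float):
    g = 1.0 / math.sqrt(1.0 - u * u)
    return g * (x - u * t), g * (t - u * x)  # note the x − ut / t − ux symmetry

x, t, u = 2.0, 5.0, 0.6
x_p, t_p = lorentz(x, t, u)

# The interval t² − x² is invariant under the transformation:
interval = t * t - x * x
interval_p = t_p * t_p - x_p * x_p
print(x_p, t_p, interval, interval_p)
```

The invariance of t² − x² is, in a sense, the precise meaning of Minkowski's 'union' of space and time: the mixture changes, the interval does not.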

I personally think there is no better way to illustrate how space and time are 'mere shadows' of the same thing indeed: if we express both time and space in the same dimension (meter), we can see how, as a result of that, velocity becomes a dimensionless number between zero and one and, more importantly, how the equations for x' and t' then mirror each other nicely. I am not sure what 'kind of union' between space and time Minkowski had in mind, but this must come pretty close, no?

Final note: I noted the equivalence of mass and energy above. In fact, mass and energy can also be expressed in the same units, and we actually did that above already. If we say that an electron has a rest mass of 0.511 MeV/c² (a bit less than a quarter of the mass of the u quark), then we express the mass in terms of energy. Indeed, the eV is an energy unit and so we're actually using the m = E/c² formula when we express mass in such units. Expressing mass and energy in equivalent units allows us to derive similar 'Lorentz transformation equations' for the energy and the momentum of an object as measured under an inertial versus a moving reference frame. Hence, energy and momentum also transform like our space-time four-vectors and – likewise – the energy and the momentum itself, i.e. the components of the (four-)vector, are less 'real' than the vector itself. However, I think this post has become way too long and, hence, I'll just jot these four equations down – please note, once again, the nice symmetry between (1) and (2) – but then leave it at that and finish this post. :-)

(1) E' = γ(E − u·px), (2) px' = γ(px − u·E), (3) py' = py, and (4) pz' = pz (all in equivalent units, so c = 1).

# The Uncertainty Principle re-visited: Fourier transforms and conjugate variables

Pre-scriptum (dated 26 June 2020): This post did not suffer from the DMCA take-down of some material. It is, therefore, still quite readable – even if my views on the nature of the Uncertainty Principle have evolved quite a bit as part of my realist interpretation of QM.

Original post:

In previous posts, I presented a time-independent wave function for a particle (or wavicle as we should call it – but so that’s not the convention in physics) – let’s say an electron – traveling through space without any external forces (or force fields) acting upon it. So it’s just going in some random direction with some random velocity v and, hence, its momentum is p = mv. Let me be specific – so I’ll work with some numbers here – because I want to introduce some issues related to units for measurement.

So the momentum of this electron is the product of its mass m (about 9.1×10⁻²⁸ grams) with its velocity v (typically something in the range around 2,200 km/s, which is fast but not even close to the speed of light – and, hence, we don't need to worry about relativistic effects on its mass here). Hence, the momentum p of this electron would be some 20×10⁻²⁵ kg·m/s. Huh? Kg·m/s? Well… Yes, kg·m/s or N·s are the usual measures of momentum in classical mechanics: its dimension is [mass][length]/[time] indeed. However, you know that, in atomic physics, we don't want to work with these enormous units (because we then always have to add these ×10⁻²⁸ and ×10⁻²⁵ factors and so that's a bit of a nuisance indeed). So the momentum p will usually be measured in eV/c, with c representing what it usually represents, i.e. the speed of light. Huh? What's this strange unit? Electronvolts divided by c? Well… We know that eV is an appropriate unit for measuring energy in atomic physics: we can express eV in Joule and vice versa: 1 eV = 1.6×10⁻¹⁹ Joule, so that's OK – except for the fact that this Joule is a monstrously large unit at the atomic scale indeed, and so that's why we prefer electronvolt. But the Joule is a shorthand unit for kg·m²/s², which is the measure for energy expressed in SI units, so there we are: while the SI dimension for energy is actually [mass][length]²/[time]², using electronvolts (eV) is fine. Now, just divide the SI dimension for energy, i.e. [mass][length]²/[time]², by the SI dimension for velocity, i.e. [length]/[time]: we get something expressed in [mass][length]/[time]. So that's the SI dimension for momentum indeed! In other words, dividing some quantity expressed in some measure for energy (be it Joules or electronvolts or erg or calories or coulomb-volts or BTUs or whatever – there's quite a lot of ways to measure energy indeed!) by the speed of light (c) will result in some quantity with the right dimensions indeed.

So don't worry about it. Now, 1 eV/c is equivalent to 5.344×10⁻²⁸ kg·m/s, so the momentum of this electron will be some 3,750 eV/c, i.e. about 3.75 keV/c.
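If you want to verify that conversion (a Python sketch using the figures above):

```python
# Converting the electron's momentum from SI units to eV/c.
m = 9.1e-31  # electron mass in kg (i.e. 9.1×10⁻²⁸ grams)
v = 2.2e6    # 2,200 km/s expressed in m/s
p_si = m * v # momentum in kg·m/s: about 20×10⁻²⁵

EV_PER_C_IN_SI = 5.344e-28  # 1 eV/c expressed in kg·m/s
p_ev_c = p_si / EV_PER_C_IN_SI
print(p_si, p_ev_c)         # about 2.0e-24 kg·m/s, i.e. roughly 3.75 keV/c
```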

Let’s go back to the main story now. Just note that the momentum of this electron that we are looking at is a very tiny amount – as we would expect of course.

Time-independent means that we keep the time variable (t) in the wave function Ψ(x, t) fixed and so we only look at how Ψ(x, t) varies in space, with x as the (real) space variable representing position. So we have a simplified wave function Ψ(x) here: we can always put the time variable back in when we're finished with the analysis. By now, it should also be clear that we should distinguish between real-valued wave functions and complex-valued wave functions. Real-valued wave functions represent what Feynman calls "real waves", like a sound wave, or an oscillating electromagnetic field. Complex-valued wave functions describe probability amplitudes. They are… Well… Feynman actually stops short of saying that they are not real. So what are they?

They are, first and foremost, complex numbers, so they have a real and a so-called imaginary part (z = a + ib or, if we use polar coordinates, z = r·e^(iθ) = r(cosθ + i·sinθ)). Now, you may think – and you're probably right to some extent – that the distinction between 'real' waves and 'complex' waves is, perhaps, less of a dichotomy than popular writers – like me :-) – suggest. When describing electromagnetic waves, for example, we need to keep track of both the electric field vector E as well as the magnetic field vector B (both are obviously related through Maxwell's equations). So we have two components as well, so to say, and each of these components has three dimensions in space, and we'll use the same mathematical tools to describe them (so we will also represent them using complex numbers). That being said, these probability amplitudes, usually denoted by Ψ(x), describe something very different. What exactly? Well… By now, it should be clear that that is actually hard to explain: the best thing we can do is to work with them, so they start feeling familiar. The main thing to remember is that we need to square their modulus (or magnitude or absolute value if you find these terms more comprehensible) to get a probability (P). For example, the expression below gives the probability of finding a particle – our electron, for example – in the (space) interval [a, b]:

P = ∫[a, b] |Ψ(x)|² dx

Of course, we should not be talking intervals but three-dimensional regions in space. However, we’ll keep it simple: just remember that the analysis should be extended to three (space) dimensions (and, of course, include the time dimension as well) when we’re finished (to do that, we’d use so-called four-vectors – another wonderful mathematical invention).

Now, we also used a simple functional form for this wave function, as an example: Ψ(x) could be proportional, we said, to some idealized function e^(ikx). So we can write: Ψ(x) ∝ e^(ikx) (∝ is the standard symbol expressing proportionality). In this function, we have a wave number k, which is like the frequency in space of the wave (but then measured in radians because the phase of the wave function has to be expressed in radians). In fact, we actually wrote Ψ(x, t) = (1/x)·e^(i(kx − ωt)) (so the magnitude of this amplitude decreases with distance) but, again, let's keep it simple for the moment: even with this very simple function e^(ikx), things will become complex enough.

We also introduced the de Broglie relation, which gives this wave number k as a function of the momentum p of the particle: k = p/ħ, with ħ the (reduced) Planck constant, i.e. a very tiny number in the neighborhood of 6.582×10⁻¹⁶ eV·s. So, using the numbers above, we'd have a value for k equal to 3.75 keV/c divided by 6.582×10⁻¹⁶ eV·s. So that's 0.57×10¹⁹ (radians) per… Hey, how do we do it with the units here? We get an incredibly huge number here (57 with 17 zeroes after it) per second? We should get some number per meter because k is expressed in radians per unit distance, right? Right. We forgot c. We are actually measuring distance here, but in light-seconds instead of meter: k is 0.57×10¹⁹/c·s. Indeed, a light-second is the distance traveled by light in one second, so that's c·s, and if we want k expressed in radians per meter, then we need to divide this huge number 0.57×10¹⁹ (in rad) by 2.998×10⁸ (in (m/s)·s) and so then we get a much more reasonable value for k, and with the right dimension too: to be precise, k is about 19×10⁹ rad/m in this case. That's still huge: it corresponds with a wavelength of 0.33 nanometer (1 nm = 10⁻⁹ m) but that's the correct order of magnitude indeed.
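The unit gymnastics is easier to follow in a few lines of code (a Python sketch using an electron momentum of about 3.75 keV/c, as above):

```python
import math

HBAR = 6.582e-16  # reduced Planck constant in eV·s
C = 2.998e8       # speed of light in m/s

p = 3.75e3                     # electron momentum in eV/c
k_per_light_second = p / HBAR  # k = p/ħ, in radians per light-second
k = k_per_light_second / C     # converted to radians per meter
wavelength = 2 * math.pi / k   # λ = 2π/k, in meters
print(k, wavelength)           # about 1.9e10 rad/m and 3.3e-10 m, i.e. 0.33 nm
```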

[In case you wonder what formula I am using to calculate the wavelength: it's λ = 2π/k. Note that our electron's wavelength is more than a thousand times shorter than the wavelength of (visible) light (we humans can see light with wavelengths ranging from 380 to 750 nm) but so that's what gives the electron its particle-like character! If we would increase their velocity (e.g. by accelerating them in an accelerator, using electromagnetic fields to propel them to speeds closer to c and also to contain them in a beam), then we get hard beta rays. Hard beta rays are surely not as harmful as high-energy electromagnetic rays. X-rays and gamma rays consist of photons with wavelengths ranging from 1 to 100 picometer (1 pm = 10⁻¹² m) – so that's another factor of a thousand down – and thick lead shields are needed to stop them: they are the cause of cancer (Marie Curie's cause of death), and the hard radiation of a nuclear blast will always end up killing more people than the immediate blast effect. In contrast, hard beta rays will cause skin damage (radiation burns) but they won't go deeper than that.]

Let's get back to our wave function Ψ(x) ∝ e^(ikx). When we introduced it in our previous posts, we said it could not accurately describe a particle because this wave function (Ψ(x) = A·e^(ikx)) is associated with probabilities |Ψ(x)|² that are the same everywhere. Indeed, |Ψ(x)|² = |A·e^(ikx)|² = A². Apart from the fact that these probabilities would add up to infinity (so this mathematical shape is unacceptable anyway), it also implies that we cannot locate our electron somewhere in space. It's everywhere and that's the same as saying it's actually nowhere. So, while we can use this wave function to explain and illustrate a lot of stuff (first and foremost the de Broglie relations), we actually need something different if we would want to describe anything real (which, in the end, is what physicists want to do, right?). We already said in our previous posts: real particles will actually be represented by a wave packet, or a wave train. A wave train can be analyzed as a composite wave consisting of a (potentially infinite) number of component waves. So we write:

Ψ(x) = a1·e^(ik1x) + a2·e^(ik2x) + … + an·e^(iknx)

Note that we do not have one unique wave number k or – what amounts to saying the same – one unique value p for the momentum: we have n values. So we're introducing a spread in the wavelength here, as illustrated below:

In fact, the illustration above talks of a continuous distribution of wavelengths and so let's take the continuum limit of the function above indeed and write what we should be writing:

Ψ(x) = ∫ Φ(p)·e^(ipx/ħ) dp

Now that is an interesting formula. [Note that I didn't care about normalization issues here, so it's not quite what you'd see in a more rigorous treatment of the matter. I'll correct that in the Post Scriptum.] Indeed, it shows how we can get the wave function Ψ(x) from some other function Φ(p). We actually encountered that function already, and we referred to it as the wave function in the momentum space. Indeed, Nature does not care much what we measure: whether it's position (x) or momentum (p), Nature will not share her secrets with us and, hence, the best we can do – according to quantum mechanics – is to find some wave function associating some (complex) probability amplitude with each and every possible (real) value of x or p. What the equation above shows, then, is that these wave functions come as a pair: if we have Φ(p), then we can calculate Ψ(x) – and vice versa. Indeed, the particular relation between Ψ(x) and Φ(p) as established above makes Ψ(x) and Φ(p) a so-called Fourier transform pair, as we can transform Φ(p) into Ψ(x) using the above Fourier transform (that's how that integral is called), and vice versa. More in general, a Fourier transform pair can be written as:

f(x) = ∫ g(y)·e^(ixy) dy and g(y) = (1/2π) ∫ f(x)·e^(−ixy) dx
f(x) = ∫ g(y)·e^(i·x·y) dy and g(y) = (1/2π)·∫ f(x)·e^(−i·x·y) dx
Instead of x and p, and Ψ(x) and Φ(p), we have x and y, and f(x) and g(y), in the formulas above, but that does not make much of a difference when it comes to the interpretation: x and p (or x and y in the formulas above) are said to be conjugate variables. What that really means is that they are not independent. There are quite a few such pairs of conjugate variables in quantum mechanics, such as, for example: (1) time and energy (and time and frequency, of course, in light of the de Broglie relation between both), and (2) angular momentum and angular position (or orientation). There are other pairs too, but these involve quantum-mechanical variables which I do not understand as yet and, hence, I won’t mention them here. [To be complete, I should also say something about that 1/2π factor, but that’s just something that pops up when deriving the Fourier transform from the (discrete) Fourier series on which it is based. We can put it in front of either integral, or split the factor across both. Also note the minus sign in the exponent of the inverse transform.]

When you look at the equations above, you may think that f(x) and g(y) must be real-valued functions. Well… No. The Fourier transform can be used for real-valued as well as complex-valued functions. However, at this point I’ll have to refer those who want to know each and every detail about these Fourier transforms to a course in complex analysis (such as Brown and Churchill’s Complex Variables and Applications (2004), for instance) or, else, to a proper course on real and complex Fourier transforms (they are used in signal processing – a very popular topic in engineering – and so there are quite a few of those courses around).

The point to note in this post is that we can derive the Uncertainty Principle from the equations above. Indeed, the (complex-valued) functions Ψ(x) and Φ(p) describe (probability) amplitudes, but the (real-valued) functions |Ψ(x)|² and |Φ(p)|² describe probabilities or – to be fully correct – they are probability (density) functions. So it is pretty obvious that, if the functions Ψ(x) and Φ(p) are a Fourier transform pair, then |Ψ(x)|² and |Φ(p)|² must be related too. They are. The derivation is a bit lengthy (and, hence, I will not copy it from the Wikipedia article on the Uncertainty Principle), but one can indeed derive the so-called Kennard formulation of the Uncertainty Principle from the above Fourier transforms. This Kennard formulation does not use those rather vague Δx and Δp symbols but clearly states that the product of the standard deviations of these two probability density functions can never be smaller than ħ/2:

ĎxĎpÂ âĽ Ä§/2

To be sure: ħ/2 is a rather tiny value – about 5.3×10⁻³⁵ J·s – as you should know by now 🙂 but, well… There it is.

As said, it’s a bit lengthy but not that difficult to do that derivation. However, just for once, I think I should try to keep my post somewhat shorter than usual. So, to conclude, I’ll just insert one more illustration here (yes, you’ve seen it before), which should now be very easy to understand: if the wave function Ψ(x) is such that there’s relatively little uncertainty about the position x of our electron, then the uncertainty about its momentum will be huge (see the top graphs). Vice versa (see the bottom graphs): precise information (or a narrow range) on its momentum implies that its position cannot be known.

Does all this math make it any easier to understand what’s going on? Well… Yes and no, I guess. But then, if even Feynman admits that he himself “does not understand it the way he would like to” (Feynman Lectures, Vol. III, 1-1), who am I? In fact, I should probably not even try to explain it, should I? 🙂

So the best we can do is try to familiarize ourselves with the language used, and that’s math, for all practical purposes. And then, when everything is said and done, we should probably just contemplate Mario Livio’s question: Is God a mathematician? 🙂

Post scriptum:

I obviously cut corners above, and so you may wonder how that ħ factor can be related to σx and σp if it doesn’t appear in the wave functions. Truth be told, it does. Because of (i) the presence of ħ in the exponent of our e^(i·(p/ħ)·x) function, (ii) normalization issues (remember that probabilities (i.e. |Ψ(x)|² and |Φ(p)|²) have to add up to 1) and, last but not least, (iii) the 1/2π factor involved in Fourier transforms, Ψ(x) and Φ(p) have to be written as follows:
Ψ(x, t) = [1/√(2πħ)]·∫ Φ(p)·e^(i·(p·x − E·t)/ħ) dp and Φ(p) = [1/√(2πħ)]·∫ Ψ(x, 0)·e^(−i·(p/ħ)·x) dx
Note that we’ve also re-inserted the time variable here, so it’s pretty complete now. One more thing we could do is replace x with a proper three-dimensional space vector x or, better still, introduce four-vectors, which would allow us to integrate relativistic effects as well (most notably the slowing of time with motion, as observed from the stationary reference frame) – effects which become important when, for instance, we’re looking at electrons being accelerated, which is the rule, rather than the exception, in experiments.

Remember (from a previous post) that we calculated that an electron traveling at its usual speed in orbit (2200 km/s, i.e. less than 1% of the speed of light) had an energy of about 70 eV? Well, the Large Electron-Positron Collider (LEP) did accelerate electrons to speeds close to light, thereby giving them energy levels topping 104.5 billion eV (or 104.5 GeV, as it’s written) so they could hit each other with collision energies topping 209 GeV (they come from opposite directions, so it’s two times 104.5 GeV). Now, 209 GeV is tiny when converted to everyday energy units: 209 GeV is only 33×10⁻⁹ Joule indeed – and so note the minus sign in the exponent here: we’re talking billionths of a Joule. Just to put things into perspective: 1 Watt is, roughly, the power consumption of a small LED (and 1 Watt is 1 Joule per second), so you’d need to dump the energy of some sixty million of these fast-traveling electrons into the lamp every second to power just one little LED. But, of course, that’s not the right comparison: 104.5 GeV is more than 200,000 times the electron’s rest energy (0.511 MeV), so that means that – in practical terms – their mass (remember that mass is a measure of inertia) increased by the same factor (204,500 times, to be precise). Just to give an idea of the effort that was needed to do this: CERN’s LEP collider was housed in a tunnel with a circumference of 27 km. Was? Yes. The tunnel is still there, but it now houses the Large Hadron Collider (LHC) which, as you surely know, is the world’s largest and most powerful particle accelerator: its experiments confirmed the existence of the Higgs particle in 2013, thereby confirming the so-called Standard Model of particle physics. [But I’ll say a few things about that in my next post.]

Oh… And, finally, in case you’d wonder where we get the inequality sign in σxσp ≥ ħ/2: that’s because – at some point in the derivation – one has to use the Cauchy-Schwarz inequality, from which the familiar triangle inequality for complex numbers, |z1 + z2| ≤ |z1| + |z2|, follows. In fact, to be fully complete, the derivation uses the more general formulation of the Cauchy-Schwarz inequality, which also applies to functions when we interpret them as vectors in a function space. But I would end up copying the whole derivation here if I added any more to this – and I said I wouldn’t do that. 🙂 […]