I re-visited the Uncertainty Principle a couple of times already, but here I really want to get to the bottom of the thing. What’s uncertain? The energy? The time? The wavefunction itself? These questions are not easily answered, and I need to warn you: you won’t get much wiser when you’re finished reading this. I just felt like freewheeling a bit. [Note that the first part of this post repeats what you’ll find on the Occam page, or my post on Occam’s Razor. But those posts do *not* analyze uncertainty, which is what I will be *trying* to do here.]

Let’s first think about the wavefunction itself. It’s tempting to think it actually *is* the particle, somehow. But it isn’t. So what is it then? Well… Nobody knows. In my previous post, I said I like to think it *travels* with the particle, but that doesn’t make much sense either. It’s like a fundamental *property* of the particle. Like the color of an apple. But where *is* that color? In the apple, in the light it reflects, in the retina of our eye, or is it in our brain? If you know a thing or two about how perception actually works, you’ll tend to agree that the quality of *color* is *not* in the apple. When everything is said and done, the wavefunction is a *mental construct*: when learning physics, we start to think of a particle as a wavefunction, but they are two separate things: the particle is reality, the wavefunction is imaginary.

But that’s not what I want to talk about here. It’s about that uncertainty. *Where* is the uncertainty? You’ll say: you just said it was in our brain. No. I didn’t say that. It’s not that simple. Let’s look at the basic assumptions of quantum physics:

- Quantum physics assumes there’s always some *randomness* in Nature and, hence, that we can measure *probabilities* only. We’ve got randomness in classical mechanics too, but this is different. This is an assumption about how Nature *works*: we don’t really know what’s happening. We don’t know the internal wheels and gears, so to speak, or the ‘hidden variables’, as one interpretation of quantum mechanics would say. In fact, the most commonly accepted interpretation of quantum mechanics says *there are no ‘hidden variables’*.
- However, as Shakespeare has one of his characters say: there is method in the madness, and the *pioneers* (I mean Werner Heisenberg, Louis de Broglie, Niels Bohr, Paul Dirac, etcetera) discovered that method: all probabilities can be found by taking the square of the absolute value of a complex-valued wavefunction (often denoted by Ψ), whose argument, or *phase* (θ), is given by the *de Broglie* relations ω = E/ħ and **k** = **p**/ħ. The generic functional form of that wavefunction is:

Ψ = Ψ(**x**, t) = *a*·*e*^{−iθ} = *a*·*e*^{−i(ω·t − **k**·**x**)} = *a*·*e*^{−i·[(E/ħ)·t − (**p**/ħ)·**x**]}

That should be obvious by now, as I’ve written more than a dozen posts on this. 🙂 I still have trouble interpreting this, however, and I am not ashamed, because the Great Ones I just mentioned had trouble with that too. It’s not that complex exponential that’s the problem. That *e*^{−iφ} is a very simple periodic function, consisting of two sine waves rather than just one, as illustrated below. [It’s a sine and a cosine, really, but they’re the same function: there’s just a phase difference of 90 degrees.]

No. To understand the wavefunction, we need to understand those *de Broglie* relations, ω = E/ħ and **k** = **p**/ħ, and then, as mentioned, we need to understand the Uncertainty Principle. We need to understand where it comes from. Let’s try to go as far as we can by making a few remarks:

- Adding or subtracting two terms in math, as in (E/ħ)·t − (**p**/ħ)·**x**, implies the two terms should have the same *dimension*: we can only add apples to apples, and oranges to oranges. We shouldn’t mix them. Now, the (E/ħ)·t and (**p**/ħ)·**x** terms are actually dimensionless: they are pure numbers. So that’s even better. Just check it: energy is expressed in *newton·meter* (energy, or *work*, is force times distance, remember?) or *electronvolts* (1 eV = 1.6×10^{−19} J = 1.6×10^{−19} N·m); Planck’s constant, as the quantum of *action*, is expressed in J·s or eV·s; and the unit of (linear) momentum is 1 N·s = 1 kg·m/s. So E/ħ gives a number expressed *per second*, and p/ħ a number expressed *per meter*. Therefore, multiplying E/ħ and **p**/ħ by t and **x** respectively gives us a dimensionless number indeed.
- It’s also an *invariant* number, which means we’ll always get the same value for it, *regardless of our frame of reference*. That’s because the four-vector product p_{μ}x^{μ} = E·t − **p**·**x** is invariant: it doesn’t change when analyzing a phenomenon in one reference frame (e.g. our inertial reference frame) or another (i.e. in a *moving* frame).
- Now, Planck’s quantum of action, h or ħ (h and ħ only differ in their angular unit: h is the amount of action per *cycle*, while ħ is the amount of action per *radian*; both assume we can at least measure *one* cycle), is the quantum of energy, really. Indeed, if ‘energy is the currency of the Universe’, and it’s real and/or virtual photons that are exchanging it, then it’s good to know the currency unit is h, i.e. the energy that’s associated with *one cycle* of a photon. [In case you want to see the logic of this, see my post on the physical constants *c*, h and α.]
- It’s not only time and space that are related, as evidenced by the fact that (*c*·t, **x**) transforms as a four-vector: E and **p** are related too, of course! They are related through the classical velocity of the particle that we’re looking at: E/p = *c*^{2}/v and, therefore, we can write: E·β = p·*c*, with β = v/*c*, i.e. the *relative* velocity of our particle, measured as a *ratio* of the speed of light. Now, I should add that we can write things this simply only if we measure time and space in equivalent units. Otherwise, we have to write *c*·t − **x** rather than t − **x**. If we do that, so our unit of distance becomes *c* meter rather than one meter, or our unit of time becomes the time that is needed for light to travel one meter, then *c* = 1, and the E·β = p·*c* relation becomes E·β = p, which we can also write as β = p/E: the ratio of the *momentum* and the *energy* of our particle is its (relative) velocity.
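The dimensional bookkeeping above is easy to verify numerically. Here is a small check of my own (not from the post): the values of E, p, t and x are purely illustrative, but they show that (E/ħ)·t − (p/ħ)·x comes out as a pure number, and that E·β = p·c holds for a relativistic particle.

```python
import math

# Illustrative numerical check (my own, not from the post): the phase terms
# (E/hbar)·t and (p/hbar)·x are pure numbers, and E·beta = p·c.
hbar = 1.054571817e-34      # reduced Planck constant, in J·s
c = 299792458.0             # speed of light, in m/s

m = 9.1093837015e-31        # electron mass in kg, just as an example
beta = 0.6                  # relative velocity v/c
gamma = 1.0 / math.sqrt(1.0 - beta**2)

E = gamma * m * c**2        # total (relativistic) energy, in J
p = gamma * m * beta * c    # momentum, in kg·m/s = N·s

# E/hbar is 'per second', p/hbar is 'per meter': multiplied by a time and
# a distance, they give dimensionless (pure-number) phase contributions.
t, x = 1e-18, 1e-9
phase = (E / hbar) * t - (p / hbar) * x
print("phase (a pure number):", phase)

# E·beta = p·c, i.e. beta = p/E in natural units, as stated above
assert abs(E * beta - p * c) / (p * c) < 1e-12
```

All the units cancel in `phase`, which is the point of the remark above.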

Combining all of the above, we may want to assume that we are measuring energy *and* momentum in terms of the Planck constant, i.e. the ‘natural’ unit for both. In addition, we may also want to assume that we’re measuring time and distance in equivalent units. Then the equation for the *phase* of our wavefunction reduces to:

θ = (ω·t − k·x) = E·t − p·x

Now, θ is the argument of a wavefunction, and we can always *re-scale* such an argument by multiplying or dividing it by some *constant*. It’s just like writing the argument of a wavefunction as *v*·t − x or (*v*·t − x)/*v* = t − x/*v*, with *v* the velocity of the waveform that we happen to be looking at. [In case you have trouble following this argument, please check the post I did for my kids on waves and wavefunctions.] Now, the energy conservation principle tells us the energy of a free particle won’t change. [Just to remind you, a ‘free particle’ means it’s in a ‘field-free’ space, so our particle is in a region of *uniform* potential.] So we can, in this case, treat E as a constant, and divide E·t − p·x by E, so we get a re-scaled phase for our wavefunction, which I’ll write as:

φ = (E·t − p·x)/E = t − (p/E)·x = t − β·x

Alternatively, we could also look at p as some constant, as there is no variation in potential energy that will cause a change in momentum, and in the related *kinetic* energy. We’d then divide by p and we’d get (E·t − p·x)/p = (E/p)·t − x = t/β − x, which amounts to the same, as we can always re-scale again by multiplying by β, which would yield the same t − β·x argument.
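Both re-scalings can be checked with a one-minute calculation. This is my own sketch, in natural units with c = 1 and with arbitrary illustrative values for β, E, t and x:

```python
# A quick sanity check (my own illustration, natural units with c = 1):
# dividing the phase E·t − p·x by E, or dividing by p and re-scaling by
# beta, yields the same t − beta·x argument.
beta = 0.8          # relative velocity v/c (illustrative value)
E = 2.5             # energy in natural units (illustrative value)
p = beta * E        # momentum, from beta = p/E

t, x = 1.3, 0.7
theta = E * t - p * x

via_E = theta / E              # (E·t − p·x)/E = t − beta·x
via_p = beta * (theta / p)     # beta·(t/beta − x) = t − beta·x

assert abs(via_E - (t - beta * x)) < 1e-12
assert abs(via_p - (t - beta * x)) < 1e-12
print("both re-scalings give t − beta·x =", via_E)
```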

The point is, if we measure energy and momentum in terms of the Planck unit (I mean: in terms of the Planck constant, i.e. the *quantum of energy*), and if we measure time and distance in ‘natural’ units too, i.e. we take the speed of light to be unity, then our Platonic wavefunction becomes as simple as:

Φ(φ) = *a*·*e*^{−iφ} = *a*·*e*^{−i(t − β·x)}

This is a wonderful formula, but let me first answer your most likely question: why would we use a *relative* velocity? Well… Just think of it: when everything is said and done, the whole theory of relativity and, hence, the whole of physics, is based on *one fundamental and experimentally verified fact*: the speed of light is *absolute*. In whatever reference frame, we will *always* measure it as 299,792,458 m/s. That’s obvious, you’ll say, but it’s actually the weirdest thing ever if you start thinking about it, and it explains why those Lorentz transformations look so damn complicated. In any case, this *fact* legitimately establishes *c* as some kind of *absolute* measure against which all speeds can be measured. Therefore, it is only *natural* indeed to express a velocity as some number between 0 and 1. Now that amounts to expressing it as the β = v/*c* ratio.

Let’s now go back to that Φ(φ) = *a*·*e*^{−iφ} = *a*·*e*^{−i(t − β·x)} wavefunction. Its temporal frequency ω is equal to one, and its spatial frequency k is equal to β = v/*c*. It couldn’t be simpler but, of course, we’ve got this remarkably simple result because we re-scaled the argument of our wavefunction using the *energy* and *momentum* itself as the *scale factor*. So, yes, we can re-write the wavefunction of our particle in a particularly elegant and simple form using the only information that we have when looking at quantum-mechanical stuff: energy and momentum, because that’s what everything reduces to at that level.

So… Well… We’ve pretty much explained what quantum physics is all about here. You just need to get used to that complex exponential: *e*^{−iφ} = cos(−φ) + *i*·sin(−φ) = cos(φ) − *i*·sin(φ). It would have been nice if Nature had given us a simple sine or cosine function. [Remember the sine and cosine function are actually the same, except for a phase difference of 90 degrees: sin(φ) = cos(π/2 − φ) = cos(φ − π/2). So we can always go from one to the other by shifting the origin of our axis.] But… Well… As we’ve shown so many times already, a real-valued wavefunction doesn’t explain the interference we observe, be it interference of electrons or whatever other particles or, for that matter, the interference of electromagnetic waves itself, which, as you know, we also need to look at as a stream of *photons*, i.e. light *quanta*, rather than as some kind of infinitely flexible *aether* that’s undulating, like water or air.

However, the analysis above does *not* include uncertainty. That’s as fundamental to quantum physics as *de Broglie*’s equations, so let’s think about that now.

**Introducing uncertainty**

Our information on the energy and the momentum of our particle will be incomplete: we’ll write E = E_{0} ± σ_{E}, and p = p_{0} ± σ_{p}. **Huh?** No ΔE or Δp? Well… It’s the same, really, but I am a bit tired of using the Δ symbol, so I am using the σ symbol here, which denotes a *standard deviation* of some *density* function. It underlines the probabilistic, or statistical, nature of our approach.

The simplest model is that of a two-state system, because it involves two energy levels only: E = E_{0} ± A, with A some constant. Large or small, it doesn’t matter. All is relative anyway. 🙂 We explained the basics of the two-state system using the example of an ammonia molecule, i.e. an NH_{3} molecule, which consists of one nitrogen and three hydrogen atoms. We had two base states in this system: ‘up’ or ‘down’, which we denoted as *base* state | 1 ⟩ and *base* state | 2 ⟩ respectively. This ‘up’ and ‘down’ had nothing to do with the classical or quantum-mechanical notion of spin, which is related to the *magnetic* moment. No. It’s much simpler than that: the nitrogen atom could be either beneath or, else, above the plane of the hydrogens, as shown below, with ‘beneath’ and ‘above’ being defined in regard to the molecule’s direction of rotation around its axis of symmetry.

In any case, for the details, I’ll refer you to the post(s) on it. Here I just want to mention the result. We wrote the *amplitude* to find the molecule in either one of these two states as:

- C_{1} = ⟨ 1 | ψ ⟩ = (1/2)·*e*^{−(i/ħ)·(E0 − A)·t} + (1/2)·*e*^{−(i/ħ)·(E0 + A)·t}
- C_{2} = ⟨ 2 | ψ ⟩ = (1/2)·*e*^{−(i/ħ)·(E0 − A)·t} − (1/2)·*e*^{−(i/ħ)·(E0 + A)·t}

That gave us the following probabilities:

If our molecule can be in two states only, and it starts off in one, then the probability that it will *remain* in that state will gradually decline, while the probability that it flips into the other state will gradually increase.
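Those probabilities can be checked numerically from the C_{1} and C_{2} amplitudes above. Here is a small check of my own, in units where ħ = 1 and with illustrative values for E_{0} and A: the probabilities come out as cos²(A·t) and sin²(A·t), so they flip-flop and always add up to one.

```python
import cmath
import math

# Numerical check of the two-state result (units with hbar = 1; E0 and A
# are illustrative values): the amplitudes give P1 = cos²(A·t) and
# P2 = sin²(A·t), so the probability flip-flops between the two states
# while P1 + P2 = 1 at all times.
E0, A = 10.0, 0.5

def C1(t):
    return 0.5 * cmath.exp(-1j * (E0 - A) * t) + 0.5 * cmath.exp(-1j * (E0 + A) * t)

def C2(t):
    return 0.5 * cmath.exp(-1j * (E0 - A) * t) - 0.5 * cmath.exp(-1j * (E0 + A) * t)

for t in (0.0, 0.7, 1.9, 3.14):
    P1, P2 = abs(C1(t))**2, abs(C2(t))**2
    assert abs(P1 - math.cos(A * t)**2) < 1e-12
    assert abs(P2 - math.sin(A * t)**2) < 1e-12
    assert abs(P1 + P2 - 1.0) < 1e-12
print("P1 = cos²(A·t) and P2 = sin²(A·t): checked")
```

Note that the common e^{−i·E0·t} factor drops out of the probabilities: only the *difference* 2·A between the two levels matters, which is the point made below.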

Now, the point you should note is that we get these *time-dependent* probabilities *only* because we’re introducing *two* different energy levels: E_{0} + A and E_{0} − A. [Note they are separated by an amount equal to 2·A, as I’ll use that information later.] If we’d have *one* energy level only, which amounts to saying that we *know* it, and that it’s something *definite*, then we’d just have *one* wavefunction, which we’d write as:

*a*·*e*^{−iθ} = *a*·*e*^{−(i/ħ)·(E0·t − p·x)} = *a*·*e*^{−(i/ħ)·(E0·t)}·*e*^{(i/ħ)·(p·x)}

Note that we can always split our wavefunction in a ‘time’ and a ‘space’ part, which is quite convenient. In fact, because our ammonia molecule stays where it is, it has no momentum: **p** = **0**. Therefore, its wavefunction reduces to:

*a*·*e*^{−iθ} = *a*·*e*^{−(i/ħ)·(E0·t)}

As simple as it can be. 🙂 The point is that a wavefunction like this, i.e. a wavefunction that’s defined by a *definite* energy, will always yield a constant and equal probability, both in time as well as in space. That’s just the math of it: |*a*·*e*^{−iθ}|^{2} = *a*^{2}. *Always!* If you want to know why, you should think of Euler’s formula and Pythagoras’ Theorem: cos^{2}θ + sin^{2}θ = 1. *Always!* 🙂
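If you don’t believe the math, brute force works too. A tiny check of my own (the value of *a* is arbitrary): the probability |a·e^{−iθ}|² is a² for *any* phase θ.

```python
import cmath
import random

# Check of the claim just made (my own illustration): for a definite-energy
# wavefunction a·e^{−iθ}, the probability |a·e^{−iθ}|² equals a² for every
# value of the phase θ, i.e. it is constant in time and in space.
a = 0.3
random.seed(42)
for _ in range(1000):
    theta = random.uniform(-100.0, 100.0)
    psi = a * cmath.exp(-1j * theta)
    # Euler plus Pythagoras: cos²θ + sin²θ = 1, so |psi|² = a²
    assert abs(abs(psi)**2 - a**2) < 1e-12
print("probability is constant:", a**2)
```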

That constant probability is annoying, because our nitrogen atom never ‘flips’, and we *know* it actually does, thereby overcoming an energy barrier: it’s a phenomenon that’s referred to as ‘tunneling’, and it’s real! The probabilities in that graph above are real! Also, if our wavefunction would represent some moving particle, it would imply that the probability to find it *somewhere* in space is the same *all over space*, which implies our particle is *everywhere* and *nowhere* at the same time, really.

So, in quantum physics, this problem is solved by introducing *uncertainty*. Introducing some uncertainty about the energy, or about the momentum, is mathematically equivalent to saying that we’re actually looking at a *composite* wave, i.e. the *sum* of a finite or potentially infinite set of *component* waves. So we have the same ω = E/ħ and **k** = **p**/ħ relations, but we apply them to *n* energy levels, or to some continuous *range* of energy levels ΔE. It amounts to saying that our wavefunction doesn’t have a specific frequency: it now has *n* frequencies, or a *range* of frequencies Δω = ΔE/ħ. In our two-state system, n = 2, obviously! So we’ve got *two* energy levels only, and our *composite* wave consists of two *component* waves only.

We know what that does: it ensures our wavefunction is being ‘contained’ in some ‘envelope’. It becomes a wavetrain, or a kind of *beat* note, as illustrated below:

[The animation comes from Wikipedia, and shows the difference between the *group* and *phase* velocity: the green dot travels at the group velocity, while the red dot travels at the phase velocity.]
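The beat effect is easy to reproduce numerically. A sketch of my own, with illustrative frequencies: adding two component waves with slightly different ω and k gives a fast carrier inside a slow envelope, and the envelope travels at Δω/Δk (the group velocity) while the carrier travels at roughly ω/k (the phase velocity).

```python
import numpy as np

# A sketch of the 'beat' effect described above (illustrative numbers, not
# from the post): the sum of two component waves with slightly different
# frequencies is a fast carrier inside a slowly varying envelope.
w1, k1 = 10.0, 5.0
w2, k2 = 11.0, 5.2

x = np.linspace(0.0, 50.0, 5000)
t = 0.0
psi = np.exp(-1j * (w1 * t - k1 * x)) + np.exp(-1j * (w2 * t - k2 * x))

# |psi|² = 2 + 2·cos(Δω·t − Δk·x): a pure beat pattern
dw, dk = w2 - w1, k2 - k1
envelope = 2.0 + 2.0 * np.cos(dw * t - dk * x)
assert np.allclose(np.abs(psi)**2, envelope)

print("group velocity Δω/Δk ≈", dw / dk)
print("phase velocity ω/k ≈", (w1 + w2) / (k1 + k2))
```

The two velocities differ here, which is exactly what the green and red dots in the animation illustrate.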

So… OK. That should be clear enough. Let’s now apply these thoughts to our ‘reduced’ wavefunction:

Φ(φ) = *a*·*e*^{−iφ} = *a*·*e*^{−i(t − β·x)}

**Thinking about uncertainty**

Frankly, I tried to fool you above. If the functional form of the wavefunction is *a*·*e*^{−(i/ħ)·(E·t − p·x)}, then we can measure E and **p** in whatever unit we want, including h or ħ, but we can*not* re-scale the argument of the function, i.e. the *phase* θ, without *changing* the functional form itself. I explained that in that post for my kids on wavefunctions, in which I showed that we may represent the same electromagnetic wave by two different functional forms:

F(*c*·t − x) = G(t − x/*c*)

So F and G represent the same wave, but they are different wavefunctions. In this regard, you should note that the argument of F is expressed in *distance units*, as we multiply t with the speed of light (so it’s like our time unit is 299,792,458 m now), while the argument of G is expressed in *time units*, as we divide x by the distance traveled in one second. But F and G are different functional forms. Just do an example and take a simple sine function: you’ll agree that sin(θ) ≠ sin(θ/*c*) for all values of θ, except 0. Re-scaling changes the frequency, or the wavelength, and it does so quite drastically in this case. 🙂 Likewise, you can see that *e*^{−i(φ/E)} = [*e*^{−iφ}]^{1/E}, so that’s a *very* different function. In short, we were a bit too adventurous above. Now, while we *can* drop the 1/ħ in the *a*·*e*^{−(i/ħ)·(E·t − p·x)} function when measuring energy and momentum in units that are *numerically* equal to ħ, we’ll just revert to our original wavefunction for the time being, which equals:

Ψ(θ) = *a*·*e*^{−iθ} = *a*·*e*^{−i·[(E/ħ)·t − (p/ħ)·x]}
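Before moving on, the re-scaling remark above is worth a quick numerical check (my own illustration): sin(θ) and sin(θ/c) really are very different functions of θ, because dividing the argument by c changes the frequency drastically.

```python
import math

# A quick check of the re-scaling remark above (my own illustration):
# sin(θ) and sin(θ/c) are very different functions of θ, because dividing
# the argument by c changes the frequency drastically. They agree at θ = 0.
c = 299792458.0
for theta in (0.5, 1.0, 2.0, 3.0):
    # sin(θ/c) is almost exactly zero here, while sin(θ) is not
    assert abs(math.sin(theta) - math.sin(theta / c)) > 0.1
assert math.sin(0.0) == math.sin(0.0 / c)
print("sin(θ) ≠ sin(θ/c), except at θ = 0")
```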

Let’s now introduce uncertainty once again. The simplest situation is that we have two closely spaced energy levels. In theory, the difference between the two can be as small as ħ, so we’d write: E = E_{0} ± ħ/2. [Remember what I said about the ± A: it means the *difference* is 2A.] However, we can generalize this and write: E = E_{0} ± n·ħ/2, with n = 1, 2, 3,… This does *not* imply any greater uncertainty (we still have two states only), but just a larger *difference* between the two energy levels.

Let’s also simplify by looking at the ‘time part’ of our equation only, i.e. *a*·*e*^{−i·(E/ħ)·t}. It doesn’t mean we don’t care about the ‘space part’: it just means that we’re *only* looking at how our function varies *in time*, so we just ‘fix’ or ‘freeze’ x. Now, the uncertainty is in the energy really but, from a mathematical point of view, that translates into an uncertainty in the argument of our wavefunction. That argument, with the uncertainty included, is equal to:

(E/ħ)·t = [(E_{0} ± n·ħ/2)/ħ]·t = (E_{0}/ħ ± n/2)·t = (E_{0}/ħ)·t ± (n/2)·t

So we can write:

*a*·*e*^{−i·(E/ħ)·t} = *a*·*e*^{−i·[(E0/ħ)·t ± (n/2)·t]} = *a*·*e*^{−i·[(E0/ħ)·t]}·*e*^{∓i·(n/2)·t}

This is valid for *any* value of t. What the expression says is that, from a mathematical point of view, introducing uncertainty about the energy is equivalent to introducing uncertainty about the wavefunction itself. It may be equal to *a*·*e*^{−i·[(E0/ħ)·t]}·*e*^{i·(n/2)·t}, but it may also be equal to *a*·*e*^{−i·[(E0/ħ)·t]}·*e*^{−i·(n/2)·t}. Taking n = 1, the *phases* of the *e*^{−i·t/2} and *e*^{i·t/2} factors are separated by an amount equal to t.
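The split can be verified directly. A check of my own (ħ = 1; the values of a, E_{0}, n and t are illustrative): the wavefunction with the shifted energy equals the unshifted wavefunction times an extra phase factor.

```python
import cmath

# Numerical verification of the split above (hbar = 1; a, E0, n and t are
# illustrative values): a·e^{−i·(E0 ± n/2)·t} equals the product
# a·e^{−i·E0·t}·e^{∓i·(n/2)·t}, so the uncertainty about the energy shows
# up as an extra phase factor multiplying the wavefunction.
a, E0, n, t = 1.0, 4.0, 2, 1.7

for sign in (+1, -1):
    direct = a * cmath.exp(-1j * (E0 + sign * n / 2) * t)
    split = a * cmath.exp(-1j * E0 * t) * cmath.exp(-1j * sign * (n / 2) * t)
    assert abs(direct - split) < 1e-12
print("uncertain energy = extra phase factor: checked")
```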

So… Well…

[…]

Hmm… I am stuck. How is this going to lead me to the ΔE·Δt ≥ ħ/2 principle? To anyone out there: can you help? 🙂

[…]

The thing is: you won’t get the Uncertainty Principle by staring at that formula above. It’s a bit more complicated. The idea is that we have some distribution of the *observables*, like energy and momentum, and that implies some distribution of the associated frequencies, i.e. ω for E, and k for p. The Wikipedia article on the Uncertainty Principle gives you a formal derivation, using the so-called Kennard formulation of the principle. You can have a look, but it involves a lot of formalism, which is what I wanted to avoid here!

I hope you get the idea though. It’s like statistics. First, we assume we *know* the population, and we describe that population using all kinds of summary statistics. But then we reverse the situation: we don’t know the population, but we do have *sample* information, which we also describe using all kinds of summary statistics. Then, based on what we find for the sample, we calculate the estimated statistics for the population itself, like the mean value and the standard deviation, to name the most important ones. So it’s a bit the same here, except that, in quantum mechanics, there may not be any *real* value underneath: the mean and the standard deviation represent something fuzzy, rather than something precise.
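To make the Kennard formulation a bit more concrete without the formalism, here is a numerical sketch of my own (not a derivation): build a Gaussian wave packet, compute the standard deviation of position from |ψ(x)|² and the standard deviation of wavenumber from its Fourier transform, and check that their product is ħ/2 (in units where ħ = 1, so p = k). A Gaussian is the special case that *saturates* the bound.

```python
import numpy as np

# Numerical illustration of the Kennard relation σx·σp ≥ ħ/2, in units
# where ħ = 1 (so p = k). For a Gaussian wave packet the bound is exactly
# saturated: σx·σk = 1/2. All numbers below are illustrative.
sigma_x = 0.8
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]

# Gaussian packet with a carrier wave, normalized so that ∫|ψ|² dx = 1
psi = np.exp(-x**2 / (4.0 * sigma_x**2)) * np.exp(1j * 5.0 * x)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

# spread in position, from |ψ(x)|²
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
sx = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# spread in wavenumber, from the Fourier transform |ψ(k)|²
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
prob_k = np.abs(np.fft.fft(psi))**2
prob_k = prob_k / np.sum(prob_k)
mean_k = np.sum(k * prob_k)
sk = np.sqrt(np.sum((k - mean_k)**2 * prob_k))

print("σx·σk =", sx * sk)        # close to 0.5, i.e. ħ/2 in these units
assert abs(sx * sk - 0.5) < 1e-2
```

A narrower packet in x (smaller `sigma_x`) automatically spreads out in k, and vice versa: that trade-off *is* the Uncertainty Principle, seen through the Fourier transform.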

Hmm… I’ll leave you with these thoughts. We’ll develop them further as we dig into all of this much deeper over the coming weeks. 🙂

**Post scriptum**: I know you expect something more from me, so… Well… Think about the following. If we have some uncertainty about the energy E, we’ll have some uncertainty about the momentum p according to that β = p/E relation. [By the way, please *think* about this relationship: it says, all other things being equal (such as the *inertia*, i.e. the *mass*, of our particle), that more energy will go into more momentum. More specifically, note that ∂p/∂E = β according to this equation. In fact, if we include the *mass* of our particle, i.e. its *inertia*, as potential energy, then we might say that (1−β)·E is the potential energy of our particle, as opposed to its kinetic energy.] So let’s try to think about that.

Let’s denote the uncertainty about the energy as ΔE. As should be obvious from the discussion above, it can be anything: it can mean *two* separate energy levels E = E_{0} ± A, or a potentially infinite *set* of values. However, even if the set is infinite, we know the various energy levels need to be separated by ħ, *at least*. So if the set is infinite, it’s going to be a *countably* infinite set, like the set of natural numbers, or the set of integers. But let’s stick to our example of two values E = E_{0} ± A only, with A = ħ, so E = E_{0} ± ħ and, therefore, ΔE = ±ħ. That implies Δp = Δ(β·E) = β·ΔE = ±β·ħ.

Hmm… This is a bit fishy, isn’t it? We said we’d measure the momentum in units of ħ, but so here we say the uncertainty in the momentum can actually be a fraction of ħ. […] Well… Yes. Now, the momentum is the product of the mass, as measured by the *inertia* of our particle to accelerations or decelerations, and its velocity. If we assume the inertia of our particle, or its *mass*, to be constant (so we say it’s a property of the object that is *not* subject to uncertainty, which, I admit, is a rather dicey assumption: if all other measurable properties of the particle are subject to uncertainty, then why not its mass?), then we can also write: Δp = Δ(m·v) = Δ(m·β) = m·Δβ. [Note that we’re not only assuming that the mass is not subject to uncertainty, but also that the velocity is non-relativistic. If not, we couldn’t treat the particle’s mass as a constant.] But let’s be specific here: what we’re saying is that, if ΔE = ±ħ, then Δv = Δβ will be equal to Δβ = Δp/m = ±(β/m)·ħ. The point to note is that we’re no longer sure about the *velocity* of our particle. Its (relative) velocity is now:

β ± Δβ = β ± (β/m)·ħ

But, because velocity is the ratio of distance over time, this introduces an uncertainty about time and distance. Indeed, if its velocity is β ± (β/m)·ħ, then, over some time T, it will travel some distance X = [β ± (β/m)·ħ]·T. Likewise, if we have some distance X, then our particle will need a time equal to T = X/[β ± (β/m)·ħ].

You’ll wonder what I am trying to say because… Well… If we’d just measure X and T *precisely*, then all the uncertainty is gone and we know if the energy is E_{0} + ħ or E_{0} − ħ. Well… Yes and no. The *uncertainty* is fundamental (at least that’s what quantum physicists believe), so our uncertainty about the time and the distance we’re measuring is equally fundamental: we can have *either* of the two values X = [β ± (β/m)·ħ]·T or T = X/[β ± (β/m)·ħ], whenever or wherever we measure. So we have a ΔX and a ΔT that are equal to ±[(β/m)·ħ]·T and X/[±(β/m)·ħ] respectively. We can relate this to ΔE and Δp:

- ΔX = (1/m)·T·Δp
- ΔT = X/[(β/m)·ΔE]
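Purely as an algebra check of these two relations (the physics is admittedly tentative, as noted), here is a small sketch of my own with illustrative numbers, in units where ħ = 1:

```python
# Purely an algebra check of the two relations above, with illustrative
# numbers (the physics is admittedly tentative, as noted in the text):
# with Δp = β·ħ and ΔE = ħ, in units where ħ = 1, the ΔX and ΔT found
# earlier reduce to the Δp and ΔE forms listed above.
hbar = 1.0
beta, m = 0.6, 2.0      # relative velocity and mass, illustrative values
T, X = 3.0, 1.8         # a time and a distance, illustrative values

dp = beta * hbar        # Δp = β·ħ, from the post scriptum
dE = hbar               # ΔE = ħ

dX = (beta / m) * hbar * T          # ΔX = ±(β/m)·ħ·T (magnitude)
dT = X / ((beta / m) * hbar)        # ΔT = X/[±(β/m)·ħ] (magnitude)

assert abs(dX - (1.0 / m) * T * dp) < 1e-12     # ΔX = (1/m)·T·Δp
assert abs(dT - X / ((beta / m) * dE)) < 1e-12  # ΔT = X/[(β/m)·ΔE]
print("ΔX and ΔT match the listed forms")
```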

You’ll grumble: this still doesn’t give us the Uncertainty Principle in its canonical form. Not at all, really. I know… I need to do some more thinking here. But I feel I am getting somewhere. 🙂 Let me know if you see where, and if you think you can get any further. 🙂

The thing is: you’ll have to read a bit more about Fourier transforms and why and how variables like time and energy, or position and momentum, are so-called conjugate variables. As you can see, energy and time, and position and momentum, are obviously linked through the E·t and **p**·**x** products in the E·t − **p**·**x** sum. That says a lot, and it helps us to understand, in a more intuitive way, why the ΔE·Δt and Δp·Δx products should obey the relation they are obeying, i.e. the Uncertainty Principle, which we write as ΔE·Δt ≥ ħ/2 and Δp·Δx ≥ ħ/2. But *proving* it involves more than just staring at that Ψ(θ) = *a*·*e*^{−iθ} = *a*·*e*^{−i·[(E/ħ)·t − (p/ħ)·x]} relation.

Having said that, it helps to think about how that E·t − p·x sum works. For example, think about two particles, *a* and *b*, with different velocity and mass, but with the same momentum, so p_{a} = p_{b} ⇔ m_{a}·v_{a} = m_{b}·v_{b} ⇔ m_{a}/v_{b} = m_{b}/v_{a}. The *spatial* frequency of the wavefunction would be the same for both, but the *temporal* frequency would be different, because their energy incorporates the rest mass and, hence, because m_{a} ≠ m_{b}, we also know that E_{a} ≠ E_{b}. So… It all works out but, yes, I admit it’s all very strange, and it takes a long time and a lot of reflection to advance our understanding.