I have revisited the Uncertainty Principle a couple of times already, but here I really want to get to the bottom of the thing. What's uncertain? The energy? The time? The wavefunction itself? These questions are not easily answered, and I need to warn you: you won't get much wiser from reading this. I just felt like freewheeling a bit. [Note that the first part of this post repeats what you'll find on the Occam page, or in my post on Occam's Razor. But those posts do not analyze uncertainty, which is what I will be trying to do here.]
Let's first think about the wavefunction itself. It's tempting to think it actually is the particle, somehow. But it isn't. So what is it then? Well… Nobody knows. In my previous post, I said I like to think it travels with the particle, but that doesn't make much sense either. It's like a fundamental property of the particle. Like the color of an apple. But where is that color? In the apple, in the light it reflects, in the retina of our eye, or in our brain? If you know a thing or two about how perception actually works, you'll tend to agree that the quality of color is not in the apple. When everything is said and done, the wavefunction is a mental construct: when learning physics, we start to think of a particle as a wavefunction, but they are two separate things: the particle is real, the wavefunction is imaginary.
But that's not what I want to talk about here. It's about that uncertainty. Where is the uncertainty? You'll say: you just said it was in our brain. No. I didn't say that. It's not that simple. Let's look at the basic assumptions of quantum physics:
- Quantum physics assumes there's always some randomness in Nature and, hence, that we can measure probabilities only. We've got randomness in classical mechanics too, but this is different. This is an assumption about how Nature works: we don't really know what's happening. We don't know the internal wheels and gears, so to speak, or the 'hidden variables', as one interpretation of quantum mechanics would say. In fact, the most commonly accepted interpretation of quantum mechanics says there are no 'hidden variables'.
- However, as Shakespeare has one of his characters say: there is a method in the madness, and the pioneers (I mean Werner Heisenberg, Louis de Broglie, Niels Bohr, Paul Dirac, etcetera) discovered that method: all probabilities can be found by taking the square of the absolute value of a complex-valued wavefunction (often denoted by Ψ), whose argument, or phase (θ), is given by the de Broglie relations ω = E/ħ and k = p/ħ. The generic functional form of that wavefunction is:
Ψ = Ψ(x, t) = a·e^(−iθ) = a·e^(−i(ω·t − k·x)) = a·e^(−i·[(E/ħ)·t − (p/ħ)·x])
That should be obvious by now, as I've written more than a dozen posts on this. 🙂 I still have trouble interpreting this, however, and I am not ashamed of that, because the Great Ones I just mentioned had trouble with it too. It's not the complex exponential that's the problem. That e^(−iθ) is a very simple periodic function, consisting of two sine waves rather than just one. [It's a sine and a cosine, really, but they're the same function: there's just a phase difference of 90 degrees.]
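If you want to see that for yourself, here is a minimal numerical sketch (Python with NumPy; the amplitude a = 1 is an arbitrary choice) confirming the bracketed remark: the real and imaginary parts of e^(−iθ) are a cosine and a sine, 90 degrees out of phase, and the modulus of the whole thing never changes.

```python
import numpy as np

theta = np.linspace(0, 4 * np.pi, 200)   # a range of phases
psi = np.exp(-1j * theta)                # e^(-i*theta) = cos(theta) - i*sin(theta)

# The real part is a cosine, the imaginary part a sine (with a minus sign),
# i.e. the same oscillation shifted by 90 degrees.
assert np.allclose(psi.real, np.cos(theta))
assert np.allclose(psi.imag, -np.sin(theta))

# The modulus is always 1: the 'arrow' just rotates in the complex plane.
assert np.allclose(np.abs(psi), 1.0)
```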
No: the complex exponential is not the difficulty. To understand the wavefunction, we need to understand those de Broglie relations, ω = E/ħ and k = p/ħ, and then, as mentioned, we need to understand the Uncertainty Principle. We need to understand where it comes from. Let's try to go as far as we can by making a few remarks:
- Adding or subtracting two terms in math, as in (E/ħ)·t − (p/ħ)·x, implies the two terms should have the same dimension: we can only add apples to apples, and oranges to oranges. We shouldn't mix them. Now, the (E/ħ)·t and (p/ħ)·x terms are actually dimensionless: they are pure numbers. So that's even better. Just check it: energy is expressed in newton·meter (energy, or work, is force times distance, remember?) or in electronvolts (1 eV = 1.6×10^(−19) J = 1.6×10^(−19) N·m); Planck's constant, as the quantum of action, is expressed in J·s or eV·s; and the unit of (linear) momentum is 1 N·s = 1 kg·m/s. So E/ħ gives a number expressed per second, and p/ħ a number expressed per meter. Therefore, multiplying E/ħ and p/ħ by t and x respectively gives us a dimensionless number indeed. [A quick numerical check follows after this list.]
- It's also an invariant number, which means we'll always get the same value for it, regardless of our frame of reference. That's because the four-vector product pμxμ = E·t − p·x is invariant: it doesn't change when analyzing a phenomenon in one reference frame (e.g. our inertial reference frame) or in another (i.e. in a moving frame).
- Now, Planck's quantum of action, h or ħ (the two differ only in the unit we count it in: h is the action per cycle, while ħ is the action per radian of phase, so ħ = h/2π; both assume we can at least measure one cycle), is the quantum of energy, really. Indeed, if 'energy is the currency of the Universe', and it's real and/or virtual photons that are exchanging it, then it's good to know the currency unit is h, i.e. the energy that's associated with one cycle of a photon. [In case you want to see the logic of this, see my post on the physical constants c, h and α.]
- It's not only time and space that are related, as evidenced by the fact that t and x combine into one four-vector: E and p are related too, of course! They are related through the classical velocity of the particle that we're looking at: E/p = c²/v and, therefore, we can write: E·β = p·c, with β = v/c, i.e. the relative velocity of our particle, measured as a ratio of the speed of light. Now, I should add that the t − x combination makes sense only if we measure time and space in equivalent units. Otherwise, we have to write c·t − x. If we do that, so our unit of distance becomes c meter rather than one meter (or our unit of time becomes the time that light needs to travel one meter), then c = 1, and the E·β = p·c relation becomes E·β = p, which we can also write as β = p/E: the ratio of the momentum and the energy of our particle is its (relative) velocity.
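As a quick sanity check on the remarks above, here is a small numerical sketch (Python with NumPy; the electron and the v = 0.6c value are arbitrary illustrative choices, not something from the argument itself). It verifies that (E/ħ)·t and (p/ħ)·x come out as pure numbers, and that β = p·c/E.

```python
import numpy as np

hbar = 1.054571817e-34      # J·s (reduced Planck constant)
c    = 299792458.0          # m/s
m_e  = 9.1093837015e-31     # kg  (electron mass)

beta  = 0.6                                # arbitrary relative velocity v/c
gamma = 1.0 / np.sqrt(1.0 - beta**2)
E = gamma * m_e * c**2                     # total (relativistic) energy, in J
p = gamma * m_e * beta * c                 # momentum, in kg·m/s = N·s

t, x = 1e-20, 1e-12                        # some time (s) and distance (m)

# E/hbar is 'per second', p/hbar is 'per meter', so the phase is dimensionless:
phase = (E / hbar) * t - (p / hbar) * x
print(phase)

# E·beta = p·c, i.e. beta = p·c/E: the momentum/energy ratio is the velocity.
assert np.isclose(p * c / E, beta)
```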
Combining all of the above, we may want to assume that we are measuring energy and momentum in terms of the Planck constant, i.e. the 'natural' unit for both. In addition, we may also want to assume that we're measuring time and distance in equivalent units. The equation for the phase of our wavefunction then reduces to:
θ = (ω·t − k·x) = E·t − p·x
Now, θ is the argument of a wavefunction, and we can always re-scale such an argument by multiplying or dividing it by some constant. It's just like writing the argument of a wavefunction as v·t − x or (v·t − x)/v = t − x/v, with v the velocity of the waveform that we happen to be looking at. [In case you have trouble following this argument, please check the post I did for my kids on waves and wavefunctions.] Now, the energy conservation principle tells us the energy of a free particle won't change. [Just to remind you, a 'free particle' means it's in a 'field-free' space, so our particle is in a region of uniform potential.] So we can, in this case, treat E as a constant, and divide E·t − p·x by E, which gives us a re-scaled phase for our wavefunction, which I'll write as:
φ = (E·t − p·x)/E = t − (p/E)·x = t − β·x
Alternatively, we could also look at p as a constant, as there is no variation in potential energy that would cause a change in momentum and, hence, in the related kinetic energy. We'd then divide by p and we'd get (E·t − p·x)/p = (E/p)·t − x = t/β − x, which amounts to the same: we can always re-scale by multiplying it with β, which would again yield the same t − β·x argument.
The point is: if we measure energy and momentum in terms of the Planck unit (I mean: in terms of the Planck constant, i.e. the quantum of energy), and if we measure time and distance in 'natural' units too, i.e. we take the speed of light to be unity, then our Platonic wavefunction becomes as simple as:
Φ(φ) = a·e^(−iφ) = a·e^(−i(t − β·x))
This is a wonderful formula, but let me first answer your most likely question: why would we use a relative velocity? Well… Just think of it: when everything is said and done, the whole theory of relativity, and, hence, the whole of physics, is based on one fundamental and experimentally verified fact: the speed of light is absolute. In whatever reference frame, we will always measure it as 299,792,458 m/s. That's obvious, you'll say, but it's actually the weirdest thing ever if you start thinking about it, and it explains why those Lorentz transformations look so damn complicated. In any case, this fact legitimately establishes c as some kind of absolute measure against which all speeds can be measured. Therefore, it is only natural indeed to express a velocity as some number between 0 and 1. Now, that amounts to expressing it as the β = v/c ratio.
Let's now go back to that Φ(φ) = a·e^(−iφ) = a·e^(−i(t − β·x)) wavefunction. Its temporal frequency ω is equal to one, and its spatial frequency k is equal to β = v/c. It couldn't be simpler but, of course, we've got this remarkably simple result only because we re-scaled the argument of our wavefunction using the energy and momentum itself as the scale factor. So, yes, we can re-write the wavefunction of our particle in a particularly elegant and simple form using the only information that we have when looking at quantum-mechanical stuff: energy and momentum, because that's what everything reduces to at that level.
So… Well… We've pretty much explained what quantum physics is all about here. You just need to get used to that complex exponential: e^(−iφ) = cos(−φ) + i·sin(−φ) = cos(φ) − i·sin(φ). It would have been nice if Nature had given us a simple sine or cosine function. [Remember the sine and cosine functions are actually the same, except for a phase difference of 90 degrees: sin(φ) = cos(π/2 − φ) = cos(φ − π/2). So we can always go from one to the other by shifting the origin of our axis.] But… Well… As we've shown so many times already, a real-valued wavefunction doesn't explain the interference we observe, be it interference of electrons or whatever other particles or, for that matter, the interference of electromagnetic waves itself, which, as you know, we also need to look at as a stream of photons, i.e. light quanta, rather than as some kind of infinitely flexible aether that's undulating, like water or air.
However, the analysis above does not include uncertainty. That's as fundamental to quantum physics as de Broglie's equations, so let's think about that now.
Introducing uncertainty
Our information on the energy and the momentum of our particle will be incomplete: we'll write E = E0 ± σE, and p = p0 ± σp. Huh? No ΔE or Δp? Well… It's the same thing, really, but I am a bit tired of using the Δ symbol, so I am using the σ symbol here, which denotes a standard deviation of some density function. It underlines the probabilistic, or statistical, nature of our approach.
The simplest model is that of a two-state system, because it involves two energy levels only: E = E0 ± A, with A some constant. Large or small, it doesn't matter: all is relative anyway. 🙂 We explained the basics of the two-state system using the example of the ammonia molecule, i.e. the NH3 molecule, which consists of one nitrogen and three hydrogen atoms. We had two base states in this system: 'up' or 'down', which we denoted as base state | 1 ⟩ and base state | 2 ⟩ respectively. This 'up' and 'down' had nothing to do with the classical or quantum-mechanical notion of spin, which is related to the magnetic moment. No, it's much simpler than that: the nitrogen atom could be either beneath or above the plane of the hydrogens, with 'beneath' and 'above' being defined in regard to the molecule's direction of rotation around its axis of symmetry.

In any case, for the details, I'll refer you to the post(s) on it. Here, I just want to mention the result. We wrote the amplitude to find the molecule in either one of these two states as:
- C1 = ⟨ 1 | ψ ⟩ = (1/2)·e^(−(i/ħ)·(E0 − A)·t) + (1/2)·e^(−(i/ħ)·(E0 + A)·t)
- C2 = ⟨ 2 | ψ ⟩ = (1/2)·e^(−(i/ħ)·(E0 − A)·t) − (1/2)·e^(−(i/ħ)·(E0 + A)·t)
That gave us the following probabilities:

If our molecule can be in two states only, and it starts off in one of them, then the probability that it will remain in that state will gradually decline, while the probability that it flips into the other state will gradually increase.
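For those who want to check the math, a minimal sketch (Python with NumPy, using ħ = 1 and arbitrary values for E0 and A) reproduces those oscillating probabilities directly from the two amplitudes above: |C1|² = cos²(A·t/ħ), |C2|² = sin²(A·t/ħ), and they always add up to one.

```python
import numpy as np

hbar, E0, A = 1.0, 10.0, 1.0               # arbitrary illustrative values
t = np.linspace(0, 2 * np.pi, 500)

C1 = 0.5 * (np.exp(-1j * (E0 - A) * t / hbar) + np.exp(-1j * (E0 + A) * t / hbar))
C2 = 0.5 * (np.exp(-1j * (E0 - A) * t / hbar) - np.exp(-1j * (E0 + A) * t / hbar))

P1, P2 = np.abs(C1)**2, np.abs(C2)**2

# The probabilities oscillate as cos² and sin² of (A/ħ)·t, and sum to 1:
assert np.allclose(P1, np.cos(A * t / hbar)**2)
assert np.allclose(P2, np.sin(A * t / hbar)**2)
assert np.allclose(P1 + P2, 1.0)
```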
Now, the point you should note is that we get these time-dependent probabilities only because we're introducing two different energy levels: E0 + A and E0 − A. [Note they are separated by an amount equal to 2·A; I'll use that information later.] If we'd have one energy level only (which amounts to saying that we know it, and that it's something definite), then we'd just have one wavefunction, which we'd write as:
a·e^(−iθ) = a·e^(−(i/ħ)·(E0·t − p·x)) = a·e^(−(i/ħ)·(E0·t))·e^((i/ħ)·(p·x))
Note that we can always split our wavefunction into a 'time' part and a 'space' part, which is quite convenient. In fact, because our ammonia molecule stays where it is, it has no momentum: p = 0. Therefore, its wavefunction reduces to:
a·e^(−iθ) = a·e^(−(i/ħ)·E0·t)
As simple as it can be. 🙂 The point is that a wavefunction like this, i.e. a wavefunction that's defined by a definite energy, will always yield a constant and equal probability, both in time as well as in space. That's just the math of it: |a·e^(−iθ)|² = a². Always! If you want to know why, think of Euler's formula and Pythagoras' theorem: cos²θ + sin²θ = 1. Always! 🙂
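A two-line check (Python with NumPy; the amplitude a = 0.25 and the random phases are arbitrary) makes the point:

```python
import numpy as np

a = 0.25
theta = np.random.uniform(0, 2 * np.pi, size=1000)   # any phases whatsoever
psi = a * np.exp(-1j * theta)

# cos²θ + sin²θ = 1, so the squared modulus never varies:
assert np.allclose(np.abs(psi)**2, a**2)
```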
That constant probability is annoying, because it implies our nitrogen atom never 'flips', and we know it actually does, thereby overcoming an energy barrier: it's a phenomenon that's referred to as 'tunneling', and it's real! The probabilities in that graph above are real! Also, if our wavefunction were to represent some moving particle, it would imply that the probability to find it somewhere in space is the same all over space, which implies our particle is everywhere and nowhere at the same time, really.
So, in quantum physics, this problem is solved by introducing uncertainty. Introducing some uncertainty about the energy, or about the momentum, is mathematically equivalent to saying that we're actually looking at a composite wave, i.e. the sum of a finite or potentially infinite set of component waves. So we have the same ω = E/ħ and k = p/ħ relations, but we apply them to n energy levels, or to some continuous range of energy levels ΔE. It amounts to saying that our wavefunction doesn't have one specific frequency: it now has n frequencies, or a range of frequencies Δω = ΔE/ħ. In our two-state system, n = 2, obviously! So we've got two energy levels only and, hence, our composite wave consists of two component waves only.
We know what that does: it ensures our wavefunction is 'contained' in some 'envelope'. It becomes a wavetrain, or a kind of beat note, as illustrated below:

[The animation comes from Wikipedia, and shows the difference between the group and the phase velocity: the green dot travels at the group velocity, while the red dot travels at the phase velocity.]
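You can reproduce the essence of that animation in a few lines (Python with NumPy; the carrier frequency, wavenumber and the small spread are arbitrary illustrative values): adding two component waves with slightly different ω and k gives a carrier modulated by a beat envelope, and the squared modulus follows that envelope, which travels at the group velocity Δω/Δk.

```python
import numpy as np

omega, k = 1.0, 1.0                 # carrier frequency and wavenumber (arbitrary)
d_omega, d_k = 0.05, 0.10           # small spread: two closely spaced components

def psi(x, t):
    # Sum of two component waves with slightly different (omega, k):
    return (np.exp(-1j * ((omega - d_omega) * t - (k - d_k) * x))
          + np.exp(-1j * ((omega + d_omega) * t - (k + d_k) * x)))

x = np.linspace(0, 100, 2000)
t = 3.0

# |psi|² = 2 + 2·cos(2·(d_omega·t − d_k·x)): a beat envelope that moves at the
# group velocity d_omega/d_k, while the carrier moves at the phase velocity omega/k.
assert np.allclose(np.abs(psi(x, t))**2,
                   2 + 2 * np.cos(2 * (d_omega * t - d_k * x)))
```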
So… OK. That should be clear enough. Let's now apply these thoughts to our 'reduced' wavefunction:
Φ(φ) = a·e^(−iφ) = a·e^(−i(t − β·x))
Thinking about uncertainty
Frankly, I tried to fool you above. If the functional form of the wavefunction is a·e^(−(i/ħ)·(E·t − p·x)), then we can measure E and p in whatever unit we want, including h or ħ, but we cannot re-scale the argument of the function, i.e. the phase θ, without changing the functional form itself. I explained that in that post for my kids on wavefunctions, in which I showed that we may represent the same electromagnetic wave by two different functional forms:
F(ct − x) = G(t − x/c)
So F and G represent the same wave, but they are different wavefunctions. In this regard, you should note that the argument of F is expressed in distance units, as we multiply t by the speed of light (so it's as if our time unit is 299,792,458 m now), while the argument of G is expressed in time units, as we divide x by the distance traveled in one second. But F and G are different functional forms. Just do an example and take a simple sine function: you'll agree that sin(θ) ≠ sin(θ/c) for all values of θ, except 0. Re-scaling changes the frequency, or the wavelength, and it does so quite drastically in this case. 🙂 Likewise, you can see that e^(−i·(θ/E)) = [e^(−iθ)]^(1/E), so that's a very different function. In short, we were a bit too adventurous above. [A small numerical aside on this F versus G point follows after the next formula.] Now, while we can drop the 1/ħ in the a·e^(−(i/ħ)·(E·t − p·x)) function when measuring energy and momentum in units that are numerically equal to ħ, we'll just revert to our original wavefunction for the time being, which equals:
Ψ(θ) = a·e^(−iθ) = a·e^(−i·[(E/ħ)·t − (p/ħ)·x])
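Here is that small aside on the F versus G point (Python with NumPy; the sine waveform and the sample values are arbitrary): the two functional forms agree when evaluated on the same physical points, but they are very different functions of their argument.

```python
import numpy as np

c = 299792458.0

# The same wave written two ways: F takes an argument in metres (c·t − x),
# G takes an argument in seconds (t − x/c).
F = lambda u: np.sin(u)
G = lambda u: np.sin(c * u)

x = np.linspace(0.0, 10.0, 7)
t = 1.0e-8
print(np.allclose(F(c * t - x), G(t - x / c)))    # True: same physical wave

# But as functions they differ: re-scaling the argument changes the frequency.
print(np.isclose(np.sin(1.0), np.sin(1.0 / c)))   # False: sin(1) ≈ 0.84, sin(1/c) ≈ 3.3e-9
```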
Let's now introduce uncertainty once again. The simplest situation is that we have two closely spaced energy levels. In theory, the difference between the two can be as small as ħ, so we'd write: E = E0 ± ħ/2. [Remember what I said about the ± A: it means the difference is 2A.] However, we can generalize this and write: E = E0 ± n·ħ/2, with n = 1, 2, 3,… This does not imply any greater uncertainty (we still have two states only) but just a larger difference between the two energy levels.
Let's also simplify by looking at the 'time part' of our equation only, i.e. a·e^(−i·(E/ħ)·t). That doesn't mean we don't care about the 'space part': it just means that we're only looking at how our function varies in time, so we just 'fix' or 'freeze' x. Now, the uncertainty is in the energy, really, but, from a mathematical point of view, it becomes an uncertainty in the argument of our wavefunction. That argument is, obviously, equal to:
(E/ħ)·t = [(E0 ± n·ħ/2)/ħ]·t = (E0/ħ ± n/2)·t = (E0/ħ)·t ± (n/2)·t
So we can write:
a·e^(−i·(E/ħ)·t) = a·e^(−i·[(E0/ħ)·t ± (n/2)·t]) = a·e^(−i·(E0/ħ)·t)·e^(∓i·(n/2)·t)
This is valid for any value of t. What the expression says is that, from a mathematical point of view, introducing uncertainty about the energy is equivalent to introducing uncertainty about the wavefunction itself. It may be equal to a·e^(−i·(E0/ħ)·t)·e^(+i·(n/2)·t), but it may also be equal to a·e^(−i·(E0/ħ)·t)·e^(−i·(n/2)·t). The phases of the e^(−i·(n/2)·t) and e^(+i·(n/2)·t) factors are separated by an amount equal to n·t.
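To connect this back to the two-state example above: if we just add the two possibilities with equal weight, as we did for the ammonia molecule, the carrier e^(−i·(E0/ħ)·t) gets modulated by a cos((n/2)·t) envelope, and the probability is no longer constant. A minimal sketch (Python with NumPy; ħ = 1 and the values of E0 and n are arbitrary):

```python
import numpy as np

hbar, E0, n = 1.0, 10.0, 2                 # arbitrary illustrative values
t = np.linspace(0, 4 * np.pi, 400)

# The two 'uncertain' wavefunctions, for E = E0 + n·ħ/2 and E = E0 − n·ħ/2:
psi_plus  = np.exp(-1j * ((E0 / hbar) + n / 2) * t)
psi_minus = np.exp(-1j * ((E0 / hbar) - n / 2) * t)

# Equal-weight sum: a carrier e^(−i·(E0/ħ)·t) times a real envelope cos(n·t/2),
# i.e. the same structure as the C1 amplitude of the two-state system.
combo = 0.5 * (psi_plus + psi_minus)
assert np.allclose(combo, np.exp(-1j * (E0 / hbar) * t) * np.cos(n * t / 2))

# The squared modulus is no longer constant: it oscillates as cos²(n·t/2).
assert np.allclose(np.abs(combo)**2, np.cos(n * t / 2)**2)
```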
So… Well…
[…]
Hmm… I am stuck. How is this going to lead me to the ΔE·Δt ≥ ħ/2 principle? To anyone out there: can you help? 🙂
[…]
The thing is: you won't get to the Uncertainty Principle by staring at that formula above. It's a bit more complicated. The idea is that we have some distribution of the observables, like energy and momentum, and that implies some distribution of the associated frequencies, i.e. ω for E, and k for p. The Wikipedia article on the Uncertainty Principle gives you a formal derivation of it, in the so-called Kennard formulation. You can have a look, but it involves a lot of formalism, which is what I wanted to avoid here!
I hope you get the idea, though. It's like statistics. First, we assume we know the population, and we describe that population using all kinds of summary statistics. But then we reverse the situation: we don't know the population, but we do have sample information, which we also describe using all kinds of summary statistics. Then, based on what we find for the sample, we calculate the estimated statistics for the population itself, like the mean value and the standard deviation, to name the most important ones. So it's a bit the same here, except that, in quantum mechanics, there may not be any real value underneath: the mean and the standard deviation represent something fuzzy, rather than something precise.
Hmm… I'll leave you with these thoughts. We'll develop them further as we dig much deeper into all of this over the coming weeks. 🙂
Post scriptum: I know you expect something more from me, so… Well… Think about the following. If we have some uncertainty about the energy E, we'll have some uncertainty about the momentum p, according to that β = p/E relation. [By the way, please think about this relationship: it says, all other things being equal (such as the inertia, i.e. the mass, of our particle), that more energy will all go into more momentum. More specifically, note that ∂p/∂E = β according to this equation. In fact, if we include the mass of our particle, i.e. its inertia, as potential energy, then we might say that (1 − β)·E is the potential energy of our particle, as opposed to its kinetic energy.] So let's try to think about that.
Let's denote the uncertainty about the energy as ΔE. As should be obvious from the discussion above, it can mean anything: two separate energy levels E = E0 ± A, or a potentially infinite set of values. However, even if the set is infinite, we know the various energy levels need to be separated by ħ, at least. So if the set is infinite, it's going to be a countably infinite set, like the set of natural numbers, or the set of integers. But let's stick to our example of two values E = E0 ± A only, with A = ħ, so E = E0 ± ΔE = E0 ± ħ and, therefore, ΔE = ± ħ. That implies Δp = Δ(β·E) = β·ΔE = ± β·ħ.
Hmm… This is a bit fishy, isn't it? We said we'd measure the momentum in units of ħ, but here we say the uncertainty in the momentum can actually be a fraction of ħ. […] Well… Yes. Now, the momentum is the product of the mass, as measured by the inertia of our particle to accelerations or decelerations, and its velocity. If we assume the inertia of our particle, or its mass, to be constant (so we say it's a property of the object that is not subject to uncertainty, which, I admit, is a rather dicey assumption: if all other measurable properties of the particle are subject to uncertainty, then why not its mass?), then we can also write: Δp = Δ(m·v) = Δ(m·β) = m·Δβ. [Note that we're not only assuming that the mass is not subject to uncertainty, but also that the velocity is non-relativistic. If not, we couldn't treat the particle's mass as a constant.] But let's be specific here: what we're saying is that, if ΔE = ± ħ, then Δv = Δβ will be equal to Δβ = Δp/m = ± (β/m)·ħ. The point to note is that we're no longer sure about the velocity of our particle. Its (relative) velocity is now:
β ± Δβ = β ± (β/m)·ħ
But, because velocity is the ratio of distance over time, this introduces an uncertainty about time and distance. Indeed, if its velocity is β ± (β/m)·ħ, then, over some time T, it will travel some distance X = [β ± (β/m)·ħ]·T. Likewise, if we have some distance X, then our particle will need a time equal to T = X/[β ± (β/m)·ħ].
You'll wonder what I am trying to say because… Well… If we'd just measure X and T precisely, then all the uncertainty is gone and we know if the energy is E0 + ħ or E0 − ħ. Well… Yes and no. The uncertainty is fundamental (at least that's what quantum physicists believe), so our uncertainty about the time and the distance we're measuring is equally fundamental: we can have either of the two values X = [β ± (β/m)·ħ]·T or T = X/[β ± (β/m)·ħ], whenever or wherever we measure. So we have a ΔX and a ΔT that are equal to ± [(β/m)·ħ]·T and X/[± (β/m)·ħ] respectively. We can relate these to ΔE and Δp (see the little numerical sketch after the two expressions below):
- ΔX = (1/m)·T·Δp
- ΔT = X/[(β/m)·ΔE]
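Here is that little numerical sketch (Python; all values, including the mass, β, X and T, are arbitrary illustrative choices in natural units with ħ = c = 1): it simply plugs ΔE = ħ into the relations we just wrote down.

```python
# Natural units: hbar = c = 1. All numbers below are arbitrary, for illustration only.
hbar = 1.0
m, beta = 5.0, 0.6            # mass and relative velocity

dE = hbar                     # uncertainty in the energy: E = E0 ± ħ
dp = beta * dE                # Δp = β·ΔE
d_beta = dp / m               # Δβ = Δp/m (mass treated as certain, non-relativistic)

T = 10.0                      # some time of flight
X = beta * T                  # and the distance it implies

dX = (1.0 / m) * T * dp       # ΔX = (1/m)·T·Δp
dT = X / ((beta / m) * dE)    # ΔT = X/[(β/m)·ΔE]

print(dp, d_beta, dX, dT)
```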
You'll grumble: this still doesn't give us the Uncertainty Principle in its canonical form. Not at all, really. I know… I need to do some more thinking here. But I feel I am getting somewhere. 🙂 Let me know if you see where, and if you think you can get any further. 🙂
The thing is: you'll have to read a bit more about Fourier transforms, and about why and how variables like time and energy, or position and momentum, are so-called conjugate variables. As you can see, energy and time, and position and momentum, are obviously linked through the E·t and p·x products in the E0·t − p·x sum. That says a lot, and it helps us to understand, in a more intuitive way, why the ΔE·Δt and Δp·Δx products should obey the relation they are obeying, i.e. the Uncertainty Principle, which we write as ΔE·Δt ≥ ħ/2 and Δp·Δx ≥ ħ/2. But proving it involves more than just staring at that Ψ(θ) = a·e^(−iθ) = a·e^(−i·[(E/ħ)·t − (p/ħ)·x]) relation.
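To make that Fourier-transform remark a bit more tangible, here is a minimal numerical illustration (Python with NumPy, assuming a Gaussian wave packet, which happens to be the minimum-uncertainty case): the spread of the packet in x and the spread of its Fourier transform in k = p/ħ multiply out to 1/2, i.e. σx·σp = ħ/2, which is exactly the Kennard bound.

```python
import numpy as np

# A Gaussian wave packet whose probability density |psi(x)|² has standard deviation sigma_x.
sigma_x = 1.0
x = np.linspace(-40, 40, 2**14)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma_x**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)              # normalise

# Its Fourier transform gives the distribution over wavenumbers k (i.e. over p/ħ).
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
dk = k[1] - k[0]
phi = np.fft.fftshift(np.fft.fft(psi)) * dx
prob_k = np.abs(phi)**2
prob_k /= np.sum(prob_k) * dk

sigma_k = np.sqrt(np.sum(k**2 * prob_k) * dk)            # the mean of k is zero here

# For a Gaussian packet the product sits at the minimum: sigma_x·sigma_k = 1/2,
# i.e. sigma_x·sigma_p = ħ/2, the Kennard form of the Uncertainty Principle.
print(sigma_x * sigma_k)                                  # ≈ 0.5
```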
Having said that, it helps to think about how that E·t − p·x sum works. For example, think about two particles, a and b, with different velocity and mass, but with the same momentum, so pa = pb ⇔ ma·va = mb·vb ⇔ ma/vb = mb/va. The spatial frequency of the wavefunction would be the same for both, but the temporal frequency would be different, because their energy incorporates the rest mass and, hence, because ma ≠ mb, we also know that Ea ≠ Eb. So… It all works out but, yes, I admit it's all very strange, and it takes a long time and a lot of reflection to advance our understanding.