# The Uncertainty Principle revisited

Pre-script (dated 26 June 2020): This post has become less relevant (even irrelevant, perhaps) because my views on all things quantum-mechanical have evolved significantly as a result of my progression towards a more complete realist (classical) interpretation of quantum physics. I keep blog posts like these mainly because I want to keep track of where I came from. I might review them one day, but I currently don’t have the time or energy for it. 🙂

Original post:

I’ve written a few posts on the Uncertainty Principle already. See, for example, my post on the energy-time expression for it (ΔE·Δt ≥ h). So why am I coming back to it once more? Not sure. I felt I left some stuff out. So I am writing this post to just complement what I wrote before. I’ll do so by explaining, and commenting on, the ‘semi-formal’ derivation of the so-called Kennard formulation of the Principle in the Wikipedia article on it.

The Kennard inequalities, σxσp ≥ ħ/2 and σEσt ≥ ħ/2, are more accurate than the more general Δx·Δp ≥ h and ΔE·Δt ≥ h expressions one often sees, which are an early formulation of the Principle by Niels Bohr, and which Heisenberg himself used when explaining the Principle in a thought experiment picturing a gamma-ray microscope. I presented Heisenberg’s thought experiment in another post, and so I won’t repeat myself here. I just want to mention that it ‘proves’ the Uncertainty Principle using the Planck-Einstein relations for the energy and momentum of a photon:

E = hf and p = h/λ

Heisenberg’s thought experiment is not a real proof, of course. But then what’s a real proof? The mentioned ‘semi-formal’ derivation looks more impressive, because more mathematical, but it’s not a ‘proof’ either (I hope you’ll understand why I am saying that after reading my post). The main difference between Heisenberg’s thought experiment and the mathematical derivation in the mentioned Wikipedia article is that the ‘mathematical’ approach is based on the de Broglie relation. That de Broglie relation looks the same as the Planck-Einstein relation (p = h/λ), but it’s fundamentally different.

Indeed, the momentum of a photon (i.e. the p we use in the Planck-Einstein relation) is not the momentum one associates with a proper particle, such as an electron or a proton (so that’s the p we use in the de Broglie relation). The momentum of a particle is defined as the product of its mass (m) and velocity (v). Photons don’t have a (rest) mass, and their velocity is absolute (c), so how do we define momentum for a photon? There are a couple of ways to go about it, but the two most obvious ones are probably the following:

1. We can use the classical theory of electromagnetic radiation and show that the momentum of a photon is related to the magnetic field (we usually only analyze the electric field), and the so-called radiation pressure that results from it. It yields the p = E/c formula which we need to go from E = hf to p = h/λ, using the ubiquitous relation between the frequency, the wavelength and the wave velocity (c = λf). In case you’re interested in the detail, just click on the radiation pressure link.
2. We can also use the mass-energy equivalence E = mc². Hence, the equivalent mass of the photon is E/c², which is relativistic mass only. However, we can multiply that mass with the photon’s velocity, which is c, thereby getting the very same value for its momentum: p = c·E/c² = E/c.

So Heisenberg’s ‘proof’ uses the Planck-Einstein relations, as it analyzes the Uncertainty Principle more as an observer effect: probing matter with light, so to say. In contrast, the mentioned derivation takes the de Broglie relation itself as the point of departure. As mentioned, the de Broglie relations look exactly the same as the Planck-Einstein relations (E = hf and p = h/λ), but the model behind them is very different. In fact, that’s what the Uncertainty Principle is all about: it says that the de Broglie frequency and/or wavelength cannot be determined exactly: if we want to localize a particle, somewhat at least, we’ll be dealing with a frequency range Δf. As such, the de Broglie relation is actually somewhat misleading at first. Let’s talk about the model behind it.

A particle, like an electron or a proton, traveling through space is described by a complex-valued wavefunction, usually denoted by the Greek letter psi (Ψ) or phi (Φ). This wavefunction has a phase, usually denoted as θ (theta), which (because we assume the wavefunction is a nice periodic function) varies as a function of time and space. To be precise, we write θ as θ = ωt − kx or, if the wave is traveling in the other direction, as θ = kx − ωt.

I’ve explained this in a couple of posts already, including my previous post, so I won’t repeat myself here. Let me just note that ω is the angular frequency, which we express in radians per second, rather than cycles per second, so ω = 2πf (one cycle covers 2π rad). As for k, that’s the wavenumber, which is often described as the spatial frequency, because it’s expressed in cycles per meter or, more often (and surely in this case), in radians per meter. Hence, if we freeze time, this number is the rate of change of the phase in space. Because one cycle is, again, 2π rad, and one cycle corresponds to the wave traveling one wavelength (i.e. λ meter), it’s easy to see that k = 2π/λ. We can use these definitions to re-write the de Broglie relations E = hf and p = h/λ as:

E = ħω and p = ħk, with ħ = h/2π
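As a quick numerical sanity check (in Python, with arbitrary sample values for the frequency and wavelength), the angular form is indeed just a rewriting of E = hf and p = h/λ:

```python
import math

# E = h*f and p = h/lambda, rewritten with omega = 2*pi*f and k = 2*pi/lambda,
# become E = hbar*omega and p = hbar*k, with hbar = h/(2*pi).
h = 6.62607015e-34        # Planck's constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck's constant

f = 1.0e15                # sample frequency, Hz (arbitrary)
lam = 1.0e-10             # sample wavelength, m (arbitrary: 1 angstrom)

omega = 2 * math.pi * f   # angular frequency, rad/s
k = 2 * math.pi / lam     # wavenumber, rad/m

# Both forms give the same energy and momentum:
assert math.isclose(hbar * omega, h * f)
assert math.isclose(hbar * k, h / lam)
```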

What about the wave velocity? For a photon, we have c = λf and, hence, c = (2π/k)(ω/2π) = ω/k. For ‘particle waves’ (or matter waves, if you prefer that term), it’s much more complicated, because we need to distinguish between the so-called phase velocity (vp) and the group velocity (vg). The phase velocity is what we’re used to: it’s the product of the frequency (the number of cycles per second) and the wavelength (the distance traveled by the wave over one cycle), or the ratio of the angular frequency and the wavenumber, so we have, once again, λf = ω/k = vp. However, this phase velocity is not the classical velocity of the particle that we are looking at. That’s the so-called group velocity, which corresponds to the velocity of the wave packet representing the particle (or ‘wavicle’, if you prefer that term), as illustrated below.

The animation below illustrates the difference between the phase and the group velocity even more clearly: the green dot travels with the ‘wavicles’, while the red dot travels with the phase. As mentioned above, the group velocity corresponds to the classical velocity of the particle (v). The phase velocity, however, is the velocity of a mathematical point that actually travels faster than light. It is a mathematical point only, which does not carry a signal (unlike the modulation of the wave itself, i.e. the traveling ‘groups’) and, hence, it does not contradict the fundamental principle of relativity theory: the speed of light is absolute, and nothing travels faster than light (except mathematical points, as you can, hopefully, appreciate now).
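To see the difference between the two velocities concretely, here’s a small numerical sketch. It assumes the nonrelativistic dispersion relation ω = ħk²/(2m), i.e. kinetic energy E = p²/(2m) only; note that the faster-than-light phase velocity mentioned above appears when the rest energy mc² is included in E, which this toy calculation leaves out:

```python
hbar = 1.054571817e-34   # J*s
m = 9.1093837015e-31     # electron mass, kg
v = 2.2e6                # classical electron velocity, m/s

k = m * v / hbar         # de Broglie wavenumber, from p = m*v = hbar*k

def omega(k):
    # nonrelativistic dispersion: omega = hbar*k^2/(2m), i.e. E = p^2/(2m)
    return hbar * k**2 / (2 * m)

v_phase = omega(k) / k   # phase velocity: omega/k
dk = k * 1e-6
v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)  # group velocity: d(omega)/dk

print(v_phase / v)  # ~0.5: the phase travels at half the classical velocity here
print(v_group / v)  # ~1.0: the group travels at the classical velocity
```

So, whatever the dispersion relation, it’s the group velocity dω/dk that matches the classical particle velocity; the phase velocity ω/k is something else entirely.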

The two animations above do not represent the quantum-mechanical wavefunction, because the functions that are shown are real-valued, not complex-valued. To imagine a complex-valued wave, you should think of something like the ‘wavicle’ below or, if you prefer animations, the standing waves underneath (i.e. C to H: A and B just present the mathematical model behind them, which is that of a mechanical oscillator, like a mass on a spring indeed). These representations clearly show the real as well as the imaginary part of complex-valued wavefunctions.

With this general introduction, we are now ready for the more formal treatment that follows. So our wavefunction Ψ is a complex-valued function in space and time. A very general shape for it is one we used in a couple of posts already:

Ψ(x, t) ∝ e^(i(kx − ωt)) = cos(kx − ωt) + i·sin(kx − ωt)
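That last equality is just Euler’s formula, which is easy to verify numerically for some arbitrary sample values of k, ω, x and t:

```python
import cmath
import math

k, omega, x, t = 2.0, 5.0, 0.3, 0.1   # arbitrary sample values
theta = k * x - omega * t             # the phase of the wavefunction

z = cmath.exp(1j * theta)             # e^(i*theta)

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta)
assert math.isclose(z.real, math.cos(theta))
assert math.isclose(z.imag, math.sin(theta))
# ... and its modulus is always 1, whatever the phase:
assert math.isclose(abs(z), 1.0)
```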

If you don’t know anything about complex numbers, I’d suggest you read my short crash course on them in the essentials page of this blog, because I don’t have the space nor the time to repeat all of that. Now, we can use the de Broglie relation relating the momentum of a particle with a wavenumber (p = ħk) to re-write our psi function as:

Ψ(x, t) ∝ e^(i(kx − ωt)) = e^(i(px/ħ − ωt))

Note that I am using the ‘proportional to’ symbol (∝) because I don’t worry about normalization right now. Indeed, from all of my other posts on this topic, you know that we have to take the absolute square of all these probability amplitudes to arrive at a probability density function, describing the probability of the particle effectively being at point x in space at point t in time, and that all those probabilities, over the function’s domain, have to add up to 1. So we should insert some normalization factor.

Having said that, the problem with the wavefunction above is not normalization really, but the fact that it yields a uniform probability density function. In other words, the particle position is extremely uncertain in the sense that it could be anywhere. Let’s calculate it using a little trick: the absolute square of a complex number equals the product of itself with its (complex) conjugate. Hence, if z = r·e^(iθ), then |z|² = z·z* = r·e^(iθ)·r·e^(−iθ) = r²·e^(iθ−iθ) = r²·e^0 = r². Now, in this case, assuming unique values for k, ω and p, which we’ll write as k0, ω0 and p0 (and, because we’re freezing time, we can also write t = t0), we should write:

āĪØ(x)ā2Ā = āa0ei(p0x/Ä§ ā Ļ0t0)Ā ā2Ā =Ā āa0eip0x/Ä§Ā eāiĻ0t0Ā ā2Ā =Ā āa0eip0x/Ä§Ā ā2Ā āeāiĻ0t0Ā ā2Ā = a02Ā

Note that, this time around, I did insert some normalization constant a0 as well, so that’s OK. But so the problem is that this very general shape of the wavefunction gives us a constant as the probability for the particle being somewhere between some point a and another point b in space. More formally, we get the area of a rectangle when we calculate the probability P[a ≤ X ≤ b] as we should calculate it, which is as follows:

P[a ≤ X ≤ b] = ∫_a^b |Ψ(x)|² dx

More specifically, because we’re talking one-dimensional space here, we get P[a ≤ X ≤ b] = (b−a)·a0². Now, you may think that such uniform probability makes sense. For example, an electron may be in some orbital around a nucleus, and so you may think that all ‘points’ on the orbital (or within the ‘sphere’, or whatever volume it is) may be equally likely. Or, in another example, we may know an electron is going through some slit and, hence, we may think that all points in that slit should be equally likely positions. However, we know that it is not the case. Measurements show that not all points are equally likely. For an orbital, we get complicated patterns, such as the one shown below, and please note that the different colors represent different complex numbers and, hence, different probabilities.

Also, we know that electrons going through a slit will produce an interference pattern, even if they go through it one by one! Hence, we cannot associate some flat line with them: it has to be a proper wavefunction, which implies, once again, that we can’t accept a uniform distribution.

In short, uniform probability density functions are not what we see in Nature. They’re non-uniform, like the (very simple) non-uniform distributions shown below. [The left-hand side shows the wavefunction, while the right-hand side shows the associated probability density function: the first two are static (i.e. they do not vary in time), while the third one shows a probability distribution that does vary with time.]

I should also note that, even if you would dare to think that a uniform distribution might be acceptable in some cases (which, let me emphasize this, it is not), an electron can surely not be ‘anywhere’. Indeed, the normalization condition implies that, if we’d have a uniform distribution and if we’d consider all of space, i.e. if we let a go to −∞ and b to +∞, then a0² would tend to zero, which means we’d have a particle that is, literally, everywhere and nowhere at the same time.

In short, a uniform probability distribution does not make sense: we’ll generally have some idea of where the particle is most likely to be, within some range at least. I hope I made myself clear here.

Now, before I continue, I should make some other point as well. You know that the Planck constant (h or ħ) is unimaginably small: about 1×10⁻³⁴ J·s (joule-second). In fact, I’ve repeatedly made that point in various posts. However, having said that, I should add that, while it’s unimaginably small, the uncertainties involved are quite significant. Let us indeed look at the value of ħ by relating it to that σxσp ≥ ħ/2 relation.

Let’s first look at the units. The uncertainty in the position should obviously be expressed in distance units, while momentum is expressed in kg·m/s units. So that works out, because 1 joule is the energy transferred (or work done) when applying a force of 1 newton (N) over a distance of 1 meter (m). In turn, one newton is the force needed to accelerate a mass of one kg at the rate of 1 meter per second per second (this is not a typing mistake: it’s an acceleration of 1 m/s per second, so the unit is m/s²: meter per second squared). Hence, 1 J·s = 1 N·m·s = 1 (kg·m/s²)·m·s = 1 kg·m²/s. Now, that’s the same dimension as the ‘dimensional product’ for momentum and distance: m·(kg·m/s) = kg·m²/s.

Now, these units (kg, m and s) are all rather astronomical at the atomic scale and, hence, h and ħ are usually expressed in other units, notably eV·s (electronvolt-second). However, using the standard SI units gives us a better idea of what we’re talking about. If we split the ħ ≈ 1×10⁻³⁴ J·s value (let’s forget about the 1/2 factor for now) ‘evenly’ over σx and σp (whatever that means: all depends on the units, of course!), then both factors will have magnitudes of the order of 1×10⁻¹⁷: 1×10⁻¹⁷ m times 1×10⁻¹⁷ kg·m/s gives us 1×10⁻³⁴ J·s.

You may wonder how this 1×10⁻¹⁷ m compares to, let’s say, the classical electron radius, for example. The classical electron radius is, roughly speaking, the ‘space’ an electron seems to occupy as it scatters incoming light. The idea is illustrated below (credit for the image goes to Wikipedia, as usual). The classical electron radius, or Thomson scattering length, is about 2.818×10⁻¹⁵ m, so that’s almost 300 times our ‘uncertainty’ (1×10⁻¹⁷ m). Not bad: it means that we can effectively relate our ‘uncertainty’ in regard to the position to some actual dimension in space. In this case, we’re talking the femtometer scale (1 fm = 10⁻¹⁵ m), and so you’ve surely heard of this before.

What about the other ‘uncertainty’, the one for the momentum (1×10⁻¹⁷ kg·m/s)? What’s the typical (linear) momentum of an electron? Its mass, expressed in kg, is about 9.1×10⁻³¹ kg. We also know its velocity: relative to the speed of light, it’s that magical number α = v/c, about which I wrote in some other posts already, so v = αc ≈ 0.0073·3×10⁸ m/s ≈ 2.2×10⁶ m/s. Now, 9.1×10⁻³¹ kg times 2.2×10⁶ m/s is about 2×10⁻²⁴ kg·m/s, so our proposed ‘uncertainty’ in regard to the momentum (1×10⁻¹⁷ kg·m/s) is some five million times larger than the typical value for it. Now that is, obviously, not so good. [Note that calculations like this are extremely rough. In fact, when one talks electron momentum, it’s usually angular momentum, which is ‘analogous’ to linear momentum, but angular momentum involves very different formulas. If you want to know more about this, check my post on it.]

Of course, now you may feel that we didn’t ‘split’ the uncertainty in a way that makes sense: those −17 exponents don’t work, obviously. So let’s take 1×10⁻²⁴ kg·m/s for σp, which is half of that ‘typical’ value we calculated. Then we’d have 1×10⁻¹⁰ m for σx (1×10⁻¹⁰ m times 1×10⁻²⁴ kg·m/s is, once again, 1×10⁻³⁴ J·s). But then that uncertainty suddenly becomes a huge number: 1×10⁻¹⁰ m is one angstrom, the size of the atom itself (the Bohr radius is about 0.53 Å). So it’s huge as compared to the pico- or femtometer scale (1 pm = 1×10⁻¹² m, 1 fm = 1×10⁻¹⁵ m) which we’d sort of expect to see when we’re talking electrons.
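Here’s that arithmetic spelled out with CODATA values; note that 9.1×10⁻³¹ kg times 2.2×10⁶ m/s works out to about 2×10⁻²⁴ kg·m/s, and that halving it gives a position spread of about one angstrom:

```python
hbar = 1.054571817e-34    # J*s
m_e = 9.1093837015e-31    # electron mass, kg
alpha = 0.0072973525693   # fine-structure constant
c = 2.99792458e8          # speed of light, m/s

v = alpha * c             # ~2.19e6 m/s
p = m_e * v               # typical electron momentum

sigma_p = p / 2           # take half the typical momentum as the spread
sigma_x = hbar / sigma_p  # matching position spread, using sigma_x*sigma_p ~ hbar

print(p)        # ~2.0e-24 kg*m/s
print(sigma_x)  # ~1.06e-10 m: about one angstrom, i.e. two Bohr radii
```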

OK. Let me get back to the lesson. Why this digression? Not sure. I think I just wanted to show that the Uncertainty Principle involves ‘uncertainties’ that are extremely relevant: despite the unimaginable smallness of the Planck constant, these uncertainties are quite significant at the atomic scale. But back to the ‘proof’ of Kennard’s formulation. Here we need to discuss the ‘model’ we’re using. The rather simple animation below (again, credit for it has to go to Wikipedia) illustrates it wonderfully.

Look at it carefully: we start with a ‘wave packet’ that looks a bit like a normal distribution, but it isn’t, of course. We have negative and positive values, and normal distributions don’t have that. So it’s a wave alright. Of course, you should, once more, remember that we’re only seeing one part of the complex-valued wave here (the real or imaginary part: it could be either). But so then we’re superimposing waves on it. Note the increasing frequency of these waves, and also note how the wave packet becomes increasingly localized with the addition of these waves. In fact, the so-called Fourier analysis, of which you’ve surely heard before, is a mathematical operation that does the reverse: it separates a wave packet into its individual component waves.

So now we know the ‘trick’ for reducing the uncertainty in regard to the position: we just add waves with different frequencies. Of course, different frequencies imply different wavenumbers and, through the de Broglie relation, we’ll also have different values for the ‘momentum’ associated with these component waves. Let’s write these various values as kn, ωn and pn respectively, with n going from 0 to N. Of course, our point in time remains frozen at t0. So we get a wavefunction that’s, quite simply, the sum of N component waves, and so we write:

Ψ(x) = ∑ an·e^(i(pnx/ħ − ωnt0)) = ∑ an·e^(ipnx/ħ)·e^(−iωnt0) = ∑ An·e^(ipnx/ħ)
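Here’s a toy version of that sum in Python (natural units with ħ = 1, and a hypothetical Gaussian profile for the coefficients an): superposing plane waves with a range of momenta produces a probability density that is no longer uniform but peaked, i.e. localized:

```python
import numpy as np

hbar = 1.0                               # natural units (toy example)
p_n = np.linspace(8.0, 12.0, 41)         # component momenta around p0 = 10
a_n = np.exp(-(p_n - 10.0) ** 2 / 2.0)   # Gaussian weights for the coefficients
x = np.linspace(-10.0, 10.0, 2001)

# Psi(x) = sum over n of a_n * exp(i * p_n * x / hbar), at a frozen point in time
psi = (a_n[None, :] * np.exp(1j * p_n[None, :] * x[:, None] / hbar)).sum(axis=1)
prob = np.abs(psi) ** 2                  # the (un-normalized) probability density

# The density peaks at x = 0, where all the components are in phase,
# and is tiny at the edges of the window: the packet is localized.
assert prob.argmax() == len(x) // 2
assert prob[0] < 0.05 * prob.max()
```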

Note that, because of the e^(−iωnt0) factor, we now have complex-valued coefficients An = an·e^(−iωnt0) in front. More formally, we say that An represents the relative contribution of the mode pn to the overall Ψ(x) wave. Hence, we can write these coefficients A as a function of p. Because Greek letters always make more of an impression, we’ll use the Greek letter Φ (phi) for it. 🙂 Now, we can go to the continuum limit and, hence, transform that sum above into an infinite sum, i.e. an integral. So our wave function then becomes an integral over all possible modes, which we write as:

Ψ(x) = [1/√(2πħ)]·∫ Φ(p)·e^(ipx/ħ) dp

Don’t worry about that new 1/√(2πħ) factor in front. That’s, once again, something that has to do with normalization and scales. It’s the integral itself you need to understand. We’ve got that Φ(p) function there, which is nothing but our An coefficient, but for the continuum case. In fact, these relative contributions Φ(p) are now referred to as the amplitude of all modes p, and so Φ(p) is actually another wave function: it’s the wave function in the so-called momentum space.

You’ll probably be very confused now, and wonder where I want to go with an integral like this. The point to note is simple: if we have that Φ(p) function, we can calculate (or derive, if you prefer that word) the Ψ(x) from it using that integral above. Indeed, the integral above is referred to as the Fourier transform, and it’s obviously closely related to that Fourier analysis we introduced above.

Of course, there is also an inverse transform, which looks exactly the same: it just switches the wave functions (Ψ and Φ) and variables (x and p), and then (it’s an important detail!) it has a minus sign in the exponent. Together, the two functions, as defined by each other through these two integrals, form a so-called Fourier integral pair, also known as a Fourier transform pair, and the variables involved are referred to as conjugate variables. So momentum (p) and position (x) are conjugate variables and, likewise, energy and time are also conjugate variables (but I won’t expand on the energy-time relation here: please have a look at one of my other posts on that).

Now, I thought of copying and explaining the proof of Kennard’s inequality from Wikipedia’s article on the Uncertainty Principle (you need to click on the show button in the relevant section to see it), but then that’s pretty boring math, and simply copying stuff is not my objective with this blog. More importantly, the proof has nothing to do with physics. Nothing at all. Indeed, it just proves a general mathematical property of Fourier pairs. More specifically, it proves that the more concentrated one function is, the more spread out its Fourier transform must be. In other words, it is not possible to arbitrarily concentrate both a function and its Fourier transform.

So, in this case, if we’d ‘squeeze’ Ψ(x), then its Fourier transform Φ(p) will ‘stretch out’, and so that’s what the proof in that Wikipedia article basically shows. In other words, there is some ‘trade-off’ between the ‘compaction’ of Ψ(x), on the one hand, and of Φ(p), on the other, and so that is what the Uncertainty Principle is all about. Nothing more, nothing less.
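We can see this trade-off numerically. The sketch below (in natural units with ħ = 1) takes a Gaussian wave packet Ψ(x) ∝ e^(−x²/(4σx²)), computes its Fourier transform Φ(p) by brute-force numerical integration, and measures the spread of both densities: squeezing σx stretches σp, and the product σx·σp stays at ħ/2 (Gaussians are the limiting case of the Kennard bound):

```python
import numpy as np

hbar = 1.0  # natural units

def sigma_of(q, density):
    # standard deviation of a (not necessarily normalized) density on a grid
    dq = q[1] - q[0]
    norm = density.sum() * dq
    mean = (q * density).sum() * dq / norm
    return np.sqrt(((q - mean) ** 2 * density).sum() * dq / norm)

def spreads(sigma_x):
    x = np.linspace(-40.0, 40.0, 1601)
    psi = np.exp(-x ** 2 / (4 * sigma_x ** 2))      # Gaussian wave packet
    dx = x[1] - x[0]
    p = np.linspace(-8.0, 8.0, 801)
    # Phi(p) = 1/sqrt(2*pi*hbar) * integral of Psi(x) * exp(-i*p*x/hbar) dx
    kernel = np.exp(-1j * p[:, None] * x[None, :] / hbar)
    phi = (kernel * psi[None, :]).sum(axis=1) * dx / np.sqrt(2 * np.pi * hbar)
    return sigma_of(x, np.abs(psi) ** 2), sigma_of(p, np.abs(phi) ** 2)

sx1, sp1 = spreads(1.0)   # a wide packet ...
sx2, sp2 = spreads(0.5)   # ... and a 'squeezed' one

# Squeezing Psi(x) stretches Phi(p), and sigma_x * sigma_p = hbar/2 in both cases:
assert sx2 < sx1 and sp2 > sp1
assert abs(sx1 * sp1 - hbar / 2) < 1e-3 and abs(sx2 * sp2 - hbar / 2) < 1e-3
```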

But… Yes? What’s all this talk about ‘squeezing’ and ‘compaction’? We can’t change reality, can we? Well… Here we’re entering the philosophical field, of course. How do we interpret the Uncertainty Principle? It surely does look like our trying to measure something has some impact on the wavefunction. In fact, our measurement (of either position or momentum) usually makes the wavefunction collapse: we suddenly know where the particle is and, hence, Ψ(x) seems to collapse into one point. Alternatively, we measure its momentum and, hence, Φ(p) collapses.

That’s intriguing. In fact, even more intriguing is the possibility that we may only partially affect those wavefunctions with measurements that are somewhat less ‘drastic’. It seems a lot of research is focused on that (just Google for partial collapse of the wavefunction, and you’ll find tons of references, including presentations like this one).

Hmm… I need to further study the topic. The decomposition of a wave into its component waves is obviously something that works well in physics, and not only in quantum mechanics but also in much more mundane examples. Its most general application is signal processing, in which we decompose a signal (which is a function of time) into the frequencies that make it up. Hence, our wavefunction model makes a lot of sense, as it obviously mirrors the physics involved in oscillators and harmonics.

Still… I feel it doesn’t answer the fundamental question: what is our electron really? What do those wave packets represent? Physicists will say questions like this don’t matter: as long as our mathematical models ‘work’, it’s fine. In fact, if even Feynman said that nobody (including himself) truly understands quantum mechanics, then I should just be happy and move on. However, for some reason, I can’t quite accept that. I should probably focus some more on that de Broglie relation, p = h/λ, as it’s obviously as fundamental to my understanding of the ‘model’ of reality in physics as that Fourier analysis of the wave packet. So I need to do some more thinking on that.

The de Broglie relation is not intuitive. In fact, I am not ashamed to admit that it actually took me quite some time to understand why we can’t just re-write the de Broglie relation (λ = h/p) as an uncertainty relation itself: Δλ = h/Δp. Hence, let me be very clear on this:

Īx = h/Īp (that’s the Uncertainty Principle) butĀ ĪĪ» ā  h/Īp !

Let me quickly explain why.

If theĀ Ī symbol expresses a standard deviation (or some other measurement of uncertainty), we can write the following:

p = h/λ ⇒ Δp = Δ(h/λ) = h·Δ(1/λ) ≠ h/Δλ

So I can take h out of the brackets after the Δ symbol, because that’s one of the things that’s allowed when working with standard deviations. More in particular, one can prove the following:

1. The standard deviation of some constant function is 0: Δ(k) = 0.
2. The standard deviation is invariant under changes of location: Δ(x + k) = Δ(x).
3. Finally, the standard deviation scales directly with the scale of the variable: Δ(kx) = |k|·Δ(x).
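These three properties, and the absence of any such rule for reciprocals, are easy to check numerically on a small hypothetical sample (using the standard deviation as the Δ):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0])   # a small hypothetical sample
k = 3.0

# 1. A constant has zero spread:
assert np.std(np.full(4, k)) == 0.0
# 2. Shifting every value leaves the spread unchanged:
assert np.isclose(np.std(x + k), np.std(x))
# 3. Scaling every value scales the spread by |k|:
assert np.isclose(np.std(-k * x), abs(-k) * np.std(x))
# ... but there is no such rule for reciprocals:
assert not np.isclose(np.std(1.0 / x), 1.0 / np.std(x))
```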

However, it is not the case that Δ(1/x) = 1/Δx. But let’s not focus on what we cannot do with Δx: let’s see what we can do with it. Δx equals h/Δp according to the Uncertainty Principle (if we take it as an equality, rather than as an inequality, that is). And then we have the de Broglie relation: p = h/λ. Hence, Δx must equal:

Īx = h/Īp = h/[Ī(h/Ī»)] =h/[hĪ(1/Ī»)] = 1/Ī(1/Ī»)

Thatās obvious, but so what? As mentioned, we cannot writeĀ Īx = ĪĪ», because thereās no rule that says thatĀ Ī(1/Ī») = 1/ĪĪ» and, therefore, h/Īp ā  ĪĪ». However, what we canĀ do is defineĀ ĪĪ» as an interval, or a length, defined by the difference between its lower and upper bound (let’s denote those two values by Ī»aĀ andĀ Ī»bĀ respectively. Hence, we write ĪĪ» = Ī»bĀ āĀ Ī»a. Note that this doesĀ notĀ assume we have aĀ continuousĀ range of values forĀ Ī»: we can have any number of frequenciesĀ Ī»nĀ between Ī»aĀ and Ī»b, but so you see the point: we’ve got a range of valuesĀ Ī», discrete or continuous, defined by some lower and upper bound.

Now, the de Broglie relation associates two values pa and pb with λa and λb respectively: pa = h/λa and pb = h/λb. Hence, we can similarly define the corresponding Δp interval as pa − pb. Note that, because we’re taking the reciprocal, we have to reverse the order of the values here: if λb > λa, then pa = h/λa > pb = h/λb. Hence, we can write Δp = Δ(h/λ) = pa − pb = h/λa − h/λb = h(1/λa − 1/λb) = h(λb − λa)/(λaλb). In case you have a bit of difficulty, just draw some reciprocal functions (like the ones below), and have fun connecting intervals on the horizontal axis with intervals on the vertical axis using these functions.

Now, h(λb − λa)/(λaλb) is obviously something very different than h/Δλ = h/(λb − λa). So we can surely not equate the two and, hence, we cannot write that Δp = h/Δλ.

Having said that, the Δx = 1/Δ(1/λ) = λaλb/(λb − λa) that emerges here is quite interesting. We’ve got a ratio here, λaλb/(λb − λa), which shows that Δx depends only on the upper and lower bounds of the Δλ range. It does not depend on whether or not the interval is discrete or continuous.
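A small numerical sketch makes both points at once (with arbitrary sample bounds λa = 1 Å and λb = 4 Å): Δp is not h/Δλ, and the Δx = λaλb/(λb − λa) that comes out multiplies with Δp to give back Planck’s constant exactly:

```python
h = 6.62607015e-34                  # Planck's constant, J*s

lam_a, lam_b = 1.0e-10, 4.0e-10     # sample wavelength bounds, m (arbitrary)

p_a = h / lam_a                     # largest momentum (shortest wavelength)
p_b = h / lam_b                     # smallest momentum

delta_p = p_a - p_b                 # = h*(lam_b - lam_a)/(lam_a*lam_b)
delta_lam = lam_b - lam_a

# Delta-p is NOT h/Delta-lambda:
print(delta_p)                      # ~5.0e-24 kg*m/s
print(h / delta_lam)                # ~2.2e-24 kg*m/s: a different number

# But Delta-x = 1/Delta(1/lambda) = lam_a*lam_b/(lam_b - lam_a) ...
delta_x = lam_a * lam_b / (lam_b - lam_a)
# ... multiplies with Delta-p to give back h:
print(delta_x * delta_p)            # = h, up to floating-point rounding
```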

The second thing that is interesting to note is that Δx depends not only on the difference between those two values (i.e. the length of the interval) but also on their magnitude: if the length of the interval, i.e. the difference between the two wavelengths, is the same, but their values as such are higher, then we get a higher value for Δx, i.e. a greater uncertainty in the position. Again, this shows that the relation between Δλ and Δx is not straightforward. But so we knew that already, and so I’ll end this post right here and right now. 🙂

Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here: