**Pre-script** (dated 26 June 2020): This post has become less relevant (even irrelevant, perhaps) because my views on all things quantum-mechanical have evolved significantly as a result of my progression towards a more complete *realist* (classical) interpretation of quantum physics. I keep blog posts like these mainly because I want to keep track of where I came from. I might review them one day, but I currently don’t have the time or energy for it. 🙂

**Original post**:

I’ve written a few posts on the Uncertainty Principle already. See, for example, my post on the energy-time expression for it (ΔE·Δt ≥ h). So why am I coming back to it once more? Not sure. I felt I left some stuff out. So I am writing this post to just *complement* what I wrote before. I’ll do so by explaining, and commenting on, the ‘semi-formal’ derivation of the so-called Kennard formulation of the Principle in the Wikipedia article on it.

The Kennard inequalities, σ_{x}σ_{p} ≥ ħ/2 and σ_{E}σ_{t} ≥ ħ/2, are more accurate than the more general Δx·Δp ≥ h and ΔE·Δt ≥ h expressions one often sees, which are an early formulation of the Principle by Niels Bohr, and which Heisenberg himself used when explaining the Principle in a thought experiment picturing a gamma-ray microscope. I presented Heisenberg’s thought experiment in another post, and so I won’t repeat myself here. I just want to mention that it ‘proves’ the Uncertainty Principle using the Planck-Einstein relations for the energy and momentum of a *photon*:

E = h*f* and p = h/λ

Heisenberg’s thought experiment is not a real proof, of course. But then what’s a real proof? The mentioned ‘semi-formal’ derivation looks more impressive, because more mathematical, but it’s not a ‘proof’ either (I hope you’ll understand why I am saying that after reading my post). The main difference between Heisenberg’s thought experiment and the mathematical derivation in the mentioned Wikipedia article is that the ‘mathematical’ approach is based on the *de Broglie* relation. That *de Broglie* relation looks the same as the Planck-Einstein relation (p = h/λ) but it’s fundamentally different.

Indeed, the momentum of a *photon* (i.e. the p we use in the Planck-Einstein relation) is *not* the momentum one associates with a proper particle, such as an electron or a proton, for example (so that’s the p we use in the *de Broglie* relation). The momentum of a particle is defined as the product of its mass (m) and velocity (*v*). Photons don’t have a (rest) mass, and their velocity is absolute (*c*), so how do we define momentum for a photon? There are a couple of ways to go about it, but the two most obvious ones are probably the following:

- We can use the classical theory of electromagnetic radiation and show that the momentum of a photon is related to the *magnetic* field (we usually only analyze the electric field), and the so-called radiation pressure that results from it. It yields the p = E/*c* formula, which we need to go from E = h*f* to p = h/λ, using the ubiquitous relation between the frequency, the wavelength and the wave velocity (*c* = λ*f*). In case you’re interested in the detail, just click on the radiation pressure link.
- We can also use the mass-energy equivalence E = m*c*^{2}. Hence, the *equivalent mass* of the photon is E/*c*^{2}, which is *relativistic mass* only. However, we can multiply that mass with the photon’s velocity, which is *c*, thereby getting the very same value for its momentum: p = *c*·E/*c*^{2} = E/*c*.
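If you want to see that equivalence in numbers, here’s a quick sanity check in Python. The frequency is just an illustrative visible-light value I picked, and the constants are rounded:

```python
import math

h = 6.626e-34          # Planck's constant, J·s (rounded)
c = 2.998e8            # speed of light, m/s (rounded)

f = 5e14               # an illustrative visible-light frequency, Hz
E = h * f              # Planck-Einstein: photon energy
lam = c / f            # c = λf

p_from_wavelength = h / lam      # p = h/λ
p_from_energy = E / c            # p = E/c (the radiation-pressure route)
p_from_mass = (E / c**2) * c     # equivalent mass E/c², times velocity c

# all three routes give the same momentum:
print(p_from_wavelength, p_from_energy, p_from_mass)
```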

So Heisenberg’s ‘proof’ uses the Planck-Einstein relations, as it analyzes the Uncertainty Principle more as an observer effect: probing matter with light, so to say. In contrast, the mentioned derivation takes the *de Broglie* relation itself as the point of departure. As mentioned, the *de Broglie* relations *look* exactly the same as the Planck-Einstein relations (E = h*f* and p = h/λ) but the model behind them is *very* different. In fact, that’s what the Uncertainty Principle is all about: it says that the *de Broglie* frequency and/or wavelength cannot be determined *exactly*: if we want to localize a particle, somewhat at least, we’ll be dealing with a frequency *range* Δ*f*. As such, the *de Broglie* relation is actually somewhat misleading at first. Let’s talk about the model behind it.

A particle, like an electron or a proton, traveling through space, is described by a complex-valued wavefunction, usually denoted by the Greek letter *psi* (Ψ) or *phi* (Φ). This wavefunction has a phase, usually denoted as θ (*theta*), which, because we assume the wavefunction is a nice periodic function, varies as a function of time and space. To be precise, we write θ as θ = ωt − kx or, if the wave is traveling in the other direction, as θ = kx − ωt.

I’ve explained this in a couple of posts already, including my previous post, so I won’t repeat myself here. Let me just note that ω is the *angular* frequency, which we express in radians per second, rather than cycles per second, so ω = 2π*f* (one cycle covers 2π *rad*). As for k, that’s the wavenumber, which is often described as the *spatial* frequency, because it’s expressed in cycles per meter or, more often (and surely in this case), in radians per meter. Hence, if we freeze time, this number is the rate of change of the phase *in space*. Because one cycle is, again, 2π *rad*, and one cycle corresponds to the wave traveling one wavelength (i.e. λ meter), it’s easy to see that k = 2π/λ. We can use these definitions to re-write the *de Broglie* relations E = h*f* and p = h/λ as:

E = ħω and p = ħk, with ħ = h/2π
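Numerically, the two ways of writing each relation are, of course, one and the same thing. A quick check (the wavelength and frequency below are arbitrary illustrative values, not those of any particular particle):

```python
import math

h = 6.626e-34             # Planck's constant, J·s (rounded)
hbar = h / (2 * math.pi)  # ħ = h/2π

lam = 1e-9                # an illustrative de Broglie wavelength, m
f = 1e15                  # an illustrative frequency, Hz

k = 2 * math.pi / lam     # wavenumber, rad/m
omega = 2 * math.pi * f   # angular frequency, rad/s

print(h / lam, hbar * k)    # p = h/λ equals p = ħk
print(h * f, hbar * omega)  # E = hf equals E = ħω
```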

What about the wave velocity? For a photon, we have *c* = λ*f* and, hence, *c* = (2π/k)(ω/2π) = ω/k. For ‘particle waves’ (or matter waves, if you prefer that term), it’s much more complicated, because we need to distinguish between the so-called phase velocity (*v*_{p}) and the group velocity (*v*_{g}). The phase velocity is what we’re used to: it’s the product of the frequency (the number of cycles per second) and the wavelength (the distance traveled by the wave over one cycle), or the ratio of the angular frequency and the wavenumber, so we have, once again, λ*f* = ω/k = *v*_{p}. However, this *phase* velocity is *not* the classical velocity of the particle that we are looking at. That’s the so-called *group* velocity, which corresponds to the velocity of the *wave packet* representing the particle (or ‘wavicle’, if you prefer that term), as illustrated below.

The animation below illustrates the difference between the phase and the group velocity even more clearly: the green dot travels with the ‘wavicles’, while the red dot travels with the phase. As mentioned above, the group velocity corresponds to the classical velocity of the particle (*v*). However, the phase velocity is a mathematical point that actually travels *faster* than light. It is a *mathematical* point only, which does not carry a *signal* (unlike the modulation of the wave itself, i.e. the traveling ‘groups’) and, hence, it does not contradict the fundamental principle of relativity theory: the speed of light is absolute, and nothing travels faster than light (except mathematical points, as you can, hopefully, appreciate now).
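I can make that faster-than-light claim concrete with a little calculation. Assuming the relativistic convention E = ħω and p = ħk (so E includes the rest energy), the phase velocity is ω/k = E/p = c²/v, while the group velocity dω/dk = dE/dp comes out as the classical velocity v. A sketch, using an illustrative electron velocity:

```python
import math

c = 2.998e8        # speed of light, m/s
m = 9.109e-31      # electron mass, kg
v = 2.2e6          # an illustrative (classical) electron velocity, m/s

def energy(p):     # relativistic energy E(p) = sqrt((pc)² + (mc²)²)
    return math.sqrt((p * c)**2 + (m * c**2)**2)

gamma = 1 / math.sqrt(1 - (v / c)**2)
p = gamma * m * v                  # relativistic momentum

v_phase = energy(p) / p            # ω/k = E/p = c²/v
print(v_phase > c)                 # True: the phase outruns light

dp = p * 1e-3                      # numerical derivative dE/dp
v_group = (energy(p + dp) - energy(p - dp)) / (2 * dp)
print(abs(v_group - v) / v)        # tiny: the group velocity is just v
```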

The two animations above do *not* represent the quantum-mechanical wavefunction, because the functions that are shown are real-valued, not complex-valued. To imagine a *complex-valued* wave, you should think of something like the ‘wavicle’ below or, if you prefer animations, the standing waves underneath (i.e. C to H: A and B just present the mathematical model behind, which is that of a mechanical oscillator, like a mass on a spring indeed). These representations clearly show the real as well as the imaginary part of complex-valued wavefunctions.

With this general introduction, we are now ready for the more formal treatment that follows. So our wavefunction Ψ is a complex-valued function in space and time. A very general shape for it is one we used in a couple of posts already:

Ψ(x, t) ∝ *e*^{i(kx − ωt)} = cos(kx − ωt) + *i*·sin(kx − ωt)

If you don’t know anything about complex numbers, I’d suggest you read my short crash course on them on the essentials page of this blog, because I have neither the space nor the time to repeat all of that. Now, we can use the *de Broglie* relation relating the momentum of a particle with a wavenumber (p = ħk) to re-write our *psi* function as:

Ψ(x, t) ∝ *e*^{i(kx − ωt)} = *e*^{i(px/ħ − ωt)}

Note that I am using the ‘proportional to’ symbol (∝) because I don’t worry about normalization right now. Indeed, from all of my other posts on this topic, you know that we have to take the absolute square of all these *probability amplitudes* to arrive at a *probability density* function, describing the probability of the particle effectively *being* at point x in space at point t in time, and that all those probabilities, over the function’s domain, have to add up to 1. So we should insert some normalization factor.

Having said that, the problem with the wavefunction above is not normalization really, but the fact that it yields a uniform probability density function. In other words, the particle position is extremely uncertain in the sense that it could be anywhere. Let’s calculate it using a little trick: the absolute square of a complex number equals the product of itself with its (complex) conjugate. Hence, if z = r*e*^{iθ}, then |z|^{2} = zz* = r*e*^{iθ}·r*e*^{−iθ} = r^{2}*e*^{iθ−iθ} = r^{2}*e*^{0} = r^{2}. Now, in this case, assuming unique values for k, ω, p, which we’ll note as k_{0}, ω_{0}, p_{0} (and, because we’re freezing time, we can also write t = t_{0}), we should write:

|Ψ(x)|^{2} = |a_{0}*e*^{i(p_{0}x/ħ − ω_{0}t_{0})}|^{2} = |a_{0}*e*^{ip_{0}x/ħ}*e*^{−iω_{0}t_{0}}|^{2} = |a_{0}*e*^{ip_{0}x/ħ}|^{2}·|*e*^{−iω_{0}t_{0}}|^{2} = a_{0}^{2}
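You can let a computer confirm that little trick: for a single-mode wave, the squared magnitude comes out as a_{0}^{2} no matter what x is. All the parameter values below are arbitrary (and I set ħ = 1 for convenience):

```python
import cmath
import random

a0 = 0.3
hbar = 1.0                       # natural units, for illustration
p0, omega0, t0 = 2.0, 5.0, 1.0   # arbitrary illustrative values

for _ in range(5):
    x = random.uniform(-10.0, 10.0)
    psi = a0 * cmath.exp(1j * (p0 * x / hbar - omega0 * t0))
    # |Ψ|² = Ψ·Ψ* = a0², wherever the particle is probed:
    assert abs(abs(psi)**2 - a0**2) < 1e-12

print("squared amplitude =", a0**2, "for every x")
```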

Note that, this time around, I did insert some normalization constant a_{0} as well, so that’s OK. But the problem remains: this very general shape of the wavefunction gives us a constant as the probability for the particle being somewhere between some point a and another point b in space. More formally, we get the area of a rectangle when we calculate the probability P[a ≤ X ≤ b] as we should calculate it, which is as follows:

More specifically, because we’re talking one-dimensional space here, we get P[a ≤ X ≤ b] = (b−a)·a_{0}^{2}. Now, you may think that such uniform probability makes sense. For example, an electron may be in some orbital around a nucleus, and so you may think that all ‘points’ on the orbital (or within the ‘sphere’, or whatever volume it is) may be equally likely. Or, in another example, we may know an electron is going through some slit and, hence, we may think that all points in that slit should be equally likely positions. However, we *know* that it is *not* the case. Measurements show that *not* all points are equally likely. For an orbital, we get complicated patterns, such as the one shown below, and please note that *the different colors represent different complex numbers and, hence, different probabilities*.

Also, we know that electrons going through a slit will produce an interference pattern, *even if they go through it one by one!* Hence, we cannot associate some flat line with them: it *has to* be a proper wavefunction which implies, once again, that we can’t accept a uniform distribution.

In short, uniform probability density functions are *not* what we see in Nature. They’re *non*-uniform, like the (very simple) non-uniform distributions shown below. [The left-hand side shows the wavefunction, while the right-hand side shows the associated probability density function: the first two are static (i.e. they do *not* vary in time), while the third one shows a probability distribution that does vary with time.]

I should also note that, even if you would dare to think that a uniform distribution might be acceptable in some cases (which, let me emphasize this, it is *not*), an electron can surely *not* be ‘anywhere’. Indeed, the normalization condition implies that, if we’d have a uniform distribution and if we’d consider all of space, i.e. if we let a go to −∞ and b to +∞, then a_{0}^{2} would tend to zero, which means we’d have a particle that is, literally, everywhere and nowhere at the same time.

In short, a uniform probability distribution does *not* make sense: we’ll generally have *some* idea of where the particle is *most likely to be*, within some *range* at least. I hope I made myself clear here.

Now, before I continue, I should make some other point as well. You know that the Planck constant (h or ħ) is unimaginably small: about 1×10^{−34} J·s (joule-second). In fact, I’ve repeatedly made that point in various posts. However, having said that, I should add that, while it’s unimaginably small, the uncertainties involved are quite significant. Let us indeed look at the value of ħ by relating it to that σ_{x}σ_{p} ≥ ħ/2 relation.

Let’s first look at the units. The uncertainty in the position should obviously be expressed in distance units, while momentum is expressed in kg·m/s units. So that works out, because 1 joule is the energy transferred (or work done) when applying a force of 1 newton (N) over a distance of 1 meter (m). In turn, one newton is the force needed to accelerate a mass of one kg at the rate of 1 meter per second per second (this is not a typing mistake: it’s an acceleration of 1 m/s *per second*, so the unit is m/s^{2}: meter per second *squared*). Hence, 1 J·s = 1 N·m·s = 1 kg·m/s^{2}·m·s = kg·m^{2}/s. Now, that’s the same dimension as the ‘dimensional product’ for momentum and distance: m·kg·m/s = kg·m^{2}/s.

Now, these units (kg, m and s) are all rather astronomical at the atomic scale and, hence, h and ħ are usually expressed in other dimensions, notably eV·s (electronvolt-second). However, using the standard SI units gives us a better idea of what we’re talking about. If we split the ħ = 1×10^{−34} J·s value (let’s forget about the 1/2 factor for now) ‘evenly’ over σ_{x} and σ_{p} (whatever that means: all depends on the units, of course!), then both factors will have magnitudes of the order of 1×10^{−17}: 1×10^{−17} m times 1×10^{−17} kg·m/s gives us 1×10^{−34} J·s.

You may wonder how this 1×10^{−17} m compares to, let’s say, the classical electron radius, for example. The classical electron radius is, roughly speaking, the ‘space’ an electron seems to occupy as it scatters incoming light. The idea is illustrated below (credit for the image goes to Wikipedia, as usual). The classical electron radius, or Thomson scattering length, is about 2.818×10^{−15} m, so that’s almost 300 times our ‘uncertainty’ (1×10^{−17} m). Not bad: it means that we can effectively relate our ‘uncertainty’ in regard to the position to some actual dimension in space. In this case, we’re talking the *femtometer* scale (1 fm = 10^{−15} m), and so you’ve surely heard of this before.

What about the other ‘uncertainty’, the one for the momentum (1×10^{−17} kg·m/s)? What’s the typical (linear) momentum of an electron? Its mass, expressed in kg, is about 9.1×10^{−31} kg. We also know its relative velocity in an atom: it’s that magical number α = *v*/*c*, about which I wrote in some other posts already, so *v* = α*c* ≈ 0.0073·3×10^{8} m/s ≈ 2.2×10^{6} m/s. Now, 9.1×10^{−31} kg times 2.2×10^{6} m/s is about 2×10^{−24} kg·m/s, so our proposed ‘uncertainty’ in regard to the momentum (1×10^{−17} kg·m/s) is some five million times larger than the typical value for it. Now that is, obviously, *not* so good. [Note that calculations like this are *extremely* rough. In fact, when one talks electron momentum, it’s usually angular momentum, which is ‘analogous’ to linear momentum, but angular momentum involves very different formulas. If you want to know more about this, check my post on it.]

Of course, now you may feel that we didn’t ‘split’ the uncertainty in a way that makes sense: those −17 exponents don’t work, obviously. So let’s take 1×10^{−24} kg·m/s for σ_{p}, which is half of that ‘typical’ value we calculated. Then we’d have 1×10^{−10} m for σ_{x} (1×10^{−10} m times 1×10^{−24} kg·m/s is, once again, 1×10^{−34} J·s). But then *that* uncertainty is of the order of one *angstrom*: that’s the size of an atom itself! So it’s still *huge* as compared to the pico- or femtometer scale (1 pm = 1×10^{−12} m, 1 fm = 1×10^{−15} m) which we’d sort of expect to see when we’re talking electrons.
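For what it’s worth, this back-of-the-envelope arithmetic is easy to re-do in a couple of lines (rounded values throughout; this is order-of-magnitude stuff, not a real calculation):

```python
hbar = 1e-34           # J·s, rounded
m_e = 9.1e-31          # electron mass, kg
alpha = 0.0073         # fine-structure constant
c = 3e8                # speed of light, m/s (rounded)

p = m_e * alpha * c    # typical electron momentum: about 2e-24 kg·m/s
print(p)

# The 'even' split gives a momentum uncertainty that dwarfs that:
print(1e-17 / p)       # millions of times the typical momentum

# A more sensible split takes σ_p equal to half the typical momentum...
sigma_p = p / 2
sigma_x = hbar / sigma_p
print(sigma_x)         # ...which pushes σ_x up to about 1e-10 m: one angstrom
```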

OK. Let me get back to the lesson. Why this digression? Not sure. I think I just wanted to show that the Uncertainty Principle involves ‘uncertainties’ that are extremely relevant: despite the unimaginable smallness of the Planck constant, these uncertainties are quite significant at the atomic scale. But back to the ‘proof’ of Kennard’s formulation. Here we need to discuss the ‘model’ we’re using. The rather simple animation below (again, credit for it has to go to Wikipedia) illustrates it wonderfully.

Look at it carefully: we start with a ‘wave packet’ that looks a bit like a normal distribution, but it isn’t, of course. We have negative and positive values, and normal distributions don’t have that. So it’s a wave alright. Of course, you should, once more, remember that we’re only seeing one part of the complex-valued wave here (the real or imaginary partāit could be either). But so then we’re superimposing waves on it. Note the increasing frequency of these waves, and also note how the wave packet becomes increasingly localized with the addition of these waves. In fact, the so-called *Fourier analysis*, of which you’ve surely heard before, is a mathematical operation that does the reverse: it separates a wave packet into its individual component waves.

So now we know the ‘trick’ for reducing the uncertainty in regard to the position: we just add waves with different frequencies. Of course, different frequencies imply different wavenumbers and, through the *de Broglie* relation, we’ll also have different values for the ‘momentum’ associated with these component waves. Let’s write these various values as k_{n}, ω_{n}, and p_{n} respectively, with n going from 0 to N. Of course, our point in time remains frozen at t_{0}. So we get a wavefunction that’s, quite simply, the sum of N component waves and so we write:

Ψ(x) = ∑ a_{n}*e*^{i(p_{n}x/ħ − ω_{n}t_{0})} = ∑ a_{n}*e*^{ip_{n}x/ħ}*e*^{−iω_{n}t_{0}} = ∑ A_{n}*e*^{ip_{n}x/ħ}
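That sum is easy to play with numerically. The sketch below is my own toy example (ħ = 1, t frozen, and Gaussian weights for the a_{n} coefficients), and it shows the trade-off at work: the wider the spread of wavenumbers you superimpose, the narrower the resulting packet in x:

```python
import numpy as np

x = np.linspace(-50, 50, 4001)

def packet(k_spread, k0=1.0):
    """Superpose plane-wave modes e^{ikx} with Gaussian weights around k0."""
    ks = np.linspace(k0 - 4 * k_spread, k0 + 4 * k_spread, 201)
    weights = np.exp(-(ks - k0)**2 / (2 * k_spread**2))
    return sum(w * np.exp(1j * k * x) for w, k in zip(weights, ks))

def rms_width(psi):
    """RMS width of |ψ|², as a crude measure of localization."""
    prob = np.abs(psi)**2
    prob /= prob.sum()
    mean = (x * prob).sum()
    return np.sqrt((((x - mean)**2) * prob).sum())

w_small_spread = rms_width(packet(k_spread=0.05))  # few wavenumbers...
w_large_spread = rms_width(packet(k_spread=0.5))   # ...versus many

print(w_small_spread, w_large_spread)  # the second packet is much narrower
```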

Note that, because of the *e*^{−iω_{n}t_{0}} factor, we now have *complex-valued* coefficients A_{n} = a_{n}*e*^{−iω_{n}t_{0}} in front. More formally, we say that A_{n} represents the *relative* contribution of the mode p_{n} to the overall Ψ(x) wave. Hence, we can write these coefficients A as a function of p. Because Greek letters always make more of an impression, we’ll use the Greek letter Φ (*phi*) for it. 🙂 Now, we can go to the continuum limit and, hence, transform that sum above into an infinite sum, i.e. an integral. So our wave function then becomes an integral over all possible modes, which we write as:

Don’t worry about that new 1/√(2πħ) factor in front. That’s, once again, something that has to do with normalization and scales. It’s the integral itself you need to understand. We’ve got that Φ(p) function there, which is nothing but our A_{n} coefficient, but for the continuum case. In fact, these relative contributions Φ(p) are now referred to as the *amplitude* of all modes p, and so Φ(p) is actually another wave function: it’s the *wave function* in the so-called *momentum space*.

You’ll probably be very confused now, and wonder where I want to go with an integral like this. The point to note is simple: if we have that Φ(p) function, we can *calculate* (or derive, if you prefer that word) the Ψ(x) from it using that integral above. Indeed, the integral above is referred to as the *Fourier transform*, and it’s obviously closely related to that Fourier *analysis* we introduced above.
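In fact, we can see the transform and the trade-off between the two spaces at work with a discrete Fourier transform (ħ = 1 again, my own sketch): a Gaussian Ψ(x) sits exactly at the Kennard bound σ_{x}σ_{p} = 1/2, and squeezing it in x stretches its transform in p.

```python
import numpy as np

N = 4096
x = np.linspace(-40, 40, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # the conjugate (momentum) grid, ħ = 1

def widths(sigma):
    """Return σ_x and σ_p for a Gaussian wave packet of width sigma."""
    psi = np.exp(-x**2 / (4 * sigma**2))  # chosen so |Ψ|² has std dev sigma
    phi = np.fft.fft(psi)                 # the momentum-space wavefunction
    def rms(grid, amp):
        prob = np.abs(amp)**2
        prob /= prob.sum()
        return np.sqrt((grid**2 * prob).sum())
    return rms(x, psi), rms(p, phi)

sx1, sp1 = widths(1.0)
sx2, sp2 = widths(0.25)   # squeezed in x...

print(sx1 * sp1)          # ≈ 0.5: the Kennard bound, saturated
print(sp2 > sp1)          # True: ...hence stretched out in p
```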

Of course, there is also an *inverse* transform, which looks exactly the same: it just switches the wave functions (Ψ and Φ) and variables (x and p), and then (it’s an important detail!) it has a *minus* sign in the exponent. Together, the two functions, *as defined by each other through these two integrals*, form a so-called *Fourier integral pair*, also known as a *Fourier transform pair*, and the variables involved are referred to as *conjugate variables*. So momentum (p) and position (x) are conjugate variables and, likewise, energy and time are also conjugate variables (but I won’t expand on the time-energy relation here: please have a look at one of my other posts on that).

Now, I thought of copying and explaining the proof of Kennard’s inequality from Wikipedia’s article on the Uncertainty Principle (you need to click on the *show* button in the relevant section to see it), but then that’s pretty boring math, and simply copying stuff is not my objective with this blog. More importantly, the proof has nothing to do with physics. Nothing at all. Indeed, it just proves a general *mathematical* property of Fourier pairs. More specifically, it proves that, **the more concentrated one function is, the more spread out its Fourier transform must be**. In other words, **it is not possible to arbitrarily concentrate both a function and its Fourier transform**.

So, in this case, if we’d ‘squeeze’ Ψ(x), then its Fourier transform Φ(p) will ‘stretch out’, and so that’s what the proof in that Wikipedia article basically shows. In other words, there is some ‘trade-off’ between the ‘compaction’ of Ψ(x), on the one hand, and Φ(p), on the other, and so *that* is what the Uncertainty Principle is all about. Nothing more, nothing less.

But… *Yes?* What’s all this talk about ‘squeezing’ and ‘compaction’? We can’t change reality, can we? Well… Here we’re entering the philosophical field, of course. How do we *interpret* the Uncertainty Principle? It surely does look like us trying to *measure* something has some impact on the wavefunction. In fact, our measurement, of *either* position *or* momentum, usually makes the wavefunction *collapse*: we suddenly *know* where the particle is and, hence, Ψ(x) seems to collapse into one point. Alternatively, we measure its momentum and, hence, Φ(p) collapses.

That’s intriguing. In fact, even more intriguing is the possibility we may only *partially* affect those wavefunctions with measurements that are somewhat less ‘drastic’. It seems a lot of research is focused on that (just *Google* for partial collapse of the wavefunction, and you’ll find tons of references, including presentations like this one).

Hmm… I need to further study the topic. The decomposition of a wave into its component waves is obviously something that works well in physics, and not only in quantum mechanics but also in much more mundane examples. Its most general application is signal processing, in which we decompose a *signal* (which is a function of time) into the frequencies that make it up. Hence, our wavefunction model makes a lot of sense, as it obviously mirrors the physics involved in oscillators and harmonics.

Still… I feel it doesn’t answer the fundamental question: what *is* our electron really? What do those wave packets represent? Physicists will say questions like this don’t matter: as long as our mathematical models ‘work’, it’s fine. In fact, if even Feynman said that nobody, including himself, truly *understands* quantum mechanics, then I should just be happy and move on. However, for some reason, I can’t quite accept that. I should probably focus some more on that *de Broglie* relation, p = h/λ, as it’s obviously as fundamental to my understanding of the ‘model’ of reality in physics as that Fourier analysis of the wave packet. So I need to do some more thinking on that.

The *de Broglie* relation is not intuitive. In fact, I am not ashamed to admit that it actually took me quite some time to understand why we can’t just re-write the *de Broglie* relation (λ = h/p) as an uncertainty relation itself: Δλ = h/Δp. Hence, let me be very clear on this:

Δx = h/Δp (that’s the Uncertainty Principle) but **Δλ ≠ h/Δp!**

Let me quickly explain why.

If the Δ symbol expresses a standard deviation (or some other measurement of uncertainty), we can write the following:

p = h/λ ⇒ Δp = Δ(h/λ) = h·Δ(1/λ) ≠ h/Δλ

So I can take h out of the brackets after the Δ symbol, because that’s one of the things that’s allowed when working with standard deviations. In particular, one can prove the following:

- The standard deviation of some constant function is 0: Δ(k) = 0
- The standard deviation is invariant under changes of location: Δ(x + k) = Δ(x)
- Finally, the standard deviation scales directly with the scale of the variable: Δ(kx) = |k|·Δ(x).
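These properties are easy to check with any list of numbers. Note how the last line also previews the point of this section: the reciprocal does *not* pass through the standard deviation:

```python
import statistics as st

xs = [1.0, 2.0, 4.0, 8.0]   # arbitrary sample values
k = 3.0

sd = st.pstdev              # (population) standard deviation

print(sd([k] * len(xs)))                                  # 0: a constant has no spread
print(abs(sd([v + k for v in xs]) - sd(xs)) < 1e-12)      # True: shift-invariant
print(abs(sd([k * v for v in xs]) - k * sd(xs)) < 1e-12)  # True: scales with |k|

# But Δ(1/x) is NOT 1/Δx:
print(sd([1 / v for v in xs]), 1 / sd(xs))                # two different numbers
```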

However, it is *not* the case that Δ(1/x) = 1/Δx. Still, let’s not focus on what we can*not* do with Δx: let’s see what we *can* do with it. Δx equals h/Δp according to the Uncertainty Principle, if we take it as an equality, rather than as an *in*equality, that is. And then we have the *de Broglie* relation: p = h/λ. Hence, Δx must equal:

Δx = h/Δp = h/[Δ(h/λ)] = h/[h·Δ(1/λ)] = 1/Δ(1/λ)

That’s obvious, but so what? As mentioned, we cannot write Δx = Δλ, because there’s no rule that says that Δ(1/λ) = 1/Δλ and, therefore, h/Δp ≠ Δλ. However, what we *can* do is define Δλ as an interval, or a length, defined by the difference between its lower and upper bound (let’s denote those two values by λ_{a} and λ_{b} respectively). Hence, we write Δλ = λ_{b} − λ_{a}. Note that this does *not* assume we have a *continuous* range of values for λ: we can have any number of wavelengths λ_{n} between λ_{a} and λ_{b}, but so you see the point: we’ve got a range of values λ, discrete or continuous, defined by some lower and upper bound.

Now, the *de Broglie* relation associates two values p_{a} and p_{b} with λ_{a} and λ_{b} respectively: p_{a} = h/λ_{a} and p_{b} = h/λ_{b}. Hence, we can similarly define the corresponding Δp interval as p_{a} − p_{b}. Note that, because we’re taking the reciprocal, we have to reverse the order of the values here: if λ_{b} > λ_{a}, then p_{a} = h/λ_{a} > p_{b} = h/λ_{b}. Hence, we can write Δp = Δ(h/λ) = p_{a} − p_{b} = h/λ_{a} − h/λ_{b} = h(1/λ_{a} − 1/λ_{b}) = h(λ_{b} − λ_{a})/(λ_{a}λ_{b}). In case you have a bit of difficulty, just draw some reciprocal functions (like the ones below), and have fun connecting intervals on the horizontal axis with intervals on the vertical axis using these functions.

Now, h(λ_{b} − λ_{a})/(λ_{a}λ_{b}) is obviously something *very* different than h/Δλ = h/(λ_{b} − λ_{a}). So we can surely not equate the two and, hence, we can*not* write that Δp = h/Δλ.

Having said that, the Δx = 1/Δ(1/λ) = λ_{a}λ_{b}/(λ_{b} − λ_{a}) that emerges here is quite interesting. **We’ve got a ratio here, λ_{a}λ_{b}/(λ_{b} − λ_{a}), which shows that Δx depends only on the upper and lower bounds of the Δλ range. It does not depend on whether or not the interval is discrete or continuous.**

The second thing that is interesting to note is that Δx depends not only on the *difference* between those two values (i.e. the length of the interval) but also on their values as such: if the length of the interval, i.e. the difference between the two wavelengths, is the same, but the wavelengths themselves are higher, then we get a higher value for Δx, i.e. a greater uncertainty in the position. Again, this shows that the relation between Δλ and Δx is *not* straightforward. But so we knew that already, and so I’ll end this post right here and right now. 🙂
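To wrap up, here’s the whole argument of this last section in a few lines of Python: Δp computed from the interval bounds differs from h/Δλ, and Δx depends on the bounds themselves, not just on their difference. The wavelength ranges are arbitrary illustrative values:

```python
h = 6.626e-34   # Planck's constant, J·s

def delta_p(lam_a, lam_b):
    """Δp = p_a − p_b = h/λa − h/λb = h(λb − λa)/(λa·λb)."""
    return h / lam_a - h / lam_b

def delta_x(lam_a, lam_b):
    """Δx = 1/Δ(1/λ) = λa·λb/(λb − λa)."""
    return lam_a * lam_b / (lam_b - lam_a)

lam_a, lam_b = 1e-10, 2e-10          # an illustrative wavelength range, m

print(delta_p(lam_a, lam_b))         # is NOT equal to h/Δλ...
print(h / (lam_b - lam_a))           # ...which is a different number (here, twice as big)

# Same interval length Δλ, higher wavelengths → larger Δx:
print(delta_x(1e-10, 2e-10) < delta_x(5e-10, 6e-10))   # True
```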

Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here: