The analysis of a two-state system (i.e. the rather famous example of an ammonia molecule ‘flipping’ its nitrogen atom from ‘up’ to ‘down’, or vice versa) in my previous post is a good opportunity to think about Occam’s Razor once more. What are we doing? What does the math tell us?
In the example we chose, we didn’t need to worry about space. It was all about time: an evolving state over time. We also knew the answers we wanted to get: if there is some probability for the system to ‘flip’ from one state to another, we know it will, at some point in time. We also want probabilities to add up to one, so we knew the graph below had to be the result we would find: if our molecule can be in two states only, and it starts off in one, then the probability that it will remain in that state will gradually decline, while the probability that it flips into the other state will gradually increase, which is what is depicted below.

However, the graph above is only a Platonic idea: we don’t bother to actually verify what state the molecule is in. If we did, we’d have to ‘re-set’ our t = 0 point, and start all over again. The wavefunction would collapse, as they say, because we’ve made a measurement. However, having said that, yes, in the physicist’s Platonic world of ideas, the probability functions above make perfect sense. They are beautiful. You should note, for example, that P1 (i.e. the probability to be in state 1) and P2 (i.e. the probability to be in state 2) add up to 1 all of the time, so we don’t need to integrate over a cycle or something: it’s all perfect!
These probability functions are based on ideas that are even more Platonic: interfering amplitudes. Let me explain.
Quantum physics is based on the idea that these probabilities are determined by some wavefunction, a complex-valued amplitude that varies in time and space. It’s a two-dimensional thing, and then it’s not. It’s two-dimensional because it combines a sine and cosine, i.e. a real and an imaginary part, but the argument of the sine and the cosine is the same, and the sine and cosine are the same function, except for a phase shift equal to π/2. We write:
a·e^(−iθ) = a·cos(−θ) + i·a·sin(−θ) = a·cosθ − i·a·sinθ
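If you want to see Euler’s formula at work rather than just stare at it, here is a minimal numerical check, a Python sketch with an arbitrary amplitude and phase, just to fix ideas:

```python
import numpy as np

a, theta = 2.0, 0.7                                # arbitrary amplitude and phase
lhs = a * np.exp(-1j * theta)                      # a·e^(−iθ)
rhs = a * np.cos(theta) - 1j * a * np.sin(theta)   # a·cosθ − i·a·sinθ

print(np.isclose(lhs, rhs))   # True: the two expressions are the same complex number
print(abs(lhs)**2)            # 4.0 = a²: the absolute square does not depend on θ
```

Note that the last line already hints at the problem we will run into below: a single complex exponential has a constant magnitude.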
The minus sign is there because it turns out that Nature measures angles, i.e. our phase, clockwise, rather than counterclockwise, so that’s not as per our mathematical convention. But that’s a minor detail, really. [It should give you some food for thought, though.] For the rest, the related graph is as simple as the formula:

Now, the phase of this wavefunction is written as θ = (ω·t − k∙x). Hence, ω determines how this wavefunction varies in time, and the wavevector k tells us how this wave varies in space. The young Frenchman Comte Louis de Broglie noted the mathematical similarity between the ω·t − k∙x expression and Einstein’s four-vector product pμxμ = E·t − p∙x, which remains invariant under a Lorentz transformation. He also understood that the Planck-Einstein relation E = ħ·ω actually defines the energy unit and, therefore, that any frequency, any oscillation really, in space or in time, is to be expressed in terms of ħ.
[To be precise, the fundamental quantum of energy is h = ħ·2π, because that’s the energy of one cycle. To illustrate the point, think of the Planck-Einstein relation. It gives us the energy of a photon with frequency f: Eγ = h·f. If we re-write this equation as Eγ/f = h, and we do a dimensional analysis, we get: h = Eγ/f ≈ 6.626×10⁻³⁴ joule·second = [x joule]/[f cycles per second] ⇔ h ≈ 6.626×10⁻³⁴ joule per cycle. It’s only because we are expressing ω and k as angular frequencies (i.e. in radians per second or per meter, rather than in cycles per second or per meter) that we have to think of ħ = h/2π rather than h.]
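To put a number on that, here is a quick back-of-the-envelope calculation in Python, using a visible-light frequency of 5×10¹⁴ Hz as a purely illustrative choice:

```python
h = 6.626e-34            # Planck's constant, in joule·second (i.e. joule per cycle per hertz)
hbar = h / (2 * 3.141592653589793)   # ħ = h/2π, for angular frequencies

f = 5e14                 # frequency of a visible-light photon, cycles per second (illustrative)
E = h * f                # Planck-Einstein relation: E = h·f
print(E)                 # ≈ 3.3×10⁻¹⁹ joule
print(E / 1.6e-19)       # ≈ 2 eV, the right order of magnitude for visible light
print(E / f)             # = h again: so many joule per cycle, as the dimensional analysis says
```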
Louis de Broglie connected the dots between some other equations too. He was fully familiar with the equations determining the phase and group velocity of composite waves, or a wavetrain that actually might represent a wavicle traveling through spacetime. In short, he boldly equated ω with ω = E/ħ and k with k = p/ħ, and all came out alright. It made perfect sense!
I’ve written enough about this. What I want to write about here is how this also applies to the situation at hand: a simple two-state system that depends on time only. So its phase is θ = ω·t = (E0/ħ)·t. What’s E0? It is the total energy of the system, including the equivalent energy of the particle’s rest mass and any potential energy that may be there because of the presence of one or the other force field. What about kinetic energy? Well… We said it: in this case, there is no translational or linear momentum, so p = 0. So our Platonic wavefunction reduces to:
a·e^(−iθ) = a·e^(−(i/ħ)·E0·t)
Great! […] But… Well… No! The problem with this wavefunction is that it yields a constant probability. To be precise, when we take the absolute square of this wavefunction (which is what we do when calculating a probability from a wavefunction) we get P = a², always. The ‘normalization’ condition (so that’s the condition that probabilities have to add up to one) implies that P1 = P2 = a² = 1/2. Makes sense, you’ll say, but the problem is that this doesn’t reflect reality: these probabilities do not evolve over time and, hence, our ammonia molecule never ‘flips’ from ‘up’ to ‘down’, or vice versa. In short, our wavefunction does not explain reality.
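You can see the problem in a few lines of code. This is just a sketch, with an arbitrary value for E0:

```python
import numpy as np

hbar = 1.054571817e-34                    # reduced Planck constant, J·s
E0 = 1.0e-20                              # some total energy, J (arbitrary, illustrative value)
a = np.sqrt(0.5)                          # normalization: a² = 1/2

t = np.linspace(0, 1e-13, 1000)           # a range of times, s
psi = a * np.exp(-1j * (E0 / hbar) * t)   # a·e^(−(i/ħ)·E0·t)

P = np.abs(psi)**2
print(P.min(), P.max())                   # both 0.5: the probability never changes, so nothing ever flips
```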
The problem is not unlike the problem we’d had with a similar function relating the momentum and the position of a particle. You’ll remember it: we wrote it as a·e^(−iθ) = a·e^((i/ħ)·p·x). [Note that we can write a·e^(−iθ) = a·e^(−(i/ħ)·(E0·t − p·x)) = a·e^(−(i/ħ)·E0·t)·e^((i/ħ)·p·x), so we can always split our wavefunction in a ‘time’ and a ‘space’ part.] But then we found that this wavefunction also yielded a constant and equal probability all over space, which implies our particle is everywhere (and, therefore, nowhere, really).
In quantum physics, this problem is solved by introducing uncertainty. Introducing some uncertainty about the energy, or about the momentum, is mathematically equivalent to saying that we’re actually looking at a composite wave, i.e. the sum of a finite or infinite set of component waves. So we have the same ω = E/ħ and k = p/ħ relations, but we apply them to n energy levels, or to some continuous range of energy levels ΔE. It amounts to saying that our wave function doesn’t have a specific frequency: it now has n frequencies, or a range of frequencies Δω = ΔE/ħ.
We know what that does: it ensures our wavefunction is being ‘contained’ in some ‘envelope’. It becomes a wavetrain, or a kind of beat note, as illustrated below:

[The animation also shows the difference between the group and phase velocity: the green dot shows the group velocity, while the red dot travels at the phase velocity.]
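If you want to generate a beat note like that yourself, the simplest sketch adds just two component waves with slightly different frequencies (the numbers below are arbitrary):

```python
import numpy as np

w1, w2 = 1.0, 1.1                     # two (angular) frequencies, arbitrary units
t = np.linspace(0, 200, 4000)

psi = 0.5 * np.exp(-1j * w1 * t) + 0.5 * np.exp(-1j * w2 * t)

# The fast oscillation sits inside a slowly varying envelope: |ψ| = |cos((w2 − w1)·t/2)|
envelope = np.abs(np.cos((w2 - w1) * t / 2))
print(np.allclose(np.abs(psi), envelope))   # True
```

That slowly varying magnitude is exactly the ‘envelope’ effect we will exploit below for the ammonia molecule.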
This begs the following question: what’s the uncertainty really? Is it an uncertainty in the energy, or is it an uncertainty in the wavefunction? I mean: we have a function relating the energy to a frequency. Introducing some uncertainty about the energy is mathematically equivalent to introducing uncertainty about the frequency. Of course, the answer is: the uncertainty is in both, so it’s in the frequency and in the energy and both are related through the wavefunction. So… Well… Yes. In some way, we’re chasing our own tail. 🙂
However, the trick does the job, and perfectly so. Let me summarize what we did in the previous post: we had the ammonia molecule, i.e. an NH3 molecule, with the nitrogen ‘flipping’ across the hydrogens from time to time, as illustrated below:

This ‘flip’ requires energy, which is why we associate two energy levels with the molecule, rather than just one. We wrote these two energy levels as E0 + A and E0 − A. That assumption solved all of our problems. [Note that we don’t specify what the energy barrier really consists of: moving the center of mass obviously requires some energy, but it is likely that a ‘flip’ also involves overcoming some electrostatic forces, as shown by the reversal of the electric dipole moment in the illustration above.] To be specific, it gave us the following wavefunctions for the amplitude to be in the ‘up’ or ‘1’ state versus the ‘down’ or ‘2’ state respectively:
- C1 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) + (1/2)·e^(−(i/ħ)·(E0 + A)·t)
- C2 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) − (1/2)·e^(−(i/ħ)·(E0 + A)·t)
Both are composite waves. To be precise, they are the sum of two component waves with a temporal frequency equal to ω1 = (E0 − A)/ħ and ω2 = (E0 + A)/ħ respectively. [As for the minus sign in front of the second term in the wave equation for C2, −1 = e^(±iπ), so + (1/2)·e^(−(i/ħ)·(E0 + A)·t) and − (1/2)·e^(−(i/ħ)·(E0 + A)·t) are the same wavefunction: they only differ because their relative phase is shifted by ±π.] So the so-called base states of the molecule themselves are associated with two different energy levels: it’s not like one state has more energy than the other.
You’ll say: so what?
Well… Nothing. That’s it really. That’s all I wanted to say here. The absolute square of those two wavefunctions gives us those time-dependent probabilities above, i.e. the graph we started this post with. So… Well… Done!
You’ll say: where’s the ‘envelope’? Oh! Yes! Let me tell you. The C1(t) and C2(t) equations can be re-written as:

- C1 = (1/2)·e^(−(i/ħ)·E0·t)·[e^((i/ħ)·A·t) + e^(−(i/ħ)·A·t)]
- C2 = (1/2)·e^(−(i/ħ)·E0·t)·[e^((i/ħ)·A·t) − e^(−(i/ħ)·A·t)]
Now, remembering our rules for adding and subtracting complex conjugates (e^(iθ) + e^(−iθ) = 2cosθ and e^(iθ) − e^(−iθ) = 2i·sinθ), we can re-write this as:

- C1 = e^(−(i/ħ)·E0·t)·cos(A·t/ħ)
- C2 = i·e^(−(i/ħ)·E0·t)·sin(A·t/ħ)
So there we are! We’ve got wavefunctions whose temporal variation is basically defined by E0 but, on top of that, we have an envelope here: the cos(A·t/ħ) and sin(A·t/ħ) factor respectively. So their magnitude is no longer time-independent: both the phase as well as the amplitude now vary with time. The associated probabilities are the ones we plotted, and you can verify them with the quick numerical check after the list below:
- |C1(t)|² = cos²[(A/ħ)·t], and
- |C2(t)|² = sin²[(A/ħ)·t].
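Here is that check, a minimal sketch in natural units (ħ = 1) with arbitrary values for E0 and A:

```python
import numpy as np

hbar = 1.0                    # natural units (an illustrative choice)
E0, A = 10.0, 0.5             # mean energy and splitting, arbitrary values in the same units

t = np.linspace(0, 20, 2000)
C1 = 0.5 * np.exp(-1j * (E0 - A) * t / hbar) + 0.5 * np.exp(-1j * (E0 + A) * t / hbar)
C2 = 0.5 * np.exp(-1j * (E0 - A) * t / hbar) - 0.5 * np.exp(-1j * (E0 + A) * t / hbar)

P1, P2 = np.abs(C1)**2, np.abs(C2)**2
print(np.allclose(P1, np.cos(A * t / hbar)**2))   # True
print(np.allclose(P2, np.sin(A * t / hbar)**2))   # True
print(np.allclose(P1 + P2, 1.0))                  # True: the probabilities add up to one at all times
```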
So, to summarize it all once more, allowing the nitrogen atom to push its way through the three hydrogens, so as to flip to the other side, thereby overcoming the energy barrier, is equivalent to associating two energy levels to the ammonia molecule as a whole, thereby introducing some uncertainty, or indefiniteness, as to its energy, and that, in turn, gives us the amplitudes and probabilities that we’ve just calculated. [And you may want to note here that the probabilities ‘sloshing back and forth’, or ‘dumping into each other’ (as Feynman puts it) is the result of the varying magnitudes of our amplitudes, so that’s the ‘envelope’ effect. It’s only because the magnitudes vary in time that their absolute square, i.e. the associated probability, varies too.]
So… Well… That’s it. I think this and all of the previous posts served as a nice introduction to quantum physics. More in particular, I hope this post made you appreciate that the mathematical framework is not as horrendous as it often seems to be.
When thinking about it, it’s actually all quite straightforward, and it surely respects Occam’s principle of parsimony in philosophical and scientific thought, also known as Occam’s Razor: “When trying to explain something, it is vain to do with more what can be done with less.” So the math we need is the math we need, really: nothing more, nothing less. As I’ve said a couple of times already, Occam would have loved the math behind QM: the physics calls for the math, and the math becomes the physics.
That’s what makes it beautiful. š
Post scriptum:
One might think that the addition of a term in the argument in itself would lead to a beat note and, hence, a varying probability but, no! We may look at e^(−(i/ħ)·(E0 + A)·t) as a product of two amplitudes:
e^(−(i/ħ)·(E0 + A)·t) = e^(−(i/ħ)·E0·t)·e^(−(i/ħ)·A·t)
But, when writing this all out, one just gets a cos(α·t + β·t) − i·sin(α·t + β·t), whose absolute square |cos(α·t + β·t) − i·sin(α·t + β·t)|² = 1. However, writing e^(−(i/ħ)·(E0 + A)·t) as a product of two amplitudes in itself is interesting. We multiply amplitudes when an event consists of two sub-events. For example, the amplitude for some particle to go from s to x via some point a is written as:
⟨ x | s ⟩ via a = ⟨ x | a ⟩⟨ a | s ⟩
Having said that, the graph of the product is uninteresting: the real and imaginary part of the wavefunction are a simple sine and cosine function, and their absolute square is constant, as shown below.
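A minimal numerical check of that factorization, with arbitrary numbers in natural units:

```python
import numpy as np

E0, A, hbar = 10.0, 0.5, 1.0            # arbitrary values, natural units
t = np.linspace(0, 20, 1000)

single = np.exp(-1j * (E0 + A) * t / hbar)
product = np.exp(-1j * E0 * t / hbar) * np.exp(-1j * A * t / hbar)

print(np.allclose(single, product))        # True: the product is the same amplitude
print(np.allclose(np.abs(single)**2, 1))   # True: and its absolute square stays constant
```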
Adding two waves with very different frequencies (A is a fraction of E0) gives a much more interesting pattern, like the one below, which shows an e^(−iαt) + e^(−iβt) = cos(αt) − i·sin(αt) + cos(βt) − i·sin(βt) = cos(αt) + cos(βt) − i·[sin(αt) + sin(βt)] pattern for α = 1 and β = 0.1.
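If you want to reproduce such a pattern yourself, here is a small sketch for α = 1 and β = 0.1 (the same values as above):

```python
import numpy as np

alpha, beta = 1.0, 0.1                    # the two (angular) frequencies used above
t = np.linspace(0, 100, 5000)

psi = np.exp(-1j * alpha * t) + np.exp(-1j * beta * t)

re = np.cos(alpha * t) + np.cos(beta * t)          # real part
im = -(np.sin(alpha * t) + np.sin(beta * t))       # imaginary part
print(np.allclose(psi.real, re), np.allclose(psi.imag, im))   # True True

# Unlike the single exponential, the absolute square now does vary in time:
print(np.allclose(np.abs(psi)**2, 2 + 2 * np.cos((alpha - beta) * t)))   # True
```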

That doesn’t look like a beat note, does it? The graphs below, which use 0.5 and 0.01 for β respectively, are not typical beat notes either.
Ā 

We get our typical ‘beat note’ only when we’re looking at a wave traveling in space, so then we involve the space variable x again, and the relations that come with it, i.e. a phase velocity vp = ω/k = (E/ħ)/(p/ħ) = E/p = c²/v (read: all component waves travel at the same speed), and a group velocity vg = dω/dk = v (read: the composite wave or wavetrain travels at the classical speed of our particle, so it travels with the particle, so to speak). That’s what I’ve shown numerous times already, but I’ll insert one more animation here, just to make sure you see what we’re talking about. [Credit for the animation goes to another site, one on acoustics, actually!]
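You can check those two velocity formulas numerically for a relativistic particle. The sketch below uses an electron moving at half the speed of light, which is just an illustrative choice:

```python
import numpy as np

c = 299792458.0               # speed of light, m/s
hbar = 1.054571817e-34        # reduced Planck constant, J·s
m = 9.109e-31                 # electron rest mass, kg (illustrative particle)
v = 0.5 * c                   # some classical velocity

gamma = 1 / np.sqrt(1 - (v / c)**2)
E = gamma * m * c**2          # total relativistic energy
p = gamma * m * v             # relativistic momentum

v_phase = (E / hbar) / (p / hbar)       # ω/k = E/p
print(np.isclose(v_phase, c**2 / v))    # True: the phase velocity is c²/v (superluminal)

# Group velocity dω/dk, with ω(p) = sqrt((p·c)² + (m·c²)²)/ħ and k = p/ħ
def omega(p_):
    return np.sqrt((p_ * c)**2 + (m * c**2)**2) / hbar

dp = 1e-5 * p                           # small momentum step for a numerical derivative
v_group = (omega(p + dp) - omega(p - dp)) / (2 * dp / hbar)
print(np.isclose(v_group, v))           # True: the wavetrain travels with the particle
```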

So what’s left? Nothing much. The only thing you may want to do is to continue thinking about that wavefunction. It’s tempting to think it actually is the particle, somehow. But it isn’t. So what is it then? Well… Nobody knows, really, but I like to think it does travel with the particle. So it’s like a fundamental property of the particle. We need it every time we try to measure something: its position, its momentum, its spin (i.e. angular momentum) or, in the example of our ammonia molecule, its orientation in space. So the funny thing is that, in quantum mechanics,
- We can measure probabilities only, so there’s always some randomness. That’s how Nature works: we don’t really know what’s happening. We don’t know the internal wheels and gears, so to speak, or the ‘hidden variables’, as one interpretation of quantum mechanics would say. In fact, the most commonly accepted interpretation of quantum mechanics says there are no ‘hidden variables’.
- But then, as Polonius famously put it, there is a method in this madness, and the pioneers (I mean Werner Heisenberg, Louis de Broglie, Niels Bohr, Paul Dirac, etcetera) discovered it. All probabilities can be found by taking the square of the absolute value of a complex-valued wavefunction (often denoted by Ψ), whose argument, or phase (θ), is given by the de Broglie relations ω = E/ħ and k = p/ħ:
θ = (ω·t − k∙x) = (E/ħ)·t − (p/ħ)·x
That should be obvious by now, as I’ve written dozens of posts on this. 🙂 I still have trouble interpreting this, however, and I am not ashamed, because the Great Ones I just mentioned had trouble with that too. But let’s try to go as far as we can by making a few remarks:
- Adding two terms in math implies the two terms should have the same dimension: we can only add apples to apples, and oranges to oranges. We shouldn’t mix them. Now, the (E/ħ)·t and (p/ħ)·x terms are actually dimensionless: they are pure numbers. So that’s even better. Just check it: energy is expressed in newton·meter (force over distance, remember?) or electronvolts (1 eV = 1.6×10⁻¹⁹ J = 1.6×10⁻¹⁹ N·m); Planck’s constant, as the quantum of action, is expressed in J·s or eV·s; and the unit of (linear) momentum is 1 N·s = 1 kg·m/s. E/ħ gives a number expressed per second, and p/ħ a number expressed per meter. Therefore, multiplying them by t and x respectively gives us a dimensionless number indeed.
- It’s also an invariant number, which means we’ll always get the same value for it. As mentioned above, that’s because the four-vector product pμxμ = E·t − p∙x is invariant: it doesn’t change when analyzing a phenomenon in one reference frame (e.g. our inertial reference frame) or another (i.e. in a moving frame).
- Now, Planck’s quantum of action h or ħ (they only differ in the angle unit they refer to: h is the action per cycle, while ħ is the action per radian) is the quantum of energy really. Indeed, if “energy is the currency of the Universe”, and it’s real and/or virtual photons who are exchanging it, then it’s good to know the currency unit is h, i.e. the energy that’s associated with one cycle of a photon.
- It’s not only time and space that are related, as evidenced by the fact that t − x itself is an invariant four-vector: E and p are related too, of course! They are related through the classical velocity of the particle that we’re looking at: E/p = c²/v and, therefore, we can write: E·β = p·c, with β = v/c, i.e. the relative velocity of our particle, as measured as a ratio of the speed of light. Now, I should add that the t − x four-vector is invariant only if we measure time and space in equivalent units. Otherwise, we have to write c·t − x. If we do that, so our unit of distance becomes c meter, rather than one meter, or our unit of time becomes the time that is needed for light to travel one meter, then c = 1, and the E·β = p·c relation becomes E·β = p, which we also write as β = p/E: the ratio of the energy and the momentum of our particle is its (relative) velocity. A quick numerical check of the last two points follows below.
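Here is that check, again using an electron at some arbitrary velocity as the illustrative particle:

```python
import numpy as np

c = 299792458.0               # m/s
hbar = 1.054571817e-34        # J·s
m = 9.109e-31                 # electron rest mass, kg (illustrative)
v = 0.6 * c                   # some classical velocity
gamma = 1 / np.sqrt(1 - (v / c)**2)

E = gamma * m * c**2          # total energy, J
p = gamma * m * v             # momentum, kg·m/s = N·s

# (E/ħ)·t is 'per second times seconds', (p/ħ)·x is 'per meter times meters': pure numbers
t, x = 1e-18, 1e-10           # some time (s) and distance (m), arbitrary
print((E / hbar) * t, (p / hbar) * x)

# And E and p are indeed related through the relative velocity: β = v/c = p·c/E
print(np.isclose(p * c / E, v / c))   # True
```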
Combining all of the above, we may want to assume that we are measuring energy and momentum in terms of the Planck constant, i.e. the ‘natural’ unit for both. In addition, we may also want to assume that we’re measuring time and distance in equivalent units. Then the equation for the phase of our wavefunctions reduces to:
θ = (ω·t − k∙x) = E·t − p·x
Now, θ is the argument of a wavefunction, and we can always re-scale such argument by multiplying or dividing it by some constant. It’s just like writing the argument of a wavefunction as v·t − x or (v·t − x)/v = t − x/v with v the velocity of the waveform that we happen to be looking at. [In case you have trouble following this argument, please check the post I did for my kids on waves and wavefunctions.] Now, the energy conservation principle tells us the energy of a free particle won’t change. [Just to remind you, a ‘free particle’ means it is present in a ‘field-free’ space, so our particle is in a region of uniform potential.] You see what I am going to do now: we can, in this case, treat E as a constant, and divide E·t − p·x by E, so we get a re-scaled phase for our wavefunction, which I’ll write as:
φ = (E·t − p·x)/E = t − (p/E)·x = t − β·x
Now that’s the argument of a wavefunction with the argument expressed in distance units. Alternatively, we could also look at p as some constant, as there is no variation in potential energy that will cause a change in momentum, i.e. in kinetic energy. We’d then divide by p and we’d get (E·t − p·x)/p = (E/p)·t − x = t/β − x, which amounts to the same, as we can always re-scale by multiplying it with β, which would then yield the same t − β·x argument.
The point is, if we measure energy and momentum in terms of the Planck unit (I mean: in terms of the Planck constant, i.e. the quantum of energy), and if we measure time and distance in ‘natural’ units too, i.e. we take the speed of light to be unity, then our Platonic wavefunction becomes as simple as:
Φ(φ) = a·e^(−iφ) = a·e^(−i(t − β·x))
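In code, that simplified wavefunction is a one-liner. The values below are arbitrary, just to show the shape of the thing:

```python
import numpy as np

beta = 0.5                     # relative velocity v/c (arbitrary)
a = 1.0                        # amplitude
x = 3.0                        # some fixed position, in natural units
t = np.linspace(0, 20, 500)    # time, in the same natural units

phi = t - beta * x             # the re-scaled phase φ = t − β·x
Phi = a * np.exp(-1j * phi)    # Φ(φ) = a·e^(−iφ)

print(np.allclose(np.abs(Phi)**2, a**2))   # True: still a constant probability (no uncertainty yet)
```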
This is a wonderful formula, but let me first answer your most likely question: why would we use a relative velocity? Well… Just think of it: when everything is said and done, the whole theory of relativity and, hence, the whole of physics, is based on one fundamental and experimentally verified fact: the speed of light is absolute. In whatever reference frame, we will always measure it as 299,792,458 m/s. That’s obvious, you’ll say, but it’s actually the weirdest thing ever if you start thinking about it, and it explains why those Lorentz transformations look so damn complicated. In any case, this fact legitimately establishes c as some kind of absolute measure against which all speeds can be measured. Therefore, it is only natural indeed to express a velocity as some number between 0 and 1. Now that amounts to expressing it as the β = v/c ratio.
Let’s now go back to that Φ(φ) = a·e^(−iφ) = a·e^(−i(t − β·x)) wavefunction. Its temporal frequency ω is equal to one, and its spatial frequency k is equal to β = v/c. It couldn’t be simpler but, of course, we’ve got this remarkably simple result because we re-scaled the argument of our wavefunction using the energy and momentum itself as the scale factor. So, yes, we can re-write the wavefunction of our particle in a particularly elegant and simple form using the only information that we have when looking at quantum-mechanical stuff: energy and momentum, because that’s what everything reduces to at that level.
Of course, the analysis above does not include uncertainty. Our information on the energy and the momentum of our particle will be incomplete: we’ll write E = E0 ± σE, and p = p0 ± σp. [I am a bit tired of using the Δ symbol, so I am using the σ symbol here, which denotes a standard deviation of some density function. It underlines the probabilistic, or statistical, nature of our approach.] But, including that, we’ve pretty much explained what quantum physics is about here.
You just need to get used to that complex exponential: e^(−iφ) = cos(−φ) + i·sin(−φ) = cos(φ) − i·sin(φ). Of course, it would have been nice if Nature would have given us a simple sine or cosine function. [Remember the sine and cosine function are actually the same, except for a phase difference of 90 degrees: sin(φ) = cos(π/2 − φ) = cos(φ − π/2). So we can always go from one to the other by shifting the origin of our axis.] But… Well… As we’ve shown so many times already, a real-valued wavefunction doesn’t explain the interference we observe, be it interference of electrons or whatever other particles or, for that matter, the interference of electromagnetic waves itself, which, as you know, we also need to look at as a stream of photons, i.e. light quanta, rather than as some kind of infinitely flexible aether that’s undulating, like water or air.
So… Well… Just accept that e^(−iφ) is a very simple periodic function, consisting of two sine waves rather than just one, as illustrated below.
Ā 
And then you need to think of stuff like this (the animation is taken from Wikipedia), but then with a projection of the sine of those phasors too. It’s all great fun, so I’ll let you play with it now. 🙂
