Some thoughts on the nature of reality

Another comment on an article on my other blog inspired me to structure some thoughts that are spread over various blog posts. What follows below is probably the first draft of an article or a paper I plan to write. Or, who knows, I might re-write my two introductory books on quantum physics and publish a new edition soon. 🙂

Physical dimensions and Uncertainty

The physical dimension of the quantum of action (h or ℏ = h/2π) is force (expressed in newton) times distance (expressed in meter) times time (expressed in seconds): N·m·s. Now, you may think this N·m·s dimension is kinda hard to imagine. We can imagine its individual components, right? Force, distance and time. We know what they are. But the product of all three? What is it, really?

It shouldn’t be all that hard to imagine what it might be, right? The N·m·s unit is also the unit in which angular momentum is expressed – and you can sort of imagine what that is, right? Think of a spinning top, or a gyroscope. We may also think of the following:

  1. [h] = N·m·s = (N·m)·s = [E]·[t]
  2. [h] = N·m·s = (N·s)·m = [p]·[x]

Hence, the physical dimension of action is that of energy (E) multiplied by time (t) or, alternatively, that of momentum (p) times distance (x). To be precise, the second dimensional equation should be written as [h] = [p]·[x], because both the momentum and the distance traveled will be associated with some direction. It’s a moot point for the discussion at the moment, though. Let’s think about the first equation first: [h] = [E]·[t]. What does it mean?

Energy… Hmm… In real life, we are usually not interested in the energy of a system as such, but in the energy it can deliver, or absorb, per second. This is referred to as the power of a system, and it’s expressed in J/s, or watt. Power is also defined as the (time) rate at which work is done. Hmm… But here we’re multiplying energy and time. So what’s that? After Hiroshima and Nagasaki, we can sort of imagine the energy of an atomic bomb. We can also sort of imagine the power that’s being released by the Sun in light and other forms of radiation, which is about 385×10²⁴ joule per second. But energy times time? What’s that?

I am not sure. If we think of the Sun as a huge reservoir of energy, then the physical dimension of action is just like having that reservoir of energy guaranteed for some time, regardless of how fast or how slow we use it. So, in short, it’s just like the Sun – or the Earth, or the Moon, or whatever object – just being there, for some definite amount of time. So, yes: some definite amount of mass or energy (E) for some definite amount of time (t).

Let’s bring the mass-energy equivalence formula in here: E = m·c². Hence, the physical dimension of action can also be written as [h] = [E]·[t] = [m·c²]·[t] = (kg·m²/s²)·s = kg·m²/s. What does that say? Not all that much – for the time being, at least. We can get this [h] = kg·m²/s through some other substitution as well. A force of one newton will give a mass of 1 kg an acceleration of 1 m/s per second. Therefore, 1 N = 1 kg·m/s² and, hence, the physical dimension of h, or the unit of angular momentum, may also be written as 1 N·m·s = 1 (kg·m/s²)·m·s = 1 kg·m²/s, i.e. the product of mass, velocity and distance.
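Since we will be juggling these units a few more times, a quick way to check such identities is to track the exponents of (kg, m, s). A minimal bookkeeping sketch:

```python
# Dimensional bookkeeping for the identities above: represent a physical
# dimension as exponents of (kg, m, s) and multiply dimensions by adding
# their exponents.
def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

M, S = (0, 1, 0), (0, 0, 1)     # meter, second
newton = (1, 1, -2)             # kg·m/s²
joule = mul(newton, M)          # [E] = N·m
momentum = mul(newton, S)       # [p] = N·s

action = mul(joule, S)          # [h] = [E]·[t]
print(action)                   # (1, 2, -1), i.e. kg·m²/s
print(mul(momentum, M) == action)   # [p]·[x] gives the same dimension: True
```

Both routes, [E]·[t] and [p]·[x], land on the same kg·m²/s, as they should.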

Hmm… What can we do with that? Nothing much for the moment: our first reading of it is just that it reminds us of the definition of angular momentum – some mass with some velocity rotating around an axis. What about the distance? Oh… The distance here is just the distance from the axis, right? Right. But… Well… It’s like having some amount of linear momentum available over some distance – or in some space, right? That’s sufficiently significant as an interpretation for the moment, I’d think…

Fundamental units

This makes one think about what units would be fundamental – and what units we’d consider as being derived. Formally, the newton is a derived unit in the metric system, as opposed to the units of mass, length and time (kg, m, s). Nevertheless, I personally like to think of force as being fundamental: a force is what causes an object to deviate from its straight trajectory in spacetime. Hence, we may want to think of the quantum of action as representing three fundamental physical dimensions: (1) force, (2) time and (3) distance – or space. We may then look at energy and (linear) momentum as physical quantities combining (1) force and distance and (2) force and time respectively.

Let me write this out:

  1. Force times length (think of a force that is acting on some object over some distance) is energy: 1 joule (J) = 1 newton·meter (N·m). Hence, we may think of the concept of energy as a projection of action in space only: we make abstraction of time. The physical dimension of the quantum of action should then be written as [h] = [E]·[t]. [Note the square brackets tell us we are looking at a dimensional equation only, so [t] is just the physical dimension of the time variable. It’s a bit confusing because I also use square brackets as parentheses.]
  2. Conversely, the magnitude of linear momentum (p = m·v) is expressed in newton·seconds: 1 kg·m/s = 1 (kg·m/s²)·s = 1 N·s. Hence, we may think of (linear) momentum as a projection of action in time only: we make abstraction of its spatial dimension. Think of a force that is acting on some object during some time. The physical dimension of the quantum of action should then be written as [h] = [p]·[x].

Of course, a force that is acting on some object during some time will usually also act on the same object over some distance but… Well… Just try, for once, to make abstraction of one of the two dimensions here: time or distance.

It is a difficult thing to do because, when everything is said and done, we don’t live in space or in time alone, but in spacetime and, hence, such abstractions are not easy. [Of course, now you’ll say that it’s easy to think of something that moves in time only: an object that is standing still does just that – but then we know movement is relative, so there is no such thing as an object that is standing still in space in an absolute sense: hence, objects never stand still in spacetime.] In any case, we should try such abstractions, if only because the principle of least action is so essential and deep in physics:

  1. In classical physics, the path of some object in a force field will minimize the total action (which is usually written as S) along that path.
  2. In quantum mechanics, the same action integral will give us various values S – each corresponding to a particular path – and each path (and, therefore, each value of S, really) will be associated with a probability amplitude that will be proportional to some constant times e^(i·θ) = e^(i·S/ℏ). Because ℏ is so tiny, even a small change in S will give a completely different phase angle θ. Therefore, most amplitudes will cancel each other out as we take the sum of the amplitudes over all possible paths: only the paths that nearly give the same phase matter. In practice, these are the paths that are associated with a variation in S of an order of magnitude that is equal to ℏ.
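The cancellation argument in point 2 can be illustrated numerically. This is a toy sketch, not Feynman’s actual integral: we label the “paths” by a single number x, invent an action S(x) = x² (in units where ℏ = 1) that is stationary at x = 0, and sum the phases e^(i·S):

```python
# Toy illustration of phase cancellation in a sum over paths (units where
# ħ = 1). Each "path" is labelled by a number x, with a made-up action
# S(x) = x², stationary at x = 0. Phases for paths whose action differs
# by much more than ħ cancel out.
import cmath

dx = 0.001
xs = [i * dx for i in range(-20000, 20001)]            # x in [-20, 20]

full = sum(cmath.exp(1j * x * x) for x in xs) * dx     # sum over all paths
# If all phases were aligned, this sum would be 40; instead it is ~1.77:
print(abs(full))

# Paths whose action stays within a few ħ of the stationary value
# already account for almost the whole sum:
near = sum(cmath.exp(1j * x * x) for x in xs if x * x < 4.0) * dx
print(abs(full - near))
```

Almost everything cancels, and what survives comes from the region where S varies by an amount of order ℏ – which is the point of the paragraph above.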

The two points above summarize, in essence, Feynman’s path integral formulation of quantum mechanics. We may, therefore, think of the quantum of action expressing itself (1) in time only, (2) in space only, or – much more likely – (3) expressing itself in both dimensions at the same time. Hence, if the quantum of action gives us the order of magnitude of the uncertainty – think of writing something like S ± h – we may re-write our dimensional [h] = [E]·[t] and [h] = [p]·[x] equations as the uncertainty equations:

  • ΔE·Δt = h
  • Δp·Δx = h

You should note here that it is best to think of the uncertainty relations as a pair of equations, if only because you should also think of the concepts of energy and momentum as representing different aspects of the same reality, as evidenced by the (relativistic) energy-momentum relation (E² = p²c² + m₀²c⁴). Also, as illustrated below, the actual path – or, to be more precise, what we might associate with the concept of the actual path – is likely to be some mix of Δx and Δt. If Δt is very small, then Δx will be very large. In order to move over such a distance, our particle will require a larger energy, so ΔE will be large. Likewise, if Δt is very large, then Δx will be very small and, therefore, ΔE will be very small. You can also reason in terms of Δx, and talk about momentum rather than energy. You will arrive at the same conclusions: the ΔE·Δt = h and Δp·Δx = h relations represent two aspects of the same reality – or, at the very least, of what we might think of as reality.


Also think of the following: if ΔE·Δt = h and Δp·Δx = h, then ΔE·Δt = Δp·Δx and, therefore, ΔE/Δp must be equal to Δx/Δt. Hence, the ratio of the uncertainty about x (the distance) and the uncertainty about t (the time) equals the ratio of the uncertainty about E (the energy) and the uncertainty about p (the momentum).
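A trivial numerical check of that ratio argument, with made-up values for Δt and Δx (the specific numbers mean nothing):

```python
# If ΔE·Δt = Δp·Δx = h, then ΔE/Δp = Δx/Δt. Quick check with arbitrary,
# hypothetical uncertainties:
h = 6.62607004e-34
dt, dx = 1.0e-15, 3.0e-7     # made-up Δt (s) and Δx (m)
dE, dp = h / dt, h / dx      # ΔE and Δp implied by the two relations

print(dE / dp)               # equals Δx/Δt = 3×10⁸
print(dx / dt)
```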

Of course, you will note that the actual uncertainty relations have a factor 1/2 in them. This may be explained by thinking of both negative as well as positive variations in space and in time.

We will obviously want to do some more thinking about those physical dimensions. The idea of a force implies the idea of some object – of some mass on which the force is acting. Hence, let’s think about the concept of mass now. But… Well… Mass and energy are supposed to be equivalent, right? So let’s look at the concept of energy too.

Action, energy and mass

What is energy, really? In real life, we are usually not interested in the energy of a system as such, but in the energy it can deliver, or absorb, per second. This is referred to as the power of a system, and it’s expressed in J/s. However, in physics, we always talk energy – not power – so… Well… What is the energy of a system?

According to de Broglie and Einstein – and so many other eminent physicists, of course – we should not only think of the kinetic energy of its parts, but also of their potential energy, and their rest energy, and – for an atomic system – we may add some internal energy, which may be binding energy, or excitation energy (think of a hydrogen atom in an excited state, for example). A lot of stuff. 🙂 But, obviously, Einstein’s mass-energy equivalence formula comes to mind here, and summarizes it all:

E = m·c²

The m in this formula refers to mass – not to meter, obviously. Stupid remark, of course… But… Well… What is energy, really? What is mass, really? What’s that equivalence between mass and energy, really?

I don’t have the definite answer to that question (otherwise I’d be famous), but… Well… I do think physicists and mathematicians should invest more in exploring some basic intuitions here. As I explained in several posts, it is very tempting to think of energy as some kind of two-dimensional oscillation of mass. A force over some distance will cause a mass to accelerate. This is reflected in the dimensional analysis:

[E] = [m]·[c²] = 1 kg·m²/s² = 1 (kg·m/s²)·m = 1 N·m

The kg and m/s² factors make this abundantly clear: m/s² is the physical dimension of acceleration: (the change in) velocity per time unit.

Other formulas now come to mind, such as the Planck-Einstein relation: E = h·f = ω·ℏ. We could also write: E = h/T. Needless to say, T = 1/f is the period of the oscillation. So we could say, for example, that the energy of some particle times the period of the oscillation gives us Planck’s constant again. What does that mean? Perhaps it’s easier to think of it the other way around: E/f = h = 6.626070040(81)×10⁻³⁴ J·s. Now, f is the number of oscillations per second. Let’s write it as f = n/s, so we get:

E/f = E/(n/s) = E·s/n = 6.626070040(81)×10⁻³⁴ J·s ⇔ E/n = 6.626070040(81)×10⁻³⁴ J

What an amazing result! Our wavicle – be it a photon or a matter-particle – will always pack 6.626070040(81)×10⁻³⁴ joule in one oscillation, so that’s the numerical value of Planck’s constant which, of course, depends on our fundamental units (i.e. kg, meter, second, etcetera in the SI system).
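To make that concrete, here is the arithmetic for a 500 THz photon (the conversion factor to eV is the usual 1.602×10⁻¹⁹ J/eV):

```python
# Energy per oscillation: for any frequency f, E = h·f, so E/f is always h.
h = 6.626070040e-34           # Planck's constant, J·s

f_sodium = 500e12             # 500 THz, roughly sodium light
E = h * f_sodium              # photon energy, ≈ 3.3×10⁻¹⁹ J
print(E / f_sodium)           # = h: 6.626...×10⁻³⁴ J per oscillation
print(E / 1.602e-19)          # same energy in eV: ≈ 2.07 eV
```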

Of course, the obvious question is: what’s one oscillation? If it’s a wave packet, the oscillations may not have the same amplitude, and we may also not be able to define an exact period. In fact, we should expect the amplitude and duration of each oscillation to be slightly different, shouldn’t we? And then…

Well… What’s an oscillation? We’re used to counting them: n oscillations per second, so that’s per time unit. How many do we have in total? We wrote about that in our posts on the shape and size of a photon. We know photons are emitted by atomic oscillators – or, to put it simply, just atoms going from one energy level to another. Feynman calculated the Q of these atomic oscillators: it’s of the order of 10⁸ (see his Lectures, I-33-3: it’s a wonderfully simple exercise, and one that really shows his greatness as a physics teacher), so this wave train will last about 10⁻⁸ seconds (that’s the time it takes for the radiation to die out by a factor 1/e). To give a somewhat more precise example: for sodium light, which has a frequency of 500 THz (500×10¹² oscillations per second) and a wavelength of 600 nm (600×10⁻⁹ meter), the radiation will last about 3.2×10⁻⁸ seconds. [To be precise, that’s the time it takes for the radiation’s energy to die out by a factor 1/e (i.e. the so-called decay time τ), so the wave train will actually last longer, but its amplitude becomes quite small after that time.] So… Well… That’s a very short time but, still, taking into account the rather spectacular frequency (500 THz) of sodium light, it makes for some 16 million oscillations and, taking into account the rather spectacular speed of light (3×10⁸ m/s), it makes for a wave train with a length of, roughly, 9.6 meter. Huh? 9.6 meter!? But a photon is supposed to be pointlike, isn’t it? It has no length, does it?
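Before getting to that paradox, a quick check of the numbers above:

```python
# Reproducing the sodium wave-train numbers quoted above.
f = 500e12           # frequency, Hz
t_decay = 3.2e-8     # 1/e decay time, s
c = 3.0e8            # speed of light, m/s

n = f * t_decay      # number of oscillations in the wave train: 16 million
length = c * t_decay # spatial length of the wave train: 9.6 meter
print(n)
print(length)
print(c / f)         # wavelength: 600 nm, as quoted
```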

That’s where relativity helps us out: as I wrote in one of my posts, relativistic length contraction may explain the apparent paradox. Using the reference frame of the photon – so if we’d be traveling at speed c, “riding” with the photon, so to say, as it’s being emitted – we’d ‘see’ the electromagnetic transient as it’s being radiated into space.

However, while we can associate some mass with the energy of the photon, none of what I wrote above explains what the (rest) mass of a matter-particle could possibly be. There is no real answer to that, I guess. You’ll think of the Higgs field now but… Well… The Higgs field is a scalar field. Very simple: some number that’s associated with some position in spacetime. That doesn’t explain very much, does it? 🙁 When everything is said and done, the scientists who, in 2013, got the Nobel Prize for their theory of the Higgs mechanism simply tell us mass is some number. That’s something we knew already, right? 🙂

The reality of the wavefunction

The wavefunction is, obviously, a mathematical construct: a description of reality using a very specific language. What language? Mathematics, of course! Math may not be universal (aliens might not be able to decipher our mathematical models) but it’s pretty good as a global tool of communication, at least.

The real question is: is the description accurate? Does it match reality and, if it does, how good is the match? For example, the wavefunction for an electron in a hydrogen atom looks as follows:

ψ(r, t) = e^(i·(E/ℏ)·t)·f(r)

As I explained in previous posts (see, for example, my recent post on reality and perception), the f(r) function basically provides some envelope for the two-dimensional e^(i·θ) = e^(i·(E/ℏ)·t) = cos θ + i·sin θ oscillation, with r = (x, y, z), θ = (E/ℏ)·t = ω·t and ω = E/ℏ. So it presumes the duration of each oscillation is some constant. Why? Well… Look at the formula: this thing has a constant frequency in time. It’s only the amplitude that varies as a function of the r = (x, y, z) coordinates. 🙂 So… Well… If each oscillation is to always pack 6.626070040(81)×10⁻³⁴ joule, but the amplitude of the oscillation varies from point to point, then… Well… We’ve got a problem. The wavefunction above is likely to be an approximation of reality only. 🙂 The associated energy is the same, but… Well… Reality is probably not the nice geometrical shape we associate with those wavefunctions.

In addition, we should think of the Uncertainty Principle: there must be some uncertainty in the energy of the photons when our hydrogen atom makes a transition from one energy level to another. But then… Well… If our photon packs something like 16 million oscillations, and the order of magnitude of the uncertainty is only of the order of h (or ℏ = h/2π) which, as mentioned above, is the (average) energy of one oscillation only, then we don’t have much of a problem here, do we? 🙂
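We can put a rough number on that: compare an energy uncertainty of order ℏ/τ, with τ the decay time of the wave train, to the photon energy itself. This takes ℏ/τ as the relevant order of magnitude, which is an assumption, and re-uses the sodium numbers from above:

```python
# Energy uncertainty ~ ħ/τ versus the photon energy h·f, for sodium light.
import math

h = 6.626070040e-34          # J·s
hbar = h / (2 * math.pi)     # ħ, J·s
f = 500e12                   # Hz
tau = 3.2e-8                 # decay time of the wave train, s

E_photon = h * f             # ≈ 3.3×10⁻¹⁹ J
dE = hbar / tau              # ≈ 3.3×10⁻²⁷ J
print(dE / E_photon)         # ≈ 10⁻⁸: a truly tiny relative uncertainty
```

So the uncertainty is about one part in a hundred million of the photon energy – consistent with the “not much of a problem” conclusion above.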

Post scriptum: In previous posts, we offered some analogies – or metaphors – to a two-dimensional oscillation (remember the V-2 engine?). Perhaps it’s all relatively simple. If we have some tiny little ball of mass – and its center of mass has to stay where it is – then any rotation – around any axis – will be some combination of a rotation around our x- and z-axes, as shown below. Two axes only. So we may want to think of a two-dimensional oscillation as an oscillation of the polar and azimuthal angle. 🙂

[Animation: oscillation of a ball]

Thinking again…

One of the comments on my other blog made me think I should, perhaps, write something on waves again. The animation below shows the elementary wavefunction ψ = a·e^(i·θ) = a·e^(i·(ω·t−k·x)) = a·e^((i/ℏ)·(E·t−p·x)).

[Animation]

We know this elementary wavefunction cannot represent a real-life particle. Indeed, the a·e^(i·θ) function implies the probability of finding the particle – an electron, a photon, or whatever – would be equal to P(x, t) = |ψ(x, t)|² = |a·e^((i/ℏ)·(E·t−p·x))|² = |a|²·|e^((i/ℏ)·(E·t−p·x))|² = |a|²·1² = a² everywhere. Hence, the particle would be everywhere – and, therefore, nowhere really. We need to localize the wave – or build a wave packet. We can do so by introducing uncertainty: we then add a potentially infinite number of these elementary wavefunctions with slightly different values for E and p, and various amplitudes a. Each of these amplitudes will then reflect the contribution to the composite wave, which – in three-dimensional space – we can write as:

ψ(r, t) = e^(i·(E/ℏ)·t)·f(r)

As I explained in previous posts (see, for example, my recent post on reality and perception), the f(r) function basically provides some envelope for the two-dimensional e^(i·θ) = e^(i·(E/ℏ)·t) = cos θ + i·sin θ oscillation, with r = (x, y, z), θ = (E/ℏ)·t = ω·t and ω = E/ℏ.
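As an aside, the earlier claim that P(x, t) = |ψ|² = a² everywhere for the elementary wavefunction is easy to verify numerically (the values of a, ω and k below are arbitrary):

```python
# Verify that |a·e^(i·(ω·t − k·x))|² = a² for any x and t.
import cmath

a, omega, k = 0.5, 2.0, 3.0           # arbitrary amplitude and wave numbers
for t in (0.0, 0.7, 2.5):
    for x in (-1.0, 0.0, 4.2):
        psi = a * cmath.exp(1j * (omega * t - k * x))
        assert abs(abs(psi) ** 2 - a * a) < 1e-12

print("|psi|^2 = a^2 everywhere")
```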

Note that it looks like the wave propagates from left to right – in the positive direction of an axis which we may refer to as the x-axis. Also note this perception results from the fact that, naturally, we’d associate time with the rotation of that arrow at the center – i.e. with the motion in the illustration – while the spatial dimensions are just what they are: linear spatial dimensions. [This point is, perhaps, somewhat less self-evident than you may think at first.]

Now, the axis which points upwards is usually referred to as the z-axis, and the third and final axis – which points towards us – would then be the y-axis, obviously. Unfortunately, this definition would violate the so-called right-hand rule for defining a proper reference frame: the figure below shows the two possibilities – a left-handed and a right-handed reference frame – and it’s the right-handed reference frame (i.e. the illustration on the right) which we have to use in order to correctly define all directions, including the direction of rotation of the argument of the wavefunction.

[Illustration: left- and right-handed Cartesian coordinate systems]

Hence, if we don’t change the direction of the y- and z-axes – so we keep defining the z-axis as the axis pointing upwards, and the y-axis as the axis pointing towards us – then the positive direction of the x-axis would actually be the direction from right to left, and we should say that the elementary wavefunction in the animation above seems to propagate in the negative x-direction. [Note that this left- or right-hand rule is quite astonishing: simply swapping the direction of one axis of a left-handed frame makes it right-handed, and vice versa.]

Note my language when I talk about the direction of propagation of our wave. I wrote: it looks like, or it seems to go in this or that direction. And I mean that: there is no real traveling here. At this point, you may want to review a post I wrote for my son, which explains the basic math behind waves, and in which I also explained the animation below.


Note how the peaks and troughs of this pulse seem to move leftwards, but the wave packet (or the group or the envelope of the wave – whatever you want to call it) moves to the right. The point is: the pulse itself doesn’t travel left or right. Think of the horizontal axis in the illustration above as an oscillating guitar string: each point on the string just moves up and down. Likewise, if our repeated pulse would represent a physical wave in water, for example, then the water just stays where it is: it just moves up and down. Likewise, if we shake up some rope, the rope is not going anywhere: we just started some motion that is traveling down the rope. In other words, the phase velocity is just a mathematical concept. The peaks and troughs that seem to be traveling are just mathematical points that are ‘traveling’ left or right. That’s why there’s no limit on the phase velocity: it can – and, according to quantum mechanics, actually will – exceed the speed of light. In contrast, the group velocity – which is the actual speed of the particle that is being represented by the wavefunction – may approach – or, in the case of a massless photon, will actually equal – the speed of light, but will never exceed it, and its direction will, obviously, have a physical significance as it is, effectively, the direction of travel of our particle – be it an electron, a photon (electromagnetic radiation), or whatever.

Hence, you should not think the spin of a particle – integer or half-integer – is somehow related to the direction of rotation of the argument of the elementary wavefunction. It isn’t: Nature doesn’t give a damn about our mathematical conventions, and that’s what the direction of rotation of the argument of that wavefunction is: just some mathematical convention. That’s why we write a·e^(i·(ω·t−k·x)) rather than a·e^(i·(ω·t+k·x)) or a·e^(−i·(ω·t−k·x)): it’s just because of the right-hand rule for coordinate frames, and also because Euler defined the counter-clockwise direction as the positive direction of an angle. There’s nothing more to it.

OK. That’s obvious. Let me now return to my interpretation of Einstein’s E = m·c² formula (see my previous posts on this). I noted that, in the reference frame of the particle itself (see my basics page), the elementary wavefunction a·e^((i/ℏ)·(E·t−p·x)) reduces to a·e^((i/ℏ)·(E’·t’)): the origin of the reference frame then coincides with (the center of) our particle itself, and the wavefunction only varies with the time in the inertial reference frame (i.e. the proper time t’), with the rest energy of the object (E’) as the time scale factor. How should we interpret this?

Well… Energy is force times distance, and force is defined as that which causes some mass to accelerate. To be precise, the newton – as the unit of force – is defined as the magnitude of a force which would cause a mass of one kg to accelerate with one meter per second per second. Per second per second. This is not a typo: 1 N corresponds to 1 kg times 1 m/s per second, i.e. 1 kg·m/s². So… Because energy is force times distance, the unit of energy may be expressed in units of kg·m/s²·m, or kg·m²/s², i.e. the unit of mass times the unit of velocity squared. To sum it all up:

1 J = 1 N·m = 1 kg·(m/s)²

This reflects the physical dimensions on both sides of the E = m·c² formula again but… Well… How should we interpret this? Look at the animation below once more, and imagine the green dot is some tiny mass moving around the origin, in an equally tiny circle. We’ve got two oscillations here: each packing half of the total energy of… Well… Whatever it is that our elementary wavefunction might represent in reality – which we don’t know, of course.


Now, the blue and the red dot – i.e. the horizontal and vertical projection of the green dot – accelerate up and down. If we look carefully, we see these dots accelerate towards the zero point and, once they’ve crossed it, they decelerate, so as to allow for a reversal of direction: the blue dot goes up, and then down. Likewise, the red dot does the same. The interplay between the two oscillations, because of the 90° phase difference, is interesting: if the blue dot is at maximum speed (near or at the origin), the red dot reverses speed (its speed is, therefore, (almost) nil), and vice versa. The metaphor of our frictionless V-2 engine, our perpetuum mobile, comes to mind once more.
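The claim that each oscillation packs half of the energy can be checked for uniform circular motion: with projections x = a·cos(ωt) and y = a·sin(ωt), the kinetic energies of the two projections sum to a constant at every instant, and each averages half of the total. A sketch with arbitrary values for the mass, radius and angular velocity:

```python
# Kinetic energy of the two projections of uniform circular motion:
# x = a·cos(ωt) (blue dot) and y = a·sin(ωt) (red dot).
import math

m, a, omega = 1.0, 1.0, 2.0          # arbitrary illustrative values
total = 0.5 * m * (a * omega) ** 2   # kinetic energy of the circular motion

N = 1000
ts = [i * (2 * math.pi / omega) / N for i in range(N)]   # one full period

for t in ts:
    ke_x = 0.5 * m * (a * omega * math.sin(omega * t)) ** 2
    ke_y = 0.5 * m * (a * omega * math.cos(omega * t)) ** 2
    assert abs(ke_x + ke_y - total) < 1e-12   # constant sum at every instant

avg_x = sum(0.5 * m * (a * omega * math.sin(omega * t)) ** 2 for t in ts) / N
print(avg_x / total)   # ≈ 0.5: each projection carries half the energy
```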

The question is: what’s going on, really?

My answer is: I don’t know. I do think that, somehow, energy should be thought of as some two-dimensional oscillation of something – something which we refer to as mass, but we didn’t define mass very clearly either. It also, somehow, combines linear and rotational motion. Each of the two dimensions packs half of the energy of the particle that is being represented by our wavefunction. It is, therefore, only logical that the physical unit of both is to be expressed as a force over some distance – which is, effectively, the physical dimension of energy – or the rotational equivalent of that: torque over some angle. Indeed, the analogy between linear and angular movement is obvious: the kinetic energy of a rotating object is equal to K.E. = (1/2)·I·ω². In this formula, I is the rotational inertia – i.e. the rotational equivalent of mass – and ω is the angular velocity – i.e. the rotational equivalent of linear velocity. Noting that the (average) kinetic energy in any system must be equal to the (average) potential energy in the system, we can add both, so we get a formula which is structurally similar to the E = m·c² formula. But is it the same? Is the effective mass of some object the sum of an almost infinite number of quanta that incorporate some kind of rotational motion? And – if we use the right units – is the angular velocity of these infinitesimally small rotations effectively equal to the speed of light?
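As for those last two questions: the arithmetic at least works out. If some quantum of mass m sits at radius r and rotates with tangential speed c (so ω = c/r), then ½·I·ω² = ½·(m·r²)·(c/r)² = ½·m·c², and two such oscillations pack m·c². This is purely a dimensional check, not a physical claim; the electron mass and the reduced Compton radius ℏ/(m·c) below are just illustrative choices:

```python
# Speculative check: a point mass m at radius r with tangential speed c
# (ω = c/r) has rotational kinetic energy ½·I·ω² = ½·m·c².
c = 2.998e8             # m/s
m = 9.109e-31           # electron mass, kg (illustrative)
r = 3.8616e-13          # reduced Compton radius ħ/(m·c), m (illustrative)

I = m * r ** 2          # rotational inertia of a point mass at radius r
omega = c / r           # angular velocity if the tangential speed is c
ke = 0.5 * I * omega ** 2

print(2 * ke / (m * c ** 2))   # ≈ 1.0: two such oscillations give m·c²
```

Note that r drops out of the algebra entirely, so the result holds for any assumed radius.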

I am not sure. Not at all, really. But, so far, I can’t think of any explanation of the wavefunction that would make more sense than this one. I just need to keep trying to find better ways to articulate or imagine what might be going on. 🙂 In this regard, I’d like to add a point – which may or may not be relevant. When I talked about that guitar string, or the water wave, and wrote that each point on the string – or each water drop – just moves up and down, we should think of the physicality of the situation: when the string oscillates, its length increases. So it’s only because our string is flexible that it can vibrate between the fixed points at its ends. For a rope that’s not flexible, the end points would need to move in and out with the oscillation. Look at the illustration below, for example: the two kids who are holding the rope must come closer to each other, so as to provide the necessary space inside of the oscillation for the other kid. 🙂

[Illustration: kid in a rope]

The next illustration – of how water waves actually propagate – is, perhaps, more relevant. Just think of a two-dimensional equivalent – and of the two oscillations as being transverse waves, as opposed to longitudinal waves. See how string theory starts making sense? 🙂

[Illustration: Rayleigh wave]

The most fundamental question remains the same: what is it, exactly, that is oscillating here? What is the field? It’s always some force on some charge – but what charge, exactly? Mass? What is it? Well… I don’t have the answer to that. It’s the same as asking: what is electric charge, really? So the question is: what’s the reality of mass, of electric charge, or whatever other charge that causes a force to act on it?

If you know, please let me know. 🙂

Post scriptum: The fact that we’re talking some two-dimensional oscillation here – think of a surface now – explains the probability formula: we need to square the absolute value of the amplitude to get it. And normalize, of course. Also note that, when normalizing, we’d expect to get some factor involving π somewhere, because we’re talking some circular surface – as opposed to a rectangular one. But I’ll let you figure that out. 🙂

An introduction to virtual particles (2)

When reading quantum mechanics, it often feels like the more you know, the less you understand. My reading of the Yukawa theory of force, as an exchange of virtual particles (see my previous post), must have left you with many questions. Questions I can’t answer because… Well… I feel as much of a fool as you do when thinking about it all. Yukawa first talks about some potential – which we usually think of as being some scalar function – and then suddenly this potential becomes a wavefunction. Does that make sense? And think of the mass of that ‘virtual’ particle: the rest mass of a neutral pion is about 135 MeV. That’s an awful lot – at the (sub-)atomic scale, that is: it’s equivalent to the rest mass of some 265 electrons!

But… Well… Think of it: the use of a static potential when solving Schrödinger’s equation for the electron orbitals around a hydrogen nucleus (a proton, basically) also raises lots of questions: if we think of our electron as a point-like particle being first here and then there, that’s not very consistent with a static (scalar) potential either!

One of the weirdest aspects of the Yukawa theory is that these emissions and absorptions of virtual particles violate the energy conservation principle. Look at the animation once again (below): it sort of assumes a rather heavy particle – consisting of a d- or u-quark and its antiparticle – is emitted – out of nothing, it seems – to then vanish as the antiparticle is destroyed when absorbed. What about the energy balance here: are we talking six quarks (the proton and the neutron), or six plus two?

[Animation: nuclear force as pion exchange between nucleons]

Now that we’re talking mass, note that a neutral pion (π⁰) may be either a uū or a dd̄ combination, and that the mass of a u-quark and a d-quark is only 2.4 and 4.8 MeV respectively – so the binding energy of the constituent parts of this π⁰ particle is enormous: it accounts for most of its mass.

The thing is… While we’ve presented the π⁰ particle as a virtual particle here, you should also note we find π⁰ particles in cosmic rays. Cosmic rays are particle rays, really: beams of highly energetic particles. Quite a bunch of them are just protons that are being ejected by our Sun. [The Sun also ejects electrons – as you might imagine – but let’s think about the protons here first.] When these protons hit an atom or a molecule in our atmosphere, they usually break up in various particles, including our π⁰ particle, as shown below.

[Illustration: atmospheric collision of a cosmic-ray proton]


So… Well… How can we relate these things? What is going on, really, inside of that nucleus?

Well… I am not sure. Aitchison and Hey do their utmost to try to explain the pion – as a virtual particle, that is – in terms of energy fluctuations that obey the Uncertainty Principle for energy and time: ΔE·Δt ≥ ℏ/2. Now, I find such explanations difficult to follow. Such explanations usually assume any measurement instrument – measuring energy, time, momentum or distance – measures those variables on some discrete scale, which implies some uncertainty indeed. But that uncertainty is more like an imprecision, in my view. Not something fundamental. Let me quote Aitchison and Hey:

“Suppose a device is set up capable of checking to see whether energy is, in fact, conserved while the pion crosses over. The crossing time Δt must be at least r/c, where r is the distance apart of the nucleons. Hence, the device must be capable of operating on a time scale smaller than Δt to be able to detect the pion, but it need not be very much less than this. Thus the energy uncertainty in the reading by the device will be of the order ΔE ∼ ℏ/Δt = ℏ·(c/r).”

As said, I find such explanations really difficult, although I can sort of sense some of the implicit assumptions. As I mentioned a couple of times already, the E = m·c² equation tells us energy is mass in motion, somehow: some weird two-dimensional oscillation in spacetime. So, yes, we can appreciate we need some time unit to count the oscillations – or, equally important, to measure their amplitude.

[…] But… Well… This falls short of a more fundamental explanation of what’s going on. I like to think of Uncertainty in terms of Planck’s constant itself: ℏ or h or – as you’ll usually see it – as half of that value: ℏ/2. [The Stern-Gerlach experiment implies it’s ℏ/2, rather than h/2 or ℏ or h itself.] The physical dimension of Planck’s constant is action: newton times distance times time. I also like to think action can express itself in two ways: as (1) some amount of energy (ΔE: some force over some distance) over some time (Δt) or, else, as (2) some momentum (Δp: some force during some time) over some distance (Δs). Now, if we equate ΔE with the energy of the pion (135 MeV), then we may calculate the order of magnitude of Δt from ΔE·Δt ≈ ℏ/2 as follows:

Δt = (ℏ/2)/(135 MeV) ≈ (3.291×10⁻¹⁶ eV·s)/(134.977×10⁶ eV) ≈ 0.02438×10⁻²² s

Now, that’s an unimaginably small time unit – but much, much larger than the Planck time (the Planck time unit is about 5.39×10⁻⁴⁴ s). The corresponding distance r is equal to r = Δt·c = (0.02438×10⁻²² s)·(2.998×10⁸ m/s) ≈ 0.0731×10⁻¹⁴ m = 0.731 fm. So… Well… Yes. We got the answer we wanted… So… Well… We should be happy about that but…
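Just to make sure I am not fooling myself with all the powers of ten, here’s a quick check of these numbers in Python (the constants are the usual rounded values):

```python
hbar_eVs = 6.582119e-16   # reduced Planck constant, in eV·s
c = 2.998e8               # speed of light, in m/s
E_pion = 134.977e6        # π0 rest energy, in eV

dt = (hbar_eVs / 2) / E_pion   # the Δt allowed by ΔE·Δt ≈ ℏ/2
r = dt * c                     # the corresponding range

print(dt)        # ≈ 2.438e-24 s, i.e. 0.02438e-22 s
print(r * 1e15)  # ≈ 0.731 femtometer
```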

Well… I am not. I don’t like this indeterminacy. This randomness in the approach. For starters, I am very puzzled by the fact that the lifetime of the actual π⁰ particle we see in the debris of proton collisions with other particles as cosmic rays enter the atmosphere is like 8.4×10⁻¹⁷ seconds, so that’s like 35 million times longer than the Δt = 0.02438×10⁻²² s we calculated above.

Something doesn’t feel right. I just can’t see the logic here. Sorry. I’ll be back. :-/

An introduction to virtual particles

We are going to venture beyond quantum mechanics as it is usually understood – covering electromagnetic interactions only. Indeed, all of my posts so far – a bit less than 200, I think 🙂 – were centered on electromagnetic interactions – with the model of the hydrogen atom as our most precious gem, so to speak.

In this post, we’ll be talking about the strong force – perhaps not for the first time, but surely for the first time at this level of detail. It’s an entirely different world – as I mentioned in one of my very first posts on this blog. Let me quote what I wrote there:

“The math describing the ‘reality’ of electrons and photons (i.e. quantum mechanics and quantum electrodynamics), as complicated as it is, becomes even more complicated – and, important to note, also much less accurate – when it is used to try to describe the behavior of quarks. Quantum chromodynamics (QCD) is a different world. […] Of course, that should not surprise us, because we’re talking very different orders of magnitude here: femtometers (10⁻¹⁵ m), in the case of electrons, as opposed to attometers (10⁻¹⁸ m) or even zeptometers (10⁻²¹ m) when we’re talking quarks.”

In fact, the femtometer scale is used to measure the radius of both protons and electrons and, hence, is much smaller than the atomic scale, which is measured in nanometer (1 nm = 10⁻⁹ m). The so-called Bohr radius, for example, which is a measure for the size of an atom, is measured in nanometer indeed, so that’s a scale that is a million times larger than the femtometer scale. This gap in scale effectively separates entirely different worlds. In fact, the gap is probably as large as the gap between our macroscopic world and the strange reality of quantum mechanics. What happens at the femtometer scale, really?

The honest answer is: we don’t know, but we do have models to describe what happens. Moreover, for want of better models, physicists sort of believe these models are credible. To be precise, we assume there’s a force down there which we refer to as the strong force. In addition, there’s also a weak force. Now, you probably know these forces are modeled as interactions involving an exchange of virtual particles. This may be related to what Aitchison and Hey refer to as the physicist’s “distaste for action-at-a-distance.” To put it simply: if one particle – through some force – influences some other particle, then something must be going on between the two of them.

Of course, now you’ll say that something is effectively going on: there’s the electromagnetic field, right? Yes. But what’s the field? You’ll say: waves. But then you know electromagnetic waves also have a particle aspect. So we’re stuck with this weird theoretical framework: the conceptual distinctions between particles and forces, or between particle and field, are not so clear. So that’s what the more advanced theories we’ll be looking at – like quantum field theory – try to bring together.

Note that we’ve been using a lot of confusing and/or ambiguous terms here: according to at least one leading physicist, for example, virtual particles should not be thought of as particles! But we’re putting the cart before the horse here. Let’s go step by step. To better understand the ‘mechanics’ of how the strong and weak interactions are being modeled in physics, most textbooks – including Aitchison and Hey, which we’ll follow here – start by explaining the original ideas as developed by the Japanese physicist Hideki Yukawa, who received a Nobel Prize for his work in 1949.

So what is it all about? As said, the ideas – or the model as such, so to speak – are more important than Yukawa’s original application, which was to model the force between a proton and a neutron. Indeed, we now explain such a force as a force between quarks, and the force carrier is the gluon, which carries the so-called color charge. To be precise, the force between protons and neutrons – i.e. the so-called nuclear force – is now considered to be a rather minor residual force: it’s just what’s left of the actual strong force that binds quarks together. The Wikipedia article on this has some good text and a really nice animation on it. But… Well… Again, note that we are only interested in the model right now. So what does that look like?

First, we’ve got the equivalent of the electric charge: the nucleon is supposed to have some ‘strong’ charge, which we’ll write as gs. Now you know the formulas for the potential energy – because of the gravitational force – between two masses, or the potential energy between two charges – because of the electrostatic force. Let me jot them down once again:

  1. U(r) = −G·M·m/r
  2. U(r) = (1/4πε₀)·q₁·q₂/r

The two formulas are structurally the same. They both assume U = 0 for r → ∞. Therefore, U(r) is always negative. [Just think of q₁ and q₂ as opposite charges, so the minus sign is not explicit – but it is there!] We know the U(r) curve will look like the one below: some work (force times distance) is needed to move the two charges some distance away from each other – from point 1 to point 2, for example. [The distance r is x here – but you got that, right?]

[Graph: potential energy as a function of distance]

Now, physics textbooks – or other articles you might find, like on Wikipedia – will sometimes mention that the strong force is non-linear, but that’s very confusing because… Well… The electromagnetic force – or the gravitational force – isn’t linear either: its strength is inversely proportional to the square of the distance and – as you can see from the formulas for the potential energy – that 1/r factor isn’t linear either. So that isn’t very helpful. In order to further the discussion, I should now write down Yukawa’s hypothetical formula for the potential energy between a neutron and a proton, which we’ll refer to, logically, as the n-p potential:

U(r) = −(gs²/4π)·e^(−r/a)/r

The −gs² factor is, obviously, the equivalent of the q₁·q₂ product: think of the proton and the neutron having equal but opposite ‘strong’ charges. The 1/4π factor reminds us of the Coulomb constant: ke = 1/4πε₀. Note this constant ensures the physical dimensions of both sides of the equation make sense: the dimension of ke is N·m²/C², so U(r) is – as we’d expect – expressed in newton·meter, or joule. We’ll leave the question of the units for gs open – for the time being, that is. [As for the 1/4π factor, I am not sure why Yukawa put it there. My best guess is that he wanted to remind us some constant should be there to ensure the units come out alright.]

So, when everything is said and done, the big new thing is the e^(−r/a)/r factor, which replaces the usual 1/r dependence on distance. Needless to say, e is Euler’s number here – not the electric charge. The two green curves below show what the e^(−r/a) factor does to the classical 1/r function for a = 1 and a = 0.1 respectively: smaller values for a ensure the curve approaches zero more rapidly. For a = 1, e^(−r/a)/r is equal to 0.368 for r = 1, is still about 0.0046 for r = 4, and only then becomes insignificant. In contrast, for a = 0.1, e^(−r/a)/r is already down to about 0.0135 for r = 0.5, and to a mere 4.5×10⁻⁵ for r = 1: it rapidly goes to zero for all values greater than that.

[Graphs: e^(−r/a)/r versus 1/r, for a = 1 and a = 0.1]

Aitchison and Hey call a, therefore, a range parameter: it effectively defines the range in which the n-p potential has a significant value: outside of that range, its value is, for all practical purposes, (close to) zero. Experimentally, this range was established as being more or less equal to r ≈ 2 fm. Needless to say, while this range factor may do its job, it’s obvious Yukawa’s formula for the n-p potential comes across as being somewhat random: what’s the theory behind it? There’s none, really. It makes one think of the logistic function: the logistic function fits many statistical patterns, but it is (usually) not obvious why.
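You can easily reproduce those green curves yourself. A minimal sketch of the range factor in Python:

```python
import math

def yukawa_factor(r, a):
    """The e^(-r/a)/r range factor in Yukawa's n-p potential."""
    return math.exp(-r / a) / r

# For a = 1, the factor is still significant well beyond r = 1...
print(yukawa_factor(1, 1))   # ≈ 0.368
print(yukawa_factor(4, 1))   # ≈ 0.00458
# ...while for a = 0.1, it has all but vanished at r = 1 already.
print(yukawa_factor(0.5, 0.1))  # ≈ 0.0135
print(yukawa_factor(1, 0.1))    # ≈ 4.5e-05
```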

Next in Yukawa’s argument is the establishment of an equivalent, for the nuclear force, of the Poisson equation in electrostatics: using the E = −∇Φ formula, we can re-write Maxwell’s ∇·E = ρ/ε₀ equation (aka Gauss’s Law) as ∇·E = −∇·∇Φ = −∇²Φ ⇔ ∇²Φ = −ρ/ε₀ indeed. The divergence operator – the ∇· operator – gives us the volume density of the flux of E out of an infinitesimal volume around a given point. [You may want to check one of my posts on this. The formula becomes somewhat more obvious if we re-write it as (∇·E)·dV = (ρ·dV)/ε₀: (∇·E)·dV is then, quite simply, the flux of E out of the infinitesimally small volume dV, and the right-hand side of the equation says this is given by the product of the charge inside (ρ·dV) and 1/ε₀, which accounts for the permittivity of the medium (which is the vacuum in this case).] Of course, you will also remember the ∇Φ notation: ∇Φ is just the gradient (or vector derivative) of the (scalar) potential Φ, i.e. the electric (or electrostatic) potential in a space around that infinitesimally small volume with charge density ρ. So… Well… The Poisson equation is probably not so obvious as it seems at first (again, check my post on it for more detail) and, yes, that ∇· operator – the divergence operator – is a pretty impressive mathematical beast. However, I must assume you master this topic and move on. So… Well… I must now give you the equivalent of Poisson’s equation for the nuclear force. It’s written like this:

(∇² − 1/a²)·U(r) = gs²·δ(r)

What the heck? Relax. To derive this equation, we’d need to take a pretty complicated détour, which we won’t do. [See Appendix G of Aitchison and Hey if you’d want the details.] Let me just point out the basics:

1. The Laplace operator (∇²) is replaced by one that’s nearly the same: ∇² − 1/a². And it operates on the same concept: a potential, which is a (scalar) function of the position r. Hence, U(r) is just the equivalent of Φ.

2. The right-hand side of the equation involves Dirac’s delta function. Now that’s a weird mathematical beast. Its definition seems to defy what I refer to as the ‘continuum assumption’ in math. I wrote a few things about it in one of my posts on Schrödinger’s equation – and I could give you its formula – but that won’t help you very much. It’s just a weird thing. As Aitchison and Hey write, you should just think of the whole expression as a finite-range analogue of Poisson’s equation in electrostatics. So it’s only for extremely small r that the whole equation makes sense. Outside of the range defined by our range parameter a, the whole equation just reduces to 0 = 0 – for all practical purposes, at least.
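For what it’s worth, we can check numerically – away from the origin, where the delta function sits – that a Yukawa-type potential indeed satisfies this finite-range equation. The sketch below uses the radial form of the Laplacian for a spherically symmetric function, ∇²U = (1/r)·d²(r·U)/dr², and purely illustrative values for gs² and a (my choice, not anything physical):

```python
import math

gs2, a = 1.0, 2.0   # illustrative 'strong charge' squared and range parameter

def U(r):
    """Yukawa-type n-p potential: -(gs2/4π)·e^(-r/a)/r."""
    return -(gs2 / (4 * math.pi)) * math.exp(-r / a) / r

def laplacian_U(r, dr=1e-4):
    """Radial Laplacian: for spherical symmetry, ∇²U = (1/r)·d²(r·U)/dr²."""
    h = lambda x: x * U(x)
    return (h(r + dr) - 2 * h(r) + h(r - dr)) / dr**2 / r

# Away from r = 0 (where the delta function lives), (∇² − 1/a²)·U should vanish.
for r in (0.5, 1.0, 3.0):
    print(laplacian_U(r) - U(r) / a**2)   # ≈ 0 in each case
```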

Now, of course, you know that the neutron and the proton are not supposed to just sit there. They’re also in this sort of intricate dance which – for the electron case – is described by some wavefunction, which we derive as a solution from Schrödinger’s equation. So U(r) is going to vary not only in space but also in time and we should, therefore, write it as U(r, t). Now, we will, of course, assume it’s going to vary in space and time as some wave and we may, therefore, suggest some wave equation for it. To appreciate this point, you should review some of the posts I did on waves. More in particular, you may want to review the post I did on traveling fields, in which I showed you the following: if we see an equation like:

∂²ψ/∂x² = (1/c²)·∂²ψ/∂t²

then the function ψ(x, t) must have the following general functional form:

ψ(x, t) = F(x − c·t) + G(x + c·t)

Any function ψ like that will work – so it will be a solution to the differential equation – and we’ll refer to it as a wavefunction. Now, the equation (and the function) is for a wave traveling in one dimension only (x) but the same post shows we can easily generalize to waves traveling in three dimensions. In addition, we may generalize the analysis to include complex-valued functions as well. Now, you will still be shocked by Yukawa’s field equation for U(r, t) but, hopefully, somewhat less so after the above reminder of what wave equations generally look like:

(∇² − 1/a² − (1/c²)·∂²/∂t²)·U(r, t) = 0

As said, you can look up the nitty-gritty in Aitchison and Hey (or in its appendices) but, up to this point, you should be able to sort of appreciate what’s going on without getting lost in it all. Yukawa’s next step – and all that follows – is much more baffling. We’d think U, the nuclear potential, is just some scalar-valued wave, right? It varies in space and in time, but… Well… That’s what classical waves, like water or sound waves, for example, do too. So far, so good. However, Yukawa’s next step is to associate a de Broglie-type wavefunction with it.
Hence, Yukawa imposes solutions of the type:

U(r, t) ∼ e^(i·(p·r − E·t)/ℏ)

What? Yes. It’s a big thing to swallow, and it doesn’t help that most physicists refer to U as a force field. A force and the potential that results from it are two different things. To put it simply: the force on an object is not the same as the work you need to move it from here to there. Force and potential are related but different concepts. Having said that, it sort of makes sense now, doesn’t it? If potential is energy, and if it behaves like some wave, then we must be able to associate it with a de Broglie-type particle. This U-quantum, as it is referred to, comes in two varieties, which are associated with the ongoing absorption-emission process that is supposed to take place inside of the nucleus (depicted below):

p + U⁻ → n and n + U⁺ → p

[Diagram: absorption and emission of the U-quantum]

It’s easy to see that the U⁻ and U⁺ particles are just each other’s anti-particle. When thinking about this, I can’t help remembering Feynman, when he enigmatically wrote – somewhere in his Strange Theory of Light and Matter – that an anti-particle might just be the same particle traveling back in time. In fact, the exchange here is supposed to happen within a time window that is so short it allows for a brief violation of the energy conservation principle.

Let’s be more precise and try to find the properties of that mysterious U-quantum. You’ll need to refresh what you know about operators to understand how substituting Yukawa’s de Broglie wavefunction in the complicated-looking differential equation (the wave equation) gives us the following relation between the energy and the momentum of our new particle:

E² = p²·c² + ℏ²·c²/a²

Now, it doesn’t take too many gimmicks to compare this against the relativistically correct energy-momentum relation:

E² = p²·c² + m₀²·c⁴

Combining both gives us the associated (rest) mass of the U-quantum:

mU = ℏ/(a·c)

For a ≈ 2 fm, mU is about 100 MeV. Of course, it’s always good to check the dimensions and calculate stuff yourself. Note the physical dimension of ℏ/(a·c) is N·s²/m = kg (just think of the F = m·a formula). Also note that N·s²/m = kg = (N·m)·s²/m² = J/(m²/s²), so that’s the [E]/[c²] dimension. The calculation – and interpretation – is somewhat tricky though: if you do it, you’ll find that:

ℏ/(a·c) ≈ (1.0545718×10⁻³⁴ N·m·s)/[(2×10⁻¹⁵ m)·(2.99792458×10⁸ m/s)] ≈ 0.176×10⁻²⁷ kg

Now, most physics handbooks continue that terrible habit of writing particle masses in eV, rather than using the correct eV/c² unit. So when they write: mU is about 100 MeV, they actually mean to say that it’s 100 MeV/c². In addition, the eV is not an SI unit. Hence, to get that number, we should first write 0.176×10⁻²⁷ kg as some value expressed in J/c², and then convert the joule (J) into electronvolt (eV). Let’s do that. First, note that c² ≈ 9×10¹⁶ m²/s², so 0.176×10⁻²⁷ kg ≈ 1.584×10⁻¹¹ J/c². Now we do the conversion from joule to electronvolt. We get: (1.584×10⁻¹¹ J/c²)·(6.241509×10¹⁸ eV/J) ≈ 9.9×10⁷ eV/c² = 99 MeV/c². Bingo! So that was Yukawa’s prediction for the nuclear force quantum.
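Here is that same unit gymnastics done in Python – just a sanity check, using the standard values for ℏ, c and the electronvolt:

```python
hbar = 1.0545718e-34   # reduced Planck constant, J·s (= N·m·s)
c = 2.99792458e8       # speed of light, m/s
a = 2e-15              # range parameter: 2 femtometer
eV = 1.602176634e-19   # joule per electronvolt

m_kg = hbar / (a * c)            # rest mass of the U-quantum, in kg
m_MeV = m_kg * c**2 / eV / 1e6   # the same mass, expressed in MeV/c2

print(m_kg)    # ≈ 0.176e-27 kg
print(m_MeV)   # ≈ 99 MeV/c2
```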

Of course, Yukawa was wrong but, as mentioned above, his ideas are now generally accepted. First note the mass of the U-quantum is quite considerable: 100 MeV/c² is a bit more than 10% of the individual proton or neutron mass (about 938–939 MeV/c²). While the binding energy causes the mass of an atom to be less than the mass of its constituent parts (protons, neutrons and electrons), it’s quite remarkable that the deuterium atom – a hydrogen atom with an extra neutron – has an excess mass of about 13.1 MeV/c², and a binding energy with an equivalent mass of only 2.2 MeV/c². So… Well… There’s something there.

As said, this post only wanted to introduce some basic ideas. The current model of nuclear physics is represented by the animation below, which I took from the Wikipedia article on it. The U-quantum appears as the pion here – and it does not really turn the proton into a neutron and vice versa. Those particles are assumed to be stable. In contrast, it is the quarks that change color by exchanging gluons between each other. And we now look at the exchange particle – which we refer to as the pion – between the proton and the neutron as consisting of two quarks in its own right: a quark and an anti-quark. So… Yes… All weird. QCD is just a different world. We’ll explore it more in the coming days and/or weeks. 🙂

[Animation: Nuclear_Force_anim_smaller]

An alternative – and simpler – way of representing this exchange of a virtual particle (a neutral pion in this case) is obtained by drawing a so-called Feynman diagram:

[Diagram: Pn_scatter_pi0]

OK. That’s it for today. More tomorrow. 🙂

Reality and perception

It’s quite easy to get lost in all of the math when talking quantum mechanics. In this post, I’d like to freewheel a bit. I’ll basically try to relate the wavefunction we’ve derived for the electron orbitals to the more speculative posts I wrote on how to interpret the wavefunction. So… Well… Let’s go. 🙂

If there is one thing you should remember from all of the stuff I wrote in my previous posts, then it’s that the wavefunction for an electron orbital – ψ(x, t), so that’s a complex-valued function in two variables (position and time) – can be written as the product of two functions in one variable:

ψ(x, t) = e^(−i·(E/ℏ)·t)·f(x)

In fact, we wrote f(x) as ψ(x), but I told you how confusing that is: the ψ(x) and ψ(x, t) functions are, obviously, very different. To be precise, the f(x) = ψ(x) function basically provides some envelope for the two-dimensional e^(iθ) = e^(−i·(E/ℏ)·t) = cosθ + i·sinθ oscillation – as depicted below (θ = −(E/ℏ)·t = ω·t with ω = −E/ℏ).

[Animation: Circle_cos_sin]

When analyzing this animation – look at the movement of the green, red and blue dots respectively – one cannot miss the equivalence between this oscillation and the movement of a mass on a spring – as depicted below.

[Animation: spiral_s]

The e^(−i·(E/ℏ)·t) function just gives us two springs for the price of one. 🙂 Now, you may want to imagine some kind of elastic medium – Feynman’s famous drum-head, perhaps 🙂 – and you may also want to think of all of this in terms of superimposed waves but… Well… I’d need to review if that’s really relevant to what we’re discussing here, so I’d rather not make things too complicated and stick to basics.

First note that the amplitude of the two linear oscillations above is normalized: the maximum displacement of the object from equilibrium, in the positive or negative direction, which we may denote by x = ±A, is equal to one. Hence, the energy formula is just the sum of the potential and kinetic energy: T + U = (1/2)·A²·m·ω² = (1/2)·m·ω². But so we have two springs and, therefore, the energy in this two-dimensional oscillation is equal to E = 2·(1/2)·m·ω² = m·ω².
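A quick numerical sketch may help here. Assuming two perpendicular springs – x = cos(ω·t) and y = sin(ω·t), amplitude A = 1, and k = m·ω² for each – the total energy is indeed constant and equal to m·ω² (the values for m and ω below are arbitrary, just for illustration):

```python
import math

m, omega = 1.0, 2.0   # illustrative mass and angular frequency
k = m * omega**2      # spring constant for each of the two springs

def total_energy(t):
    """Kinetic plus potential energy of the two-spring oscillation."""
    x, y = math.cos(omega * t), math.sin(omega * t)                     # positions
    vx, vy = -omega * math.sin(omega * t), omega * math.cos(omega * t)  # velocities
    return 0.5 * m * (vx**2 + vy**2) + 0.5 * k * (x**2 + y**2)

for t in (0.0, 0.3, 1.7):
    print(total_energy(t))   # = m·ω² = 4.0 at every instant
```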

This formula is structurally similar to Einstein’s E = m·c² formula. Hence, one may want to assume that the energy of some particle (an electron, in our case, because we’re discussing electron orbitals here) is just the two-dimensional motion of its mass. To put it differently, we might also want to think that the oscillating real and imaginary component of our wavefunction each store one half of the total energy of our particle.

However, the interpretation of this rather bold statement is not so straightforward. First, you should note that the ω in the E = m·ω² formula is an angular velocity, as opposed to the c in the E = m·c² formula, which is a linear velocity. Angular velocities are expressed in radians per second, while linear velocities are expressed in meter per second. However, while the radian measures an angle, we know it does so by measuring a length. Hence, if our distance unit is 1 m, an angle of 2π rad will correspond to a length of 2π meter, i.e. the circumference of the unit circle. So… Well… The two velocities may not be so different after all.

There are other questions here. In fact, the other questions are probably more relevant. First, we should note that the ω in the E = m·ω² formula can take on any value. For a mechanical spring, ω will be a function of (1) the stiffness of the spring (which we usually denote by k, and which is typically measured in newton (N) per meter) and (2) the mass (m) on the spring. To be precise, we write: ω² = k/m – or, what amounts to the same, ω = √(k/m). Both k and m are variables and, therefore, ω can really be anything. In contrast, we know that c is a constant: c equals 299,792,458 meter per second, to be precise. So we have this rather remarkable expression: c = √(E/m), and it is valid for any particle – our electron, or the proton at the center, or our hydrogen atom as a whole. It is also valid for more complicated atoms, of course. In fact, it is valid for any system.
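That c = √(E/m) expression is easy to verify for the electron, using the standard values for its rest energy and rest mass:

```python
import math

E_electron = 8.18710565e-14   # electron rest energy, in joule (≈ 0.511 MeV)
m_electron = 9.1093837e-31    # electron rest mass, in kg

v = math.sqrt(E_electron / m_electron)
print(v)   # ≈ 2.998e8 m/s – the speed of light
```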

Hence, we need to take another look at the energy concept that is used in our ψ(x, t) = e^(−i·(E/ℏ)·t)·f(x) wavefunction. You’ll remember (if not, you should) that the E here is equal to En = −13.6 eV, −3.4 eV, −1.5 eV and so on, for n = 1, 2, 3, etc. Hence, this energy concept is rather particular. As Feynman puts it: “The energies are negative because we picked our zero point as the energy of an electron located far from the proton. When it is close to the proton, its energy is less, so somewhat below zero. The energy is lowest (most negative) for n = 1, and increases toward zero with increasing n.”

Now, this is the one and only issue I have with the standard physics story. I mentioned it in one of my previous posts and, just for clarity, let me copy what I wrote at the time:

Feynman gives us a rather casual explanation [on choosing a zero point for measuring energy] in one of his very first Lectures on quantum mechanics, where he writes the following: “If we have a “condition” which is a mixture of two different states with different energies, then the amplitude for each of the two states will vary with time according to an equation like a·e^(−iωt), with ℏ·ω = E = m·c². Hence, we can write the amplitude for the two states, for example, as:

e^(−i(E₁/ℏ)·t) and e^(−i(E₂/ℏ)·t)

And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn’t make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount—say, by the amount A—then the amplitudes in the two states would, from his point of view, be:

e^(−i(E₁+A)·t/ℏ) and e^(−i(E₂+A)·t/ℏ)

All of his amplitudes would be multiplied by the same factor e^(−i(A/ℏ)·t), and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want. For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren’t relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy Ms·c², where Ms is the mass of all the separate pieces—the nucleus and the electrons—which is, of course, different from the mass of the atom. For other problems, it may be useful to subtract from all energies the amount Mg·c², where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn’t make any difference, provided we shift all the energies in a particular calculation by the same constant.”

It’s a rather long quotation, but it’s important. The key phrase here is, obviously, the following: “For other problems, it may be useful to subtract from all energies the amount Mg·c², where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom.” So that’s what he’s doing when solving Schrödinger’s equation. However, I should make the following point here: if we shift the origin of our energy scale, it does not make any difference in regard to the probabilities we calculate, but it obviously does make a difference in terms of our wavefunction itself. To be precise, its density in time will be very different. Hence, if we’d want to give the wavefunction some physical meaning – which is what I’ve been trying to do all along – it does make a huge difference. When we leave the rest mass of all of the pieces in our system out, we can no longer pretend we capture their energy.

So… Well… There you go. If we’d want to try to interpret our ψ(x, t) = e^(−i·(En/ℏ)·t)·f(x) function as a two-dimensional oscillation of the mass of our electron, the energy concept in it – so that’s the En in it – should include all pieces. Most notably, it should also include the electron’s rest energy, i.e. its energy when it is not in a bound state. This rest energy is equal to 0.511 MeV. […] Read this again: 0.511 mega-electronvolt (10⁶ eV), so that’s huge as compared to the tiny energy values we mentioned so far (−13.6 eV, −3.4 eV, −1.5 eV,…).

Of course, this gives us a rather phenomenal order of magnitude for the oscillation that we’re looking at. Let’s quickly calculate it. We need to convert to SI units, of course: 0.511 MeV is about 8.2×10⁻¹⁴ joule (J), and so the associated frequency is equal to ν = E/h = (8.2×10⁻¹⁴ J)/(6.626×10⁻³⁴ J·s) ≈ 1.23559×10²⁰ cycles per second. Now, I know such a number doesn’t say all that much: just note it’s the same order of magnitude as the frequency of gamma rays and… Well… No. I won’t say more. You should try to think about this for yourself. [If you do, think – for starters – about the difference between bosons and fermions: matter-particles are fermions, and photons are bosons. Their nature is very different.]

The corresponding angular frequency is just the same number but multiplied by 2π (one cycle corresponds to 2π radians and, hence, ω = 2π·ν ≈ 7.76344×10²⁰ rad per second). Now, if our green dot were moving around the origin, along the circumference of our unit circle, then its horizontal and/or vertical velocity would approach the same value. Think of it. We have this e^(iθ) = e^(−i·(E/ℏ)·t) = e^(i·ω·t) = cos(ω·t) + i·sin(ω·t) function, with ω = −E/ℏ. So the cos(ω·t) captures the motion along the horizontal axis, while the sin(ω·t) function captures the motion along the vertical axis. Now, the velocity along the horizontal axis as a function of time is given by the following formula:

v(t) = d[x(t)]/dt = d[cos(ω·t)]/dt = −ω·sin(ω·t)

Likewise, the velocity along the vertical axis is given by v(t) = d[sin(ω·t)]/dt = ω·cos(ω·t). These are interesting formulas: they show the velocity (v) along either of the two axes can never exceed the angular velocity (ω). To be precise, the velocity v reaches – in absolute value – the angular velocity ω when ω·t is equal to 0, π/2, π or 3π/2: the horizontal velocity peaks when the vertical velocity is zero, and vice versa. So… Well… For a circle with a radius of 1 m, that would be 7.76344×10²⁰ meter per second!? That’s like 2.6 trillion times the speed of light. So that’s not possible, of course!
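For the record, here is that frequency arithmetic in Python – including the superluminal factor for a hypothetical circle with a radius of 1 m:

```python
import math

h = 6.62607015e-34   # Planck constant, J·s
E = 8.187e-14        # electron rest energy, J (≈ 0.511 MeV)
c = 2.99792458e8     # speed of light, m/s

nu = E / h                # ≈ 1.2356e20 cycles per second
omega = 2 * math.pi * nu  # ≈ 7.763e20 rad/s

# If the green dot moved on a circle with a radius of 1 m, its peak speed
# (ω times the radius) would exceed the speed of light by a factor of...
print(omega * 1.0 / c)   # ≈ 2.6e12, i.e. some 2.6 trillion
```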

That’s where the amplitude of our wavefunction comes in – our envelope function f(x): the green dot does not move along the unit circle. The circle is much tinier and, hence, the oscillation should not exceed the speed of light. In fact, I should probably try to prove it oscillates at the speed of light, thereby respecting Einstein’s universal formula:

c = √(E/m)

Written like this – rather than as you know it: E = m·c² – this formula shows the speed of light is just a property of spacetime, just like the ω = √(k/m) formula (or the ω = √(1/LC) formula for a resonant AC circuit) shows that ω, the natural frequency of our oscillator, is a characteristic of the system.

Am I absolutely certain of what I am writing here? No. My level of understanding of physics is still that of an undergrad. But… Well… It all makes a lot of sense, doesn’t it? 🙂

Now, I said there were a few obvious questions, and so far I have answered only one. The other obvious question is why energy would appear to us as mass in motion in two dimensions only. Why is it an oscillation in a plane? We might imagine a third spring, so to speak, moving toward and away from us, right? Also, energy densities are measured per unit volume, right?

Now that’s a clever question, and I must admit I can’t answer it right now. However, I do suspect it’s got to do with the fact that the wavefunction depends on the orientation of our reference frame. If we rotate it, it changes. So it’s like we’ve lost one degree of freedom already, so only two are left. Or think of the third direction as the direction of propagation of the wave. 🙂 Also, we should re-read what we wrote about the Poynting vector for the matter wave, or what Feynman wrote about probability currents. Let me give you some appetite for that by noting that we can re-write joule per cubic meter (J/m³) as newton per square meter: J/m³ = N·m/m³ = N/m². [Remember: the unit of energy is force times distance. In fact, looking at Einstein’s formula, I’d say it’s kg·m²/s² (mass times a squared velocity), but that simplifies to the same: kg·m²/s² = [N/(m/s²)]·m²/s² = N·m.]

I should probably also remind you that there is no three-dimensional equivalent of Euler’s formula, and the way the kinetic and potential energy of those two oscillations work together is rather unique. Remember I illustrated it with the image of a V-2 engine in previous posts. There is no such thing as a V-3 engine. [Well… There actually is – but not with the third cylinder being positioned sideways.]

[Image: two-timer-576-px-photo-369911-s-original]

But… Then… Well… Perhaps we should think of some weird combination of two V-2 engines. The illustration below shows the superposition of two one-dimensional waves – I think – one traveling east-west and back, and the other one traveling north-south and back. So, yes, we may want to think of Feynman’s drum-head again – but combining two-dimensional waves – two waves that both have an imaginary as well as a real dimension.


Hmm… Not sure. If we go down this path, we’d need to add a third dimension – so we’d have a super-weird V-6 engine! As mentioned above, the wavefunction does depend on our reference frame: we’re looking at stuff from a certain direction and, therefore, we can only see what goes up and down, and what goes left or right. We can’t see what comes near and what goes away from us. Also think of the particularities involved in measuring angular momentum – or the magnetic moment of some particle. We’re measuring that along one direction only! Hence, it’s probably no use to imagine we’re looking at three waves simultaneously!

In any case… I’ll let you think about all of this. I do feel I am on to something. I am convinced that my interpretation of the wavefunction as an energy propagation mechanism, or as energy itself – as a two-dimensional oscillation of mass – makes sense. 🙂

Of course, I haven’t answered one key question here: what is mass? What is that green dot – in reality, that is? At this point, we can only waffle – probably best to just give its standard definition: mass is a measure of inertia. A resistance to acceleration or deceleration, or to changing direction. But that doesn’t say much. I hate to say that – in many ways – all that I’ve learned so far has deepened the mystery, rather than solved it. The more we understand, the less we understand? But… Well… That’s all for today, folks! Have fun working through it for yourself. 🙂

Post scriptum: I’ve simplified the wavefunction a bit. As I noted in my post on it, the complex exponential is actually equal to e^(−i·[(E/ℏ)·t − m·φ]), so we’ve got a phase shift because of m, the quantum number which denotes the z-component of the angular momentum. But that’s a minor detail that shouldn’t trouble or worry you here.

The periodic table

This post is, in essence, a continuation of my series on electron orbitals. I’ll just further tie up some loose ends and then – hopefully – have some time to show how we get the electron orbitals for other atoms than hydrogen. So we’ll sort of build up the periodic table. Sort of. 🙂

We should first review a bit. The illustration below copies the energy level diagram from Feynman’s Lecture on the hydrogen wave function. Note he uses √E for the energy scale because… Well… I’ve copied the En values for n = 1, 2, 3,… 7 next to it: the value for E1 (−13.6 eV) is four times the value of E2 (−3.4 eV).

exponential scale

How do we know those values? We discussed that before – a long time back: we have the so-called gross structure of the hydrogen spectrum here. The table below gives the energy values for the first seven levels, and you can calculate an example for yourself: the difference between E2 (−3.4 eV) and E4 (−0.85 eV) is 2.55 eV, so that’s 4.08555×10⁻¹⁹ J, which corresponds to a frequency equal to f = E/h = (4.08555×10⁻¹⁹ J)/(6.626×10⁻³⁴ J·s) ≈ 0.6165872×10¹⁵ Hz. Now that frequency corresponds to a wavelength that’s equal to λ = c/f = (299,792,458 m/s)/(0.6165872×10¹⁵/s) ≈ 486×10⁻⁹ m. So that’s the 486 nanometer line in the so-called Balmer series, as shown in the illustration next to the table with the energy values.
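The E2 → E4 example above is easy to reproduce with a few lines of Python – just a sketch using rounded constants, so the last digits differ slightly from the values quoted above:

```python
# Sketch: reproduce the E2 -> E4 Balmer-line example from the text.
# Assumes the Bohr formula for the gross structure: E_n = -13.6 eV / n^2.
h = 6.626e-34        # Planck's constant, J·s
c = 299_792_458      # speed of light, m/s
eV = 1.602e-19       # joule per electronvolt

def E(n):
    """Hydrogen energy level in eV (gross structure only)."""
    return -13.6 / n**2

dE = (E(4) - E(2)) * eV   # transition energy in joule
f = dE / h                # frequency, Hz
lam = c / f               # wavelength, m

print(E(2), E(4))         # -3.4 -0.85
print(round(lam * 1e9))   # 486 (nm): the Balmer line discussed above
```
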

So far, so good. An interesting point to note is that we only have one solution for n = 1. To be precise, we have one spherical solution only: the 1s solution. Now, for n = 2, we have one 2s solution but also three 2p solutions (remember the p stands for principal lines). In the simplified model we’re using (we’re not discussing the fine or hyperfine structure here), these three solutions are referred to as ‘degenerate states’: they are different states with the same energy. Now, we know that any linear combination of the solutions for a differential equation must also be a solution. Therefore, any linear combination of the 2p solutions will also be a stationary state of the same energy. In fact, a superposition of the 2s and one or more of the 2p states should also be a solution. There is an interesting app which visualizes what such superimposed states look like. I copy three illustrations below, but I recommend you google for stuff like this yourself: it’s really fascinating! You should, once again, pay attention to the symmetry planes and/or symmetry axes.

But we’ve written enough about the orbital of one electron now. What if there are two electrons, or three, or more? In other words, how does it work for helium, lithium, and so on? Feynman gives us a bit of an intuitive explanation here – nothing analytical, really. First, he notes Schrödinger’s equation for two electrons would look as follows:

two electrons

Second, the ψ(x) function in the ψ(x, t) = e^(−i·(E/ℏ)·t)·ψ(x) function now becomes a function in six variables, which he – curiously enough – now no longer writes as ψ but as f:

formula

The rest of the text speaks for itself, although you might be disappointed by what he writes (the bold-face and/or italics are mine):

“The geometrical dependence is contained in f, which is a function of six variables—the simultaneous positions of the two electrons. No one has found an analytic solution, although solutions for the lowest energy states have been obtained by numerical methods. With 3, 4, or 5 electrons it is hopeless to try to obtain exact solutions, and it is going too far to say that quantum mechanics has given a precise understanding of the periodic table. It is possible, however, even with a sloppy approximation—and some fixing—to understand, at least qualitatively, many chemical properties which show up in the periodic table.

The chemical properties of atoms are determined primarily by their lowest energy states. We can use the following approximate theory to find these states and their energies. First, we neglect the electron spin, except that we adopt the exclusion principle and say that any particular electronic state can be occupied by only one electron. This means that any particular orbital configuration can have up to two electrons—one with spin up, the other with spin down.

Next we disregard the details of the interactions between the electrons in our first approximation, and say that each electron moves in a central field which is the combined field of the nucleus and all the other electrons. For neon, which has 10 electrons, we say that one electron sees an average potential due to the nucleus plus the other nine electrons. We imagine then that in the Schrödinger equation for each electron we put a V(r) which is a 1/r field modified by a spherically symmetric charge density coming from the other electrons.

In this model each electron acts like an independent particle. The angular dependence of its wave function will be just the same as the ones we had for the hydrogen atom. There will be s-states, p-states, and so on; and they will have the various possible m-values. Since V(r) no longer goes as 1/r, the radial part of the wave functions will be somewhat different, but it will be qualitatively the same, so we will have the same radial quantum numbers, n. The energies of the states will also be somewhat different.”

So that’s rather disappointing, isn’t it? We can only get some approximate – or qualitative – understanding of the periodic table from quantum mechanics – because the math is too complex: only numerical methods can give us those orbitals! Wow! Let me list some of the salient points in Feynman’s treatment of the matter:

  • For helium (He), we have two electrons in the lowest state (i.e. the 1s state): one has its spin ‘up’ and the other is ‘down’. Because the shell is filled, the ionization energy (to remove one electron) has an even larger value than the ionization energy for hydrogen: 24.6 eV! That’s why there is “practically no tendency” for the electron to be attracted by some other atom: helium is chemically inert – which explains it being part of the group of noble or inert gases.
  • For lithium (Li), two electrons will occupy the 1s orbital, and the third should go to an n = 2 state. But which one? With l = 0, or l = 1? A 2s state or a 2p state? In hydrogen, these two n = 2 states have the same energy, but in other atoms they don’t. Why not? That’s a complicated story, but the gist of the argument is as follows: a 2s state has some amplitude to be near the nucleus, while the 2p state does not. That means that a 2s electron will feel some of the triple electric charge of the Li nucleus, and this extra attraction lowers the energy of the 2s state relative to the 2p state.

To make a long story short, the energy levels will be roughly as shown in the table below. For example, the energy that’s needed to remove the 2s electron of lithium – i.e. the ionization energy of lithium – is only 5.4 eV because… Well… As you can see, it has a higher energy (less negative, that is) than the 1s state (−13.6 eV for hydrogen and, as mentioned above, −24.6 eV for helium). So lithium is chemically active – as opposed to helium.

energy values more electrons

You should compare the table below with the table above. If you do, you’ll understand how electrons ‘fill up’ those electron shells. Note, for example, that the energy of the 4s state is slightly lower than the energy of the 3d state, so it fills up before the 3d shell does. [I know the table is hard to read – just check out the original text if you want to see it better.]

periodic table

This, then, is what you learnt in high school and, of course, there are 94 naturally occurring elements – and another 24 heavier elements that have been produced in labs – so we’d need to go all the way to no. 118. Now, Feynman doesn’t do that, and so I won’t do that either. 🙂
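By the way, the ‘fill up’ order we saw above – 4s before 3d – follows what is usually called the Madelung or n + l rule: subshells fill in order of increasing n + l and, for equal n + l, increasing n. A minimal sketch (the rule itself is only an approximation, and I am just generating subshell labels here):

```python
# Sketch of the Madelung (n + l) rule implied by the table above:
# sort subshells by n + l, then by n. This is why 4s (n + l = 4)
# fills before 3d (n + l = 5).
letters = "spdf"

# All subshells (n, l) with n = 1..4 and l = 0..n-1.
subshells = [(n, l) for n in range(1, 5) for l in range(n)]
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

print([f"{n}{letters[l]}" for n, l in order])
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```
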

Well… That’s it, folks. We’re done with Feynman. It’s time to move to a physics grad course now! Talk about stuff like quantum field theory, for example. Or string theory. 🙂 Stay tuned!

Re-visiting electron orbitals (III)

In my previous post, I mentioned that it was not so obvious (both from a physical as well as from a mathematical point of view) to write the wavefunction for electron orbitals – which we denoted as ψ(x, t), i.e. a function of two variables (or four: one time coordinate and three space coordinates) – as the product of two other functions in one variable only.

[…] OK. The above sentence is difficult to read. Let me write it in math. 🙂 It is not so obvious to write ψ(x, t) as:

ψ(x, t) = e^(−i·(E/ℏ)·t)·ψ(x)

As I mentioned before, the physicists’ use of the same symbol (ψ, psi) for both the ψ(x, t) and ψ(x) function is quite confusing – because the two functions are very different:

  • ψ(x, t) is a complex-valued function of two (real) variables: x and t. Or four, I should say, because x = (x, y, z) – but it’s probably easier to think of x as one vector variable – a vector-valued argument, so to speak. And then t is, of course, just a scalar variable. So… Well… A function of two variables: the position in space (x), and time (t).
  • In contrast, ψ(x) is a real-valued function of one (vector) variable only: x, so that’s the position in space only.

Now you should cry foul, of course: ψ(x) is not necessarily real-valued. It may be complex-valued. You’re right. You know the formula:

wavefunction

Note the derivation of this formula involved a switch from Cartesian to polar coordinates here, so from x = (x, y, z) to r = (r, θ, φ), and that the function is also a function of the two quantum numbers l and m now, i.e. the orbital angular momentum (l) and its z-component (m) respectively. In my previous post(s), I gave you the formulas for Yl,m(θ, φ) and Fl,m(r) respectively. Fl,m(r) was a real-valued function alright, but the Yl,m(θ, φ) had that e^(i·m·φ) factor in it. So… Yes. You’re right: the Yl,m(θ, φ) function is real-valued if – and only if – m = 0, in which case e^(i·m·φ) = 1. Let me copy the table from Feynman’s treatment of the topic once again:

spherical harmonics 2

The Pl,m(cosθ) functions are the so-called (associated) Legendre polynomials, and the formula for these functions is rather horrible:

Legendre polynomial

Don’t worry about it too much: just note the Pl,m(cosθ) is a real-valued function. The point is the following: the ψ(x, t) is a complex-valued function because – and only because – we multiply a real-valued envelope function – which depends on position only – with e^(−i·(E/ℏ)·t)·e^(i·m·φ) = e^(−i·[(E/ℏ)·t − m·φ]).


Please read the above once again and – more importantly – think about it for a while. 🙂 You’ll have to agree with the following:

  • As mentioned in my previous post, the e^(i·m·φ) factor just gives us a phase shift: just a re-set of our zero point for measuring time, so to speak, and the whole e^(−i·[(E/ℏ)·t − m·φ]) factor just disappears when we’re calculating probabilities.
  • The envelope function gives us the basic amplitude – in the classical sense of the word: the maximum displacement from the zero value. And so it’s that e^(−i·[(E/ℏ)·t − m·φ]) factor that ensures the whole expression somehow captures the energy of the oscillation.

Let’s first look at the envelope function again. Let me copy the illustration for n = 5 and l = 2 from a Wikimedia Commons article. Note the symmetry planes:

  • Any plane containing the z-axis is a symmetry plane – like a mirror in which we can reflect one half of the shape to get the other half. [Note that I am talking about the shape only here. Forget about the colors for a while – as these reflect the complex phase of the wavefunction.]
  • Likewise, the plane containing both the x- and the y-axis is a symmetry plane as well.

n = 5

The first symmetry plane – or symmetry line, really (i.e. the z-axis) – should not surprise us, because the azimuthal angle φ is conspicuously absent in the formula for our envelope function if, as we are doing in this article here, we merge the e^(i·m·φ) factor with the e^(−i·(E/ℏ)·t) factor, so it’s just part and parcel of what the author of the illustrations above refers to as the ‘complex phase’ of our wavefunction. OK. Clear enough – I hope. 🙂 But why is the xy-plane a symmetry plane too? We need to look at that monstrous formula for the Pl,m(cosθ) function here: just note the cosθ argument in it is being squared before it’s used in all of the other manipulations. Now, we know that cosθ = sin(π/2 − θ). So we can define some new angle – let’s just call it α – which is measured in the way we’re used to measuring angles, which is not from the z-axis but from the xy-plane. So we write: cosθ = sin(π/2 − θ) = sinα. The illustration below may or may not help you to see what we’re doing here.

angle 2

So… To make a long story short, we can substitute the cosθ argument in the Pl,m(cosθ) function for sinα = sin(π/2 − θ). Now, if the xy-plane is a symmetry plane, then we must find the same value for Pl,m(sinα) and Pl,m[sin(−α)]. Now, that’s not obvious, because sin(−α) = −sinα ≠ sinα. However, because the argument in that Pl,m(x) function is being squared before any other operation (like subtracting 1 and exponentiating the result), it is OK: [−sinα]² = [sinα]² = sin²α. […] OK, I am sure the geeks amongst my readers will be able to explain this more rigorously. In fact, I hope they’ll have a look at it, because there’s also that d^(l+m)/dx^(l+m) operator, and so you should check what happens with the minus sign there. 🙂
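For those who would rather check the xy-plane symmetry numerically than algebraically, here is a minimal sketch. I hard-code the l = 2 associated Legendre polynomials by hand (so the polynomials themselves are my input, not derived from the general formula), and check that the squared value is the same for θ and π − θ, even when the polynomial itself flips sign:

```python
import math

# Hand-coded associated Legendre polynomials for l = 2, m = 0, 1, 2.
# Parity is (-1)^(l+m), so P2,1 flips sign under x -> -x, but the
# *squared* value - which is what enters the probability - does not.
P2 = {
    0: lambda x: (3 * x**2 - 1) / 2,
    1: lambda x: -3 * x * math.sqrt(1 - x**2),
    2: lambda x: 3 * (1 - x**2),
}

theta = 0.7  # some arbitrary polar angle
for m, P in P2.items():
    a = P(math.cos(theta)) ** 2
    b = P(math.cos(math.pi - theta)) ** 2  # mirror image in the xy-plane
    assert math.isclose(a, b), (m, a, b)
print("xy-plane symmetry of |P2,m(cos theta)|^2 verified")
```
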

[…] Well… By now, you’re probably totally lost, but the fact of the matter is that we’ve got a beautiful result here. Let me highlight the most significant results:

  • A definite energy state of a hydrogen atom (or of an electron orbiting around some nucleus, I should say) appears to us as some beautifully shaped orbital – an envelope function in three dimensions, really – which has the z-axis – i.e. the vertical axis – as a symmetry line and the xy-plane as a symmetry plane.
  • The e^(−i·[(E/ℏ)·t − m·φ]) factor gives us the oscillation within the envelope function. As such, it’s this factor that, somehow, captures the energy of the oscillation.

It’s worth thinking about this. Look at the geometry of the situation again – as depicted below. We’re looking at the situation along the x-axis, in the direction of the origin, which is the nucleus of our atom.


The e^(i·m·φ) factor just gives us a phase shift: just a re-set of our zero point for measuring time, so to speak. Interesting, weird – but probably less relevant than the e^(−i·(E/ℏ)·t) factor, which gives us the two-dimensional oscillation that captures the energy of the state.


Now, the obvious question is: the oscillation of what, exactly? I am not quite sure but – as I explained on my Deep Blue page – the real and imaginary part of our wavefunction are really like the electric and magnetic field vector of an oscillating electromagnetic field (think of electromagnetic radiation – if that makes it easier). Hence, just like the electric and magnetic field vector represent some rapidly changing force on a unit charge, the real and imaginary part of our wavefunction must also represent some rapidly changing force on… Well… I am not quite sure on what though. The unit charge is usually defined as the charge of a proton – rather than an electron – but then forces act on some mass, right? And the mass of a proton is hugely different from the mass of an electron. The same electric (or magnetic) force will, therefore, give a hugely different acceleration to both.

So… Well… My gut instinct tells me the real and imaginary part of our wavefunction just represent, somehow, a rapidly changing force on some unit of mass, but then I am not sure how to define that unit right now (it’s probably not the kilogram!).

Now, there is another thing we should note here: we’re actually sort of de-constructing a rotation (look at the illustration above once again) into two linearly oscillating vectors – one along the z-axis and the other along the y-axis. Hence, in essence, we’re actually talking about something that’s spinning. In other words, we’re actually talking about some torque around the x-axis. In what direction? I think that shouldn’t matter – that we can write E or −E, in other words, but… Well… I need to explore this further – as should you! 🙂

Let me just add one more note on the e^(i·m·φ) factor. It sort of defines the geometry of the complex phase itself. Look at the illustration below. Click on it to enlarge it if necessary – or, better still, visit the magnificent Wikimedia Commons article from which I get these illustrations. These are the orbitals for n = 4 and l = 3. Look at the red hues in particular – or the blue – whatever: focus on one color only, and see how – for m = ±1 – we’ve got one appearance of that color only. For m = ±2, the same color appears at two ends of the ‘tubes’ – or tori (plural of torus), I should say – just to sound more professional. 🙂 For m = ±3, the torus consists of three parts – or, in mathematical terms, we’d say the order of its rotational symmetry is equal to 3. Check that Wikimedia Commons article for higher values of n and l: the shapes become very convoluted, but the observation holds. 🙂
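That repetition of the colors is just the periodicity of the e^(i·m·φ) factor: rotating by 2π/m adds a full 2π to the argument of the exponential, so the phase – and hence the color – comes back to where it was. A quick numerical sketch:

```python
import cmath
import math

# The phase of e^(i·m·phi) repeats m times per full turn around the z-axis,
# which shows up as the m-fold rotational symmetry of the colors on the tori.
def phase_factor(m, phi):
    return cmath.exp(1j * m * phi)

m = 3
phi = 0.4                 # some arbitrary azimuthal angle
step = 2 * math.pi / m    # rotating by 2*pi/m reproduces the same phase
z1 = phase_factor(m, phi)
z2 = phase_factor(m, phi + step)
assert abs(z1 - z2) < 1e-12
print(f"e^(i·m·phi) has {m}-fold rotational symmetry")
```
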

l = 3

Have fun thinking all of this through for yourself – and please do look at those symmetries in particular. 🙂

Post scriptum: You should do some thinking on whether or not these m = ±1, ±2,…, ±l orbitals are really different. As I mentioned above, a phase difference is just what it is: a re-set of the t = 0 point. Nothing more, nothing less. So… Well… As far as I am concerned, that’s not a real difference, is it? 🙂 As with other stuff, I’ll let you think about this for yourself.

Re-visiting electron orbitals (II)

I’ve talked about electron orbitals in a couple of posts already – including a fairly recent one, which is why I put the (II) after the title. However, I just wanted to tie up some loose ends here – and do some more thinking about the concept of a definite energy state. What is it really? We know the wavefunction for a definite energy state can always be written as:

ψ(x, t) = e^(−i·(E/ℏ)·t)·ψ(x)

Well… In fact, we should probably formally prove that but… Well… Let us just explore this formula in a more intuitive way – for the time being, that is – using those electron orbitals we’ve derived.

First, let me note that ψ(x, t) and ψ(x) are very different functions and, therefore, the choice of the same symbol for both (the Greek psi) is – in my humble opinion – not very fortunate, but then… Well… It is the choice of physicists – as copied in textbooks all over – and so we’ll just have to live with it. Of course, we can appreciate why they choose to use the same symbol – ψ(x) is like a time-independent wavefunction now, so that’s nice – but… Well… You should note that it is not so obvious to write some function as the product of two other functions. To be complete, I’ll be a bit more explicit here: if some function in two variables – say F(x, y) – can be written as the product of two functions in one variable – say f(x) and g(y), so we can write F as F(x, y) = f(x)·g(y) – then we say F is a separable function. For a full overview of what that means, click on this link. And note mathematicians do choose different symbols for the functions F, f and g. It would probably be interesting to explore what the conditions for separability actually imply in terms of the properties of… Well… The wavefunction and its argument, i.e. the space and time variables. But… Well… That’s stuff for another post. 🙂

Secondly, note that the momentum variable (p) – i.e. the p in our elementary wavefunction a·e^(i·(p·x − E·t)/ℏ) – has sort of vanished: ψ(x) is a function of the position only. Now, you may think it should be somewhere there – that, perhaps, we can write something like ψ(x) = ψ[x, p(x)]. But… No. The momentum variable has effectively vanished. Look at Feynman’s solutions for the electron orbitals of a hydrogen atom:

Grand Equation

The Yl,m(θ, φ) and Fn,l(ρ) functions here are functions of the (polar) coordinates ρ, θ, φ only. So that’s the position only (these coordinates are polar or spherical coordinates, so ρ is the radial distance, θ is the polar angle, and φ is the azimuthal angle). There’s no idea whatsoever of any momentum in one or the other spatial direction here. I find that rather remarkable. Let’s see how it all works with a simple example.

The functions below are the Yl,m(θ, φ) for l = 1. Note the symmetry: if we swap θ and φ for −θ and −φ respectively, we get the other function: 2^(−1/2)·sin(−θ)·e^(i·(−φ)) = −2^(−1/2)·sinθ·e^(−i·φ).


To get the probabilities, we need to take the absolute square of the whole thing, including the e^(−i·(E/ℏ)·t) factor, but we know |e^(i·δ)|² = 1 for any value of δ. Why? Because the absolute square of any complex number is the product of the number with its complex conjugate, so |e^(i·δ)|² = e^(i·δ)·e^(−i·δ) = e^(i·0) = 1. So we only have to look at the absolute square of the Yl,m(θ, φ) and Fn,l(ρ) functions here. The Fn,l(ρ) function is a real-valued function, so its absolute square is just what it is: some real number (I gave you the formula for the ak coefficients in my post on it, and you shouldn’t worry about them: they’re real too). In contrast, the Yl,m(θ, φ) functions are complex-valued – most of them are, at least. Unsurprisingly, we find the probabilities are also symmetric:
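The |e^(i·δ)|² = 1 step is easy to check numerically – a trivial sketch:

```python
import cmath

# Numerical check of |e^(i·delta)|^2 = e^(i·delta)·e^(-i·delta) = 1,
# which is why the time-dependent phase factor drops out of the probabilities.
for delta in (0.0, 1.0, 2.5, -4.0):
    z = cmath.exp(1j * delta)
    assert abs(abs(z) ** 2 - 1) < 1e-12
    assert abs(z * z.conjugate() - 1) < 1e-12
print("phase factors have unit modulus")
```
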

P = |−2^(−1/2)·sinθ·e^(−i·φ)|² = (−2^(−1/2)·sinθ·e^(−i·φ))·(−2^(−1/2)·sinθ·e^(i·φ))

= (2^(−1/2)·sinθ·e^(−i·φ))·(2^(−1/2)·sinθ·e^(i·φ)) = |2^(−1/2)·sinθ·e^(i·φ)|² = (1/2)·sin²θ

Of course, for m = 0, the probability is just cos²θ. The graphs below are the polar graphs for the cos²θ and (1/2)·sin²θ functions respectively.

combination

These polar graphs are not so easy to interpret, so let me say a few words about them. The points that are plotted combine (a) some radial distance from the center – which I wrote as P because this distance is, effectively, a probability – with (b) the polar angle θ (so that’s one of the three coordinates). To be precise, the plot gives us, for a given ρ, all of the (θ, P) combinations. It works as follows. To calculate the probability for some ρ and θ (note that φ can be any angle), we must take the absolute square of that ψn,l,m = Yl,m(θ, φ)·Fn,l(ρ) product. Hence, we must calculate |Yl,m(θ, φ)·Fn,l(ρ)|² = |Fn,l(ρ)|²·cos²θ for m = 0, and (1/2)·|Fn,l(ρ)|²·sin²θ for m = ±1. Hence, the value of ρ determines the value of Fn,l(ρ), and that Fn,l(ρ) value then determines the shape of the polar graph. The three graphs below – P = cos²θ, P = (1/2)·cos²θ and P = (1/4)·cos²θ – illustrate the idea.

polar comparative

Note that we’re measuring θ from the z-axis here, as we should. So that gives us the right orientation of this volume, as opposed to the other polar graphs above, which measured θ from the x-axis. So… Well… We’re getting there, aren’t we? 🙂
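As a quick sanity check on those polar graphs, here is a sketch of the m = 0 angular factor by itself. The radial factor |Fn,l(ρ)|² just scales the whole curve, so I leave it out:

```python
import math

# The m = 0 angular probability factor: P proportional to cos^2(theta).
# It falls from its maximum on the z-axis (theta = 0) to zero in the
# xy-plane (theta = 90 degrees), which is exactly the lobe shape plotted.
for deg in (0, 30, 60, 90):
    theta = math.radians(deg)
    print(f"theta = {deg:2d} deg -> cos^2 = {math.cos(theta) ** 2:.3f}")
```
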

Now you’ll have two or three – or even more – obvious questions. The first one is: where is the third lobe? That’s a good question. Most illustrations will represent the p-orbitals as follows:

p orbitals

Three lobes. Well… Frankly, I am not quite sure here, but the equations speak for themselves: the probabilities only depend on ρ and θ. Hence, the azimuthal angle φ can be anything. So you just need to rotate those P = (1/2)·sin²θ and P = cos²θ curves about the z-axis. In case you wonder how to do that, the illustration below may inspire you.

spherical

The second obvious question is about the size of those lobes. That 1/2 factor must surely matter, right? Well… We still have that Fn,l(ρ) factor, of course, but you’re right: that factor does not depend on the value for m: it’s the same for m = 0 or ±1. So… Well… Those representations above – with the three lobes, all of the same volume – may not be accurate. I found an interesting site – Atom in a Box – with an app that visualizes the atomic orbitals in a fun and exciting way. Unfortunately, it’s for Mac and iPhone only – but this YouTube video shows how it works. I encourage you to explore it. In fact, I need to explore it – but what I’ve seen on that YouTube video (I don’t have a Mac nor an iPhone) suggests the three-lobe illustrations may effectively be wrong: there’s some asymmetry here – which we’d expect, because those p-orbitals are actually supposed to be asymmetric! In fact, the most accurate pictures may well be the ones below. I took them from Wikimedia Commons. The author explains the use of the color codes as follows: “The depicted rigid body is where the probability density exceeds a certain value. The color shows the complex phase of the wavefunction, where blue means real positive, red means imaginary positive, yellow means real negative and green means imaginary negative.” I must assume he refers to the sign of a and b when writing a complex number as a + i·b.

The third obvious question is related to the one above: we should get some cloud, right? Not some rigid body or some surface. Well… I think you can answer that question yourself now, based on what the author of the illustration above wrote: if we change the cut-off value for the probability, then we’ll get a different shape. So you can play with that and, yes, it’s some cloud, and that’s what the mentioned app visualizes. 🙂

The fourth question is the most obvious of all. It’s the question I started this post with: what are those definite energy states? We have uncertainty, right? So how does that play out? Now that is a question I’ll try to tackle in my next post. Stay tuned! 🙂

Post scriptum: Let me add a few remarks here so as to – hopefully – contribute to an even better interpretation of what’s going on here. As mentioned, the key to understanding is, obviously, the following basic functional form:

ψ(r, t) = e^(−i·(E/ℏ)·t)·ψ(r)

Wikipedia refers to the e^(−i·(E/ℏ)·t) factor as a time-dependent phase factor which, as you can see, we can separate out because we are looking at a definite energy state here. Note the minus sign in the exponent – which reminds us of the minus sign in the exponent of the elementary wavefunction, which we wrote as:

a·e^(i·θ) = a·e^(−i·[(E/ℏ)·t − (p/ℏ)·x]) = a·e^(i·[(p/ℏ)·x − (E/ℏ)·t]) = a·e^(−i·(E/ℏ)·t)·e^(i·(p/ℏ)·x)

We know this elementary wavefunction is problematic in terms of interpretation because its absolute square gives us some constant probability P(x, t) = |a·e^(−i·[(E/ℏ)·t − (p/ℏ)·x])|² = a². In other words, at any point in time, our electron is equally likely to be anywhere in space. That is not consistent with the idea of our electron being somewhere at some point in time.

The other question is: what reference frame do we use to measure E and p? Indeed, the value of E and p = (px, py, pz) depends on our reference frame: from the electron’s own point of view, it has no momentum whatsoever: p = 0. Fortunately, we do have a point of reference here: the nucleus of our hydrogen atom. And our own position, of course, because you should note, indeed, that both the subject and the object of the observation are necessary to define the Cartesian x = (x, y, z) – or, more relevant in this context – the polar r = (ρ, θ, φ) coordinates.

This, then, defines some finite or infinite box in space in which the (linear) momentum (p) of our electron vanishes, and then we just need to solve Schrödinger’s diffusion equation to find the solutions for ψ(r). These solutions are more conveniently written in terms of the radial distance ρ, the polar angle θ, and the azimuthal angle φ:

Grand Equation

The functions below are the Yl,m(θ, φ) functions for l = 1.


The interesting thing about these Yl,m(θ, φ) functions is the e^(i·φ) and/or e^(−i·φ) factor. Indeed, note the following:

  1. Because the sinθ and cosθ factors are real-valued, they only define some envelope for the ψ(r) function.
  2. In contrast, the e^(i·φ) and/or e^(−i·φ) factor defines some phase shift.

Let’s have a look at the physicality of the situation, which is depicted below.


The nucleus of our hydrogen atom is at the center. The polar angle is measured from the z-axis, and we know we only have an amplitude there for m = 0, so let’s look at what that cosθ factor does. If θ = 0°, the amplitude is just what it is, but when θ > 0°, then |cosθ| < 1 and, therefore, the probability P = |Fn,l(ρ)|²·cos²θ will diminish. Hence, for the same radial distance (ρ), we are less likely to find the electron at some angle θ > 0° than on the z-axis itself. Now that makes sense, obviously. You can work out the argument for m = ±1 yourself, I hope. [The axis of symmetry will be different, obviously!]

angle 2

In contrast, the e^(i·φ) and/or e^(−i·φ) factor works very differently. These just give us a phase shift, as illustrated below. A re-set of our zero point for measuring time, so to speak, and the e^(i·φ) and/or e^(−i·φ) factor effectively disappears when we’re calculating probabilities, which is consistent with the fact that this angle clearly doesn’t influence the magnitude of the amplitude fluctuations.

phase shift

So… Well… That’s it, really. I hope you enjoyed this! 🙂

Some more on symmetries…

In our previous post, we talked a lot about symmetries in space – in a rather playful way. Let’s try to take it further here by doing some more thinking on symmetries in spacetime. This post will pick up some older stuff – from my posts on states and the related quantum math in November 2015, for example – but that shouldn’t trouble you too much. On the contrary, I actually hope to tie up some loose ends here.

Let’s first review some obvious ideas. Think about the direction of time. On a time axis, time goes from left to right. It will usually be measured from some zero point – like when we started our experiment or something 🙂 – to some +t point, but we may also think of some point in time before our zero point, so the minus (−t) points – the left side of the axis – make sense as well. So the direction of time is clear and intuitive. Now, what does it mean to reverse the direction of time? We need to distinguish two things here: the convention, and… Well… Reality. If we would suddenly decide to reverse the direction in which we measure time, then that’s just another convention. We don’t change reality: trees and kids would still grow the way they always did. 🙂 We would just have to change the numbers on our clocks or, alternatively, the direction of rotation of the hand(s) of our clock, as shown below. [I only showed the hour hand because… Well… I don’t want to complicate things by introducing two time units. But adding the minute hand doesn’t make any difference.]

Now, imagine you’re the dictator who decided to change our time-measuring convention. How would you go about it? Would you change the numbers on the clock or the direction of rotation? Personally, I’d be in favor of changing the direction of rotation. Why? Well… First, we wouldn’t have to change expressions such as: “If you are looking north right now, then west is in the 9 o’clock direction, so go there.” 🙂 More importantly, it would align our clocks with the way we’re measuring angles. On the other hand, it would not align our clocks with the way the argument (θ) of our elementary wavefunction ψ = a·e^(−i·θ) = a·e^(−i·(E·t − p·x)/ħ) is measured, because that’s… Well… Clockwise.

So… What are the implications here? We would need to change t for −t in our wavefunction as well, right? Yep. Good point. So that’s another convention that would change: we should write our elementary wavefunction now as ψ = a·e^(i·(E·t − p·x)/ħ). So we would have to re-define θ as θ = −E·t + p·x = p·x − E·t. So… Well… Done!

So… Well… What’s next? Nothing. Note that we’re not changing reality here. We’re just adapting our formulas to a new dictatorial convention according to which we should count time from positive to negative – like 2, 1, 0, −1, −2 etcetera, as shown below. Fortunately, we can fix all of our laws and formulas in physics by swapping t for −t. So that’s great. No sweat.
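Just to convince yourself the sign swap is harmless, here’s a little numerical check – natural units (ħ = 1) and arbitrary illustrative values for E, p and x: flipping the sign of the argument just conjugates the wavefunction, and the probabilities |ψ|² don’t change at all.

```python
import numpy as np

hbar = 1.0                 # natural units (illustrative)
a, E, p, x = 1.0, 2.0, 1.0, 0.5   # arbitrary illustrative values

t = np.linspace(0, 10, 101)
psi_old = a * np.exp(-1j * (E * t - p * x) / hbar)   # original convention
psi_new = a * np.exp( 1j * (E * t - p * x) / hbar)   # after the sign swap

# The two conventions are each other's complex conjugate,
# so the probabilities |ψ|² are identical everywhere.
assert np.allclose(psi_new, np.conj(psi_old))
assert np.allclose(np.abs(psi_old)**2, np.abs(psi_new)**2)
```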

Is that all? Yes. We don’t need to do anything else. We’ll still measure the argument of our wavefunction as an angle, so that’s… Well… After changing our convention, it’s now clockwise. 🙂 Whatever you want to call it: it’s still the same direction. Our dictator can’t change physical reality. 🙂

Hmm… But so we are obviously interested in changing physical reality. I mean… Anyone can become a dictator, right? In contrast, we – enlightened scientists – want to really change the world, don’t we? 🙂 So what’s a time reversal in reality? Well… I don’t know… You tell me. 🙂 We may imagine some movie being played backwards, or trees and kids shrinking instead of growing, or some bird flying backwards – and I am not talking the hummingbird here. 🙂

Hey! The latter illustration – that bird flying backwards – is probably the better one: if we reverse the direction of time – in reality, that is – then we should also reverse all directions in space. But… Well… What does that mean, really? We need to think in terms of force fields here. A stone that’d been falling must now go back up. Two opposite charges that were going towards each other should now move away from each other. But… My God! Such a world cannot exist, can it?

No. It cannot. And we don’t need to invoke the second law of thermodynamics for that. 🙂 None of what happens in a movie that’s played backwards makes sense: a heavy stone does not suddenly fly up and decelerate upwards. So it is not like the anti-matter world we described in our previous post. No. We can effectively imagine some world in which all charges have been replaced by their opposite: we’d have positive electrons (positrons) around negatively charged nuclei consisting of antiprotons and antineutrons and, somehow, negative masses. But Coulomb’s law would still tell us two opposite charges – q1 and −q2, for example – don’t repel but attract each other, with a force that’s proportional to the product of their charges, i.e. q1·(−q2) = −q1·q2. Likewise, Newton’s law of gravitation would still tell us that two masses m1 and m2 – negative or positive – will attract each other with a force that’s proportional to the product of their masses, i.e. m1·m2 = (−m1)·(−m2). If you’d make a movie in the antimatter world, it would look just like any other movie. It would definitely not look like a movie being played backwards.

In fact, the latter formula – m1·m2 = (−m1)·(−m2) – tells us why: we’re not changing anything by putting a minus sign in front of all of our variables, which are time (t), position (x), mass (m) and charge (q). [Did I forget one? I don’t think so.] Hence, the famous CPT Theorem – which tells us that a world in which (1) time is reversed, (2) all charges have been conjugated (i.e. all particles have been replaced by their antiparticles), and (3) all spatial coordinates now have the opposite sign, is entirely possible (because it would obey the same Laws of Nature that we, in our world, have discovered over the past few hundred years) – is actually nothing but a tautology. Now, I mean that literally: a tautology is a statement that is true by necessity or by virtue of its logical form. Well… That’s the case here: if we flip the signs of all of our variables, we basically just agreed to count or measure everything from positive to negative. That’s it. Full stop. Such exotic convention is… Well… Exotic, but it cannot change the real world. Full stop.
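The tautology is easy to check numerically too. A two-line sketch (the charge and mass values are just illustrative):

```python
# Flip the sign of every variable: the products of any two of them are
# unchanged, so force laws like F ∝ q1·q2 or F ∝ m1·m2 read exactly the same.
q1, q2 = 1.6e-19, -1.6e-19   # illustrative charges (coulomb)
m1, m2 = 9.1e-31, 1.7e-27    # illustrative masses (kg)

assert (-q1) * (-q2) == q1 * q2
assert (-m1) * (-m2) == m1 * m2
```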

Of course, this leaves the more intriguing questions entirely open. Partial symmetries. Like time reversal only. 🙂 Or charge conjugation only. 🙂 So let’s think about that.

We know that the world that we see in a mirror must be made of anti-matter but, apart from that particularity, that world makes sense: if we drop a stone in front of the mirror, the stone in the mirror will drop down too. Two like charges will be seen as repelling each other in the mirror too, and concepts such as kinetic or potential energy look just the same. So time just seems to tick away in both worlds – no time reversal here! – and… Well… We’ve got two CP-symmetrical worlds here, don’t we? We only flipped the sign of the coordinate frame and of the charges. Both are possible, right? And what’s possible must exist, right? Well… Maybe. That’s the next step. Let’s first see if both are possible. 🙂

Now, when you’ve read my previous post, you’ll note that I did not flip the z-coordinate when reflecting my world in the mirror. That’s true. But… Well… That’s entirely beside the point. We could flip the z-axis too and then we’d have a full parity inversion. [Or parity transformation – sounds more serious, doesn’t it? But it’s only a simple inversion, really.] It really doesn’t matter. The point is: axial vectors have the opposite sign in the mirror world, and so it’s not only about whether or not an antimatter world is possible (it should be, right?): it’s about whether or not the sign reversal of all of those axial vectors makes sense in each and every situation. The illustration below, for example, shows how a left-handed neutrino should be a right-handed antineutrino in the mirror world. I hope you understand the left- versus right-handed thing. Think, for example, of how a left-circularly polarized wavefunction would look in the mirror. Just apply the customary right-hand rule to determine the direction of the angular momentum vector. You’ll agree it will be right-circularly polarized in the mirror, right? That’s why we need the charge conjugation: think of the magnetic moment of a circulating charge! So… Well… I can’t dwell on this too much but – if Maxwell’s equations are to hold – then that world in the mirror must be made of antimatter.

Now, we know that some processes – in our world – are not entirely CP-symmetrical. I wrote about this at length in previous posts, so I won’t dwell on these experiments here. The point is: these experiments – which are not easy to understand – lead physicists, philosophers, bloggers and what have you to solemnly state that the world in the mirror cannot really exist. And… Well… They’re right. However, I think their observations are beside the point. Literally.

So… Well… I would just like to make a very fundamental philosophical remark about all those discussions. My point is quite simple:

We should realize that the mirror world and our world are effectively separated by the mirror. So we should not be looking at stuff in the mirror from our perspective, because that perspective is… Well… Outside of the mirror. A different world. 🙂 In my humble opinion, the valid point of reference would be the observer in the mirror, like the photographer in the image below. Now note the following: if the real photographer, on this side of the mirror, would have a left-circularly polarized beam in front of him, then the imaginary photographer, on the other side of the mirror, would see the mirror image of this left-circularly polarized beam as a left-circularly polarized beam too. 🙂 I know that sounds complicated but re-read it a couple of times and – I hope – you’ll see the point. If you don’t… Well… Let me try to rephrase it: the point is that the observer in the mirror would be seeing our world – just the same laws and what have you, all makes sense! – but he would see our world in his world, so he’d see it in the mirror world. 🙂


Capito? If you would actually be living in the mirror world, then all the things you would see in the mirror world would make perfect sense. But you would be living in the mirror world. You would not look at it from outside, i.e. from the other side of the mirror. In short, I actually think the mirror world does exist – but in the mirror only. 🙂 […] I am, obviously, joking here. Let me be explicit: our world is our world, and I think those CP violations in Nature are telling us that it’s the only real world. The other worlds exist in our mind only – or in some mirror. 🙂

Post scriptum: I know the Die Hard philosophers among you will now have an immediate rapid-backfire question. [Hey – I just invented a new word, didn’t I? A rapid-backfire question. Neat.] How would the photographer in the mirror look at our world? The answer to that question is simple: symmetry! He (or she) would think it’s a mirror world only. His world and our world would be separated by the same mirror. So… What are the implications here?

Well… That mirror is only a piece of glass with a coating. We made it. Or… Well… Some man-made company made it. 🙂 So… Well… If you think that observer in the mirror – I am talking about that image of the photographer in that picture above now – would actually exist, then… Well… Then you need to be aware of the consequences: the corollary of his existence is that you do not exist. 🙂 And… Well… No. I won’t say more. If you’re reading stuff like this, then you’re smart enough to figure it out for yourself. We live in one world. Quantum mechanics tells us the perspective on that world matters very much – amplitudes are different in different reference frames – but… Well… Quantum mechanics – or physics in general – does not give us many degrees of freedom. None, really. It basically tells us the world we live in is the only world that’s possible, really. But… Then… Well… That’s just because physics… Well… When everything is said and done, it’s just mankind’s drive to ensure our perception of the Universe lines up with… Well… What we perceive it to be. 😞 or 🙂 Whatever your appreciation of it. Those Great Minds did an incredible job. 🙂

Symmetries and transformations

In my previous post, I promised to do something on symmetries. Something simple but then… Well… You know how it goes: one question always triggers another one. 🙂

Look at the situation in the illustration on the left below. We suppose we have something real going on there: something is moving from left to right (so that’s in the 3 o’clock direction), and then something else is going around clockwise (so that’s not the direction in which we measure angles (which also include the argument θ of our wavefunction), because that’s always counter-clockwise, as I note at the bottom of the illustration). To be precise, we should note that the angular momentum here is all about the y-axis, so the angular momentum vector L points in the (positive) y-direction. We get that direction from the familiar right-hand rule, which is illustrated in the top right corner.

Now, suppose someone else is looking at this from the other side – or just think of yourself going around a full 180° to look at the same thing from the back side. You’ll agree you’ll see the same thing going from right to left (so that’s in the 9 o’clock direction now – or, if our clock is transparent, the 3 o’clock direction of our reversed clock). Likewise, the thing that’s turning around will now go counter-clockwise.

Note that both observers – so that’s me and that other person (or myself after my walk around this whole thing) – use a regular coordinate system, which implies the following:

  1. We’ve got regular 90° angles between our coordinate axes.
  2. Our x-axis goes from negative to positive from left to right, and our y-axis does the same going away from us.
  3. We also both define our z-axis using, once again, the ubiquitous right-hand rule, so our z-axis points upwards.

So we have two observers looking at the same reality – some linear as well as some angular momentum – but from opposite sides. And so we’ve got a reversal of both the linear as well as the angular momentum. Not in reality, of course, because we’re looking at the same thing. But we measure it differently. Indeed, if we use the subscripts 1 and 2 to denote the measurements in the two coordinate systems, we find that p2 = −p1. Likewise, we also find that L2 = −L1.

Now, when you see these two equations, you will probably not worry about that p2 = −p1 equation – although you should, because it’s actually only valid for this rather particular orientation of the linear momentum (I’ll come back to that in a moment). It’s the L2 = −L1 equation which should surprise you most. Why? Because you’ve always been told there is a big difference between (1) real vectors (aka polar vectors), like the momentum p, or the velocity v, or the force F, and (2) pseudo-vectors (aka axial vectors), like the angular momentum L. You may also remember how to distinguish between the two: if you change the direction of the axes of your reference frame, polar vectors will change sign too, as opposed to axial vectors: axial vectors do not swap sign if we swap the coordinate signs.

So… Well… How does that work here? In fact, what we should ask ourselves is: why does that not work here? Well… It’s simple, really. We’re not changing the direction of the axes here. Or… Well… Let me be more precise: we’re only swapping the sign of the x- and y-axis. We did not flip the z-axis. So we turned things around, but we didn’t turn them upside down. It makes a huge difference. Note, for example, that if all of the linear momentum would have been in the z-direction only (so our p vector would have been pointing in the z-direction, and in the z-direction only), it would not swap sign. The illustration below shows what really happens with the coordinates of some vector when we’re doing a rotation. It’s, effectively, only the x- and y-coordinates that flip sign.
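Here’s a small numerical sketch of that rotation (the vectors are just illustrative): a 180° rotation about the z-axis flips the x- and y-components of a polar vector, and the axial vector L = r × p rotates in exactly the same way, because a rotation is a proper transformation.

```python
import numpy as np

# Rotation by 180° about the z-axis: x and y flip sign, z is preserved.
Rz = np.array([[-1.0,  0.0, 0.0],
               [ 0.0, -1.0, 0.0],
               [ 0.0,  0.0, 1.0]])

p = np.array([1.0, 0.0, 0.0])        # linear momentum along the x-axis
r = np.array([0.0, 0.0, 1.0])        # some position vector
L = np.cross(r, p)                   # angular momentum L = r × p = (0, 1, 0)

# Both the polar vector p and the axial vector L rotate the same way:
p_rot = Rz @ p                       # x-component flips sign
L_rot = np.cross(Rz @ r, Rz @ p)     # recompute L in the rotated frame
assert np.allclose(L_rot, Rz @ L)    # under a proper rotation, L transforms like any vector
```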

It’s easy to see that this rotation about the z-axis preserves our deep sense of ‘up’ versus ‘down’, but that it swaps ‘left’ for ‘right’, and vice versa. Note that this is not a reflection. We are not looking at some mirror world here. The difference between a reflection (a mirror world) and a rotation (the real world seen from another angle) is illustrated below. It’s quite confusing but, unlike what you might think, a reflection does not swap left for right. It does turn things inside out, but that’s what a rotation does as well: near becomes far, and far becomes near.

Before we move on, let me say a few things about the mirror world and, more in particular, about the obvious question: could it possibly exist? Well… What do you think? Your first reaction might well be: “Of course! What nonsense question! We just walk around whatever it is that we’re seeing – or, what amounts to the same, we just turn it around – and there it is: that’s the mirror world, right? So of course it exists!” Well… No. That’s not the mirror world. That’s just the real world seen from the opposite direction, and that world… Well… That’s just the real world. 🙂 The mirror world is, literally, the world in the mirror – like the photographer in the illustration below. We don’t swap left for right here: some object going from left to right in the real world is still going from left to right in the mirror world! Of course, you may now involve the photographer in the picture above and observe – note that you’re now an observer of the observer of the mirror 🙂 – that, if he would move his left arm in the real world, the photographer in the mirror world would be moving his right arm. But… Well… No. You’re saying that because you’re now imagining that you’re the photographer in the mirror world yourself, who’s looking at the real world from inside, so to speak. So you’ve rotated the perspective in your mind and you’re saying it’s his right arm because you imagine yourself to be the photographer in the mirror. We usually do that because… Well… Because we look in a mirror every day, right? So we’re used to seeing ourselves that way and we always think it’s us we’re seeing. 🙂 However, the illustration above is correct: the mirror world only swaps near for far, and far for near, so it only swaps the sign of the y-axis.
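We can check that ‘the mirror only swaps the sign of the y-axis’ claim numerically too. A sketch (illustrative vectors): under the reflection y → −y, a polar vector just flips its y-component, but an axial vector like L = r × p flips its x- and z-components instead – which is exactly the sign-reversal of axial vectors we talked about.

```python
import numpy as np

# Mirror in the xz-plane: only the y-coordinate flips ('near' ↔ 'far').
M = np.diag([1.0, -1.0, 1.0])

r = np.array([1.0, 2.0, 3.0])    # illustrative position
p = np.array([0.5, -1.0, 2.0])   # illustrative momentum
L = np.cross(r, p)               # angular momentum L = r × p

# A polar vector just gets its y-component flipped...
p_mirror = M @ p
# ...but the axial vector built from the reflected polar vectors comes out
# as det(M)·M·L = −M·L: its x- and z-components flip, while y is preserved.
L_mirror = np.cross(M @ r, M @ p)
assert np.allclose(L_mirror, np.linalg.det(M) * (M @ L))
assert np.allclose(L_mirror, np.array([-L[0], L[1], -L[2]]))
```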

So the question is relevant: could the mirror world actually exist? What we’re really asking here is the following: can we swap the sign of one coordinate axis only in all of our physical laws and equations and… Well… Do we then still get the same laws and equations? Do we get the same Universe – because that’s what those laws and equations describe? If so, our mirror world can exist. If not, then not.

Now, I’ve done a post on that, in which I explain that the mirror world can only exist if it would consist of anti-matter. So if our real world and the mirror world would actually meet, they would annihilate each other. 🙂 But that post is quite technical. Here I want to keep it very simple: I basically only want to show what the rotation operation implies for the wavefunction. There is no doubt whatsoever that the rotated world exists. In fact, the rotated world is just our world. We walk around some object, or we turn it around, but so we’re still watching the same object. So we’re not thinking about the mirror world here. We just want to know how things look when adopting some other perspective.

So, back to the starting point: we just have two observers here, who look at the same thing but from opposite directions. Mathematically, this corresponds to a rotation of our reference frame about the z-axis of 180°. Let me spell out – somewhat more precisely – what happens to the linear and angular momentum here:

  1. The linear momentum in the xy-plane swaps direction.
  2. The angular momentum about the y-axis, as well as about the x-axis, swaps direction too.

Note that the illustration only shows angular momentum about the y-axis, but you can easily verify the statement about the angular momentum about the x-axis. In fact, the angular momentum about any line in the xy-plane will swap direction.

Of course, the x-, y-, z-axes in the other reference frame are different than mine, and so I should give them a subscript, right? Or, at the very least, write something like x’, y’, z’, so we have a primed reference frame here, right? Well… Maybe. Maybe not. Think about it. 🙂 A coordinate system is just a mathematical thing… Only the momentum is real… Linear or angular… Equally real… And then Nature doesn’t care about our position, does it? So… Well… No subscript needed, right? Or… Well… What do you think? 🙂

It’s just funny, isn’t it? It looks like we can’t really separate reality and perception here. Indeed, note how our p2 = −p1 and L2 = −L1 equations already mix reality with how we perceive it. It’s the same thing in reality but the coordinates of p1 and L1 are positive, while the coordinates of p2 and L2 are negative. To be precise, these coordinates will look like this:

  1. p1 = (p, 0, 0) and L1 = (0, L, 0)
  2. p2 = (−p, 0, 0) and L2 = (0, −L, 0)

So are they two different things or are they not? 🙂 Think about it. I’ll move on in the meanwhile. 🙂

Now, you probably know a thing or two about parity symmetry, or P-symmetry: if we flip the sign of all coordinates, then we’ll still find the same physical laws, like F = m·a and what have you. [It works for all physical laws, including quantum-mechanical laws – except those involving the weak force (read: radioactive decay processes).] But so here we are talking rotational symmetry. That’s not the same as P-symmetry. If we flip the signs of all coordinates, we’re also swapping ‘up’ for ‘down’, so we’re not only turning around, but we’re also getting upside down. The difference between rotational symmetry and P-symmetry is shown below.

As mentioned, we’ve talked about P-symmetry at length in other posts, and you can easily google a lot more on that. The question we want to examine here – just as a fun exercise – is the following:

How does that rotational symmetry work for a wavefunction?

The very first illustration in this post gave you the functional form of the elementary wavefunction: a·e^(−i·θ) = a·e^(−i·(E·t − p·x)/ħ). We should actually use a bold type x = (x, y, z) in this formula but we’ll assume we’re talking something similar to that p vector: something moving in the x-direction only – or in the xy-plane only. The z-component doesn’t change. Now, you know that we can reduce all actual wavefunctions to some linear combination of such elementary wavefunctions by doing a Fourier decomposition, so it’s fine to look at the elementary wavefunction only – so we don’t make it too complicated here. Now think of the following.

The energy E in the a·e^(−i·(E·t − p·x)/ħ) function is a scalar, so it doesn’t have any direction and we’ll measure it the same from both sides – as kinetic or potential energy or, more likely, by adding both. But… Well… Writing e^(−i·(E·t − p·x)/ħ) or e^(−i·(E·t + p·x)/ħ) is not the same, right? No, it’s not. However, think of it as follows: we won’t be changing the direction of time, right? So it’s OK to not change the sign of E. In fact, we can re-write the two expressions as follows:

  1. e^(−i·(E·t − p·x)/ħ) = e^(−i·(E/ħ)·t)·e^(i·(p/ħ)·x)
  2. e^(−i·(E·t + p·x)/ħ) = e^(−i·(E/ħ)·t)·e^(−i·(p/ħ)·x)

The first wavefunction describes some particle going in the positive x-direction, while the second wavefunction describes some particle going in the negative x-direction, so… Well… That’s exactly what we see in those two reference frames, so there is no issue whatsoever. 🙂 It’s just… Well… I just wanted to show the wavefunction does look different too when looking at something from another angle.
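You can verify the two decompositions numerically. A sketch in natural units (ħ = 1), with illustrative values for E, p, t and x:

```python
import numpy as np

hbar = 1.0                 # natural units (illustrative values throughout)
E, p = 2.0, 1.0
t, x = 0.7, 0.3

# 1. e^(−i(E·t − p·x)/ħ) = e^(−i(E/ħ)·t) · e^(+i(p/ħ)·x)
lhs1 = np.exp(-1j * (E * t - p * x) / hbar)
rhs1 = np.exp(-1j * (E / hbar) * t) * np.exp(1j * (p / hbar) * x)

# 2. e^(−i(E·t + p·x)/ħ) = e^(−i(E/ħ)·t) · e^(−i(p/ħ)·x)
lhs2 = np.exp(-1j * (E * t + p * x) / hbar)
rhs2 = np.exp(-1j * (E / hbar) * t) * np.exp(-1j * (p / hbar) * x)

assert np.isclose(lhs1, rhs1) and np.isclose(lhs2, rhs2)
```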

So why am I writing about this? Why am I being fussy? Well… It’s just to show you that those transformations are actually quite natural – just as natural as it is to see some particle go in one direction in one reference frame and see it go in the other direction in the other frame. 🙂 It also illustrates another point that I’ve been trying to make: the wavefunction is something real. It’s not just a figment of our imagination. The real and imaginary part of our wavefunction have a precise geometrical meaning – and I explained what that might be in my more speculative posts, which I’ve brought together in the Deep Blue page of this blog. But… Well… I can’t dwell on that here because… Well… You should read that page. 🙂

The point to note is the following: we do have different wavefunctions in different reference frames, but these wavefunctions describe the same physical reality, and they also do respect the symmetries we’d expect them to respect, except… Well… The laws describing the weak force don’t, but I wrote about that a very long time ago, and it was not in the context of trying to explain the relatively simple basic laws of quantum mechanics. 🙂 If you’re interested, you should check out my post(s) on that or, else, just google a bit. It’s really exciting stuff, but not something that will help you much to understand the basics, which is what we’re trying to do here. 🙂

The second point to note is that those transformations of the wavefunction – or of quantum-mechanical states – which we go through when rotating our reference frame, for example, are really quite natural. There’s nothing special about them. We had such transformations in classical mechanics too! But… Well… Yes, I admit they do look complicated. But then that’s why you’re so fascinated and why you’re reading this blog, isn’t it? 🙂

Post scriptum: It’s probably useful to be somewhat more precise on all of this. You’ll remember we visualized the wavefunction in some of our posts using the animation below. It uses a left-handed coordinate system, which is rather unusual but then it may have been made with software which uses a left-handed coordinate system (like RenderMan, for example). Now the rotating arrow at the center moves with time and gives us the polarization of our wave. Applying our customary right-hand rule, you can see this beam is left-circularly polarized. [I know… It’s quite confusing, but just go through the motions here and be consistent.] Now, you know that e^(i·(p/ħ)·x) and e^(−i·(p/ħ)·x) are each other’s complex conjugate:

  1. e^(i·k·x) = cos(k·x) + i·sin(k·x)
  2. e^(−i·k·x) = cos(−k·x) + i·sin(−k·x) = cos(k·x) − i·sin(k·x)

Their real part – the cosine function – is the same, but the imaginary part – the sine function – has the opposite sign. So, assuming the direction of propagation is, effectively, the x-direction, then what’s the polarization of the mirror image? Well… The wave will now go from right to left, and its polarization… Hmm… Well… What?
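A quick numerical check of those conjugate identities (k and x are, again, just illustrative values):

```python
import numpy as np

k, x = 1.5, 0.8
z_plus  = np.exp( 1j * k * x)
z_minus = np.exp(-1j * k * x)

# Euler's formula, and the fact that the two exponentials are conjugates:
assert np.isclose(z_plus,  np.cos(k * x) + 1j * np.sin(k * x))
assert np.isclose(z_minus, np.cos(k * x) - 1j * np.sin(k * x))
assert np.isclose(z_minus, np.conj(z_plus))
```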

Well… If you can’t figure it out, then just forget about those signs and just imagine you’re effectively looking at the same thing from the backside. In fact, if you have a laptop, you can push the screen down and go around your computer. 🙂 There’s no shame in that. In fact, I did that just to make sure I am not talking nonsense here. 🙂 If you look at this beam from the backside, you’ll effectively see it go from right to left – instead of the left-to-right direction you see on this side. And as for its polarization… Well… The angular momentum vector swaps direction too but the beam is still left-circularly polarized. So… Well… That’s consistent with what we wrote above. 🙂 The real world is real, and axial vectors are as real as polar vectors. This real beam will only appear to be right-circularly polarized in a mirror. Now, as mentioned above, that mirror world is not our world. If it would exist – in some other Universe – then it would be made up of anti-matter. 🙂

So… Well… Might it actually exist? Is there some other world made of anti-matter out there? I don’t know. We need to think about that reversal of ‘near’ and ‘far’ too: as mentioned, a mirror turns things inside out, so to speak. So what’s the implication of that? When we walk around something – or do a rotation – then the reversal between ‘near’ and ‘far’ is something physical: we go near to what was far, and we go away from what was near. But so how would we get into our mirror world, so to speak? We may say that this anti-matter world in the mirror is entirely possible, but then how would we get there? We’d need to turn ourselves, literally, inside out – like sort of shrink to the zero point and then come back out of it to do that parity inversion along our line of sight. So… Well… I don’t see that happen, which is why I am a fan of the One World hypothesis. 🙂 So I think the mirror world is just what it is: the mirror world. Nothing real. But… Then… Well… What do you think? 🙂

Quantum-mechanical magnitudes

As I was writing about those rotations in my previous post (on electron orbitals), I suddenly felt I should do some more thinking on (1) symmetries and (2) the concept of quantum-mechanical magnitudes of vectors. I’ll write about the first topic (symmetries) in some other post. Let’s first tackle the latter concept. Oh… And for those I frightened with my last post… Well… This should really be an easy read. More of a short philosophical reflection about quantum mechanics. Not a technical thing. Something intuitive. At least I hope it will come out that way. 🙂

First, you should note that the fundamental idea that quantities like energy, or momentum, may be quantized is a very natural one. In fact, it’s what the early Greek philosophers thought about Nature. Of course, while the idea of quantization comes naturally to us (I think it’s easier to understand than, say, the idea of infinity), it is, perhaps, not so easy to deal with it mathematically. Indeed, most mathematical ideas – like functions and derivatives – are based on what I’ll loosely refer to as continuum theory. So… Yes, quantization does yield some surprising results, like that formula for the magnitude of some vector J:

|J| = +√(J·J) = +√(Jx² + Jy² + Jz²) (classical) versus |J| = +√(j·(j+1)) (quantum-mechanical, in units of ħ)

The J·J in the classical formula above is, of course, the equally classical vector dot product, and the formula itself is nothing but Pythagoras’ Theorem in three dimensions. Easy. I just put a + sign in front of the square roots so as to remind you we actually always have two square roots and that we should take the positive one. 🙂

I will now show you how we get that quantum-mechanical formula. The logic behind it is fairly straightforward but, at the same time… Well… You’ll see. 🙂 We know that a quantum-mechanical variable – like the spin of an electron, or the angular momentum of an atom – is not continuous but discrete: it will have some value m = j, j−1, j−2, …, −(j−2), −(j−1), −j. Our j here is the maximum value of the magnitude of the component of our vector (J) in the direction of measurement, which – as you know – is usually written as Jz. Why? Because we will usually choose our coordinate system such that our z-axis is aligned accordingly. 🙂 Those values j, j−1, j−2, …, −(j−2), −(j−1), −j are separated by one unit. That unit would be Planck’s quantum of action ħ ≈ 1.0545718×10⁻³⁴ N·m·s – by the way, isn’t it amazing we can actually measure such tiny stuff in some experiment? 🙂 – if J would happen to be the angular momentum, but the approach here is more general – action can express itself in various ways 🙂 – so the unit doesn’t matter: it’s just the unit, so that’s just one. 🙂 It’s easy to see that this separation implies j must be some integer or half-integer. [Of course, now you might think the values of a series like 2.4, 1.4, 0.4, −0.6, −1.6 are also separated by one unit, but… Well… That would violate the most basic symmetry requirement so… Well… No. Our j has to be an integer or a half-integer. Please also note that the number of possible values for m is equal to 2j+1, as we’ll use that in a moment.]

OK. You’re familiar with this by now and so I should not repeat the obvious. To make things somewhat more real, let’s assume j = 3/2, so m = 3/2, 1/2, −1/2 or −3/2. Now, we don’t know anything about the system and, therefore, these four values are all equally likely. Now, you may not agree with this assumption but… Well… You’ll have to agree that, at this point, you can’t come up with anything else that would make sense, right? It’s just like a classical situation: J might point in any direction, so we have to give all angles an equal probability. [In fact, I’ll show you – in a minute or so – that you actually have a point here: we should think some more about this assumption – but so that’s for later. I am asking you to just go along with this story as for now.]

So the expected value of Jz is E[Jz] = (1/4)·(3/2)+(1/4)·(1/2)+(1/4)·(−1/2)+(1/4)·(−3/2) = 0. Nothing new here. We just multiply probabilities with all of the possible values to get an expected value. So we get zero here because our values are distributed symmetrically around the zero point. No surprise. Now, to calculate a magnitude, we don’t need Jz but Jz². In case you wonder, that’s what this squaring business is all about: we’re abstracting away from the direction and so we’re going to square both positive as well as negative values to then add it all up and take a square root. Now, the expected value of Jz² is equal to E[Jz²] = (1/4)·(3/2)²+(1/4)·(1/2)²+(1/4)·(−1/2)²+(1/4)·(−3/2)² = 5/4 = 1.25. Some positive value.

You may note that it's a bit larger than the average of the absolute value of our variable, which is equal to (|3/2|+|1/2|+|−1/2|+|−3/2|)/4 = 1, but that's just because the squaring favors larger values. 🙂 Also note that, of course, we'd also get some positive value if Jz were a continuous variable over the [−3/2, +3/2] interval, but I'll let you think about what positive value we'd get for Jz² assuming Jz is uniformly distributed over the [−3/2, +3/2] interval, because that calculation is actually not as straightforward as it may seem at first. In any case, these considerations are not very relevant to our story here, so let's move on.

Of course, our z-direction was random, and so we get the same thing for whatever direction. More in particular, we'll also get it for the x- and y-directions: E[Jx²] = E[Jy²] = E[Jz²] = 5/4. Now, at this point it's probably good to give you a more generalized formula for these quantities. I think you'll easily agree to the following one:

E[Jx²] = E[Jy²] = E[Jz²] = [j² + (j−1)² + … + (−j)²]/(2j+1)

So now we can apply our classical J·J = Jx² + Jy² + Jz² formula to these quantities by calculating the expected value of J·J, which is equal to:

E[J·J] = E[Jx²] + E[Jy²] + E[Jz²] = 3·E[Jx²] = 3·E[Jy²] = 3·E[Jz²]

You should note we're making use of the E[X + Y] = E[X] + E[Y] property here: the expected value of the sum of two variables is equal to the sum of the expected values of the variables, and you should also note this is true even if the individual variables would happen to be correlated – which might or might not be the case. [What do you think is the case here?]

For j = 3/2, it's easy to see we get E[J·J] = 3·E[Jx²] = 3·5/4 = (3/2)·(3/2+1) = j·(j+1). We should now generalize this formula for other values of j, which is not so easy… Hmm… It obviously involves some formula for a series, and I am not good at that… So… Well… I just checked if it was true for j = 1/2 and j = 1 (please check that at least for yourself too!) and then I just believe the authorities on this for all other values of j. 🙂
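If you don't want to just believe the authorities either, the series check can be automated. A small exact-arithmetic sketch of mine that verifies E[J·J] = j·(j+1) for integer and half-integer j alike:

```python
from fractions import Fraction

def e_jj(j):
    """E[J·J] = 3·E[Jz^2], with the 2j+1 values m = j, j-1, ..., -j
    taken to be equally likely (the assumption used in the post)."""
    two_j = int(2 * j)
    ms = [Fraction(two_j, 2) - k for k in range(two_j + 1)]
    return 3 * sum(m * m for m in ms) / len(ms)

# The series result matches j·(j+1) for every value we try.
for j in (Fraction(1, 2), 1, Fraction(3, 2), 2, Fraction(5, 2), 3):
    assert e_jj(j) == Fraction(j) * (Fraction(j) + 1)
print("E[J·J] = j·(j+1) checked for j = 1/2, 1, 3/2, 2, 5/2, 3")
```

The underlying series identity is the classic sum of squares: j² + (j−1)² + … + (−j)² = j(j+1)(2j+1)/3, and dividing by the 2j+1 states and multiplying by 3 gives j(j+1).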

Now, in a classical situation, we know that the J·J product will be the same for whatever direction J would happen to have, and so its expected value will be equal to its constant value J·J. So we can write: E[J·J] = J·J. So… Well… That's why we write what we wrote above – the magnitude formulas: J·J = J² = j·(j+1), and, hence, J = √[j·(j+1)].

Makes sense, no? E[J·J] = E[Jx²+Jy²+Jz²] = E[Jx²]+E[Jy²]+E[Jz²] = j·(j+1) = J·J = J², so J = +√[j·(j+1)], right?

Hold your horses, man! Think! What are we doing here, really? We didn't calculate all that much above. We only found that E[Jx²]+E[Jy²]+E[Jz²] = E[Jx²+Jy²+Jz²] = j·(j+1). So what? Well… That's not a proof that the J vector actually exists.


Yes. That J vector might just be some theoretical concept. When everything is said and done, all we've been doing – or at least, we imagined we did – is those repeated measurements of Jx, Jy and Jz here – or whatever subscript you'd want to use, like Jθ,φ, for example (the example is not random, of course) – and so, of course, it's only natural that we assume these things are the magnitude of the component (in the direction of measurement) of some real vector that is out there, but then… Well… Who knows? Think of what we wrote about the angular momentum in our previous post on electron orbitals. We imagine – or do like to think – that there's some angular momentum vector J out there, which we think of as being "cocked" at some angle, so its projection onto the z-axis gives us those discrete values for m which, for j = 2, for example, are equal to 0, 1 or 2 (and −1 and −2, of course) – like in the illustration below. 🙂

[Illustration: the "cocked" angles for j = 2]

But… Well… Note those weird angles: we get something close to 24.1° and then another value close to 54.7°. No symmetry here. 🙁 The table below gives some more values for larger j. They're easy to calculate – it's, once again, just Pythagoras' Theorem – but… Well… No symmetries here. Just weird values. [I am not saying the formula for these angles is not straightforward. That formula is easy enough: θ = sin⁻¹(m/√[j(j+1)]). It's just… Well… No symmetry. You'll see why that matters in a moment.]

[Table: the angles for integer values of j]

I skipped the half-integer values for j in the table above, so you might think they might make it easier to come up with some kind of sensible explanation for the angles. Well… No. They don't. For example, for j = 1/2 and m = ±1/2, the angles are ±35.2644° – more or less, that is. 🙂 As you can see, these angles do not nicely cut up our circle in equal pieces, which triggers the obvious question: are these angles really equally likely?
Equal angles do not correspond to equal distances on the z-axis (in case you don't appreciate the point, look at the illustration below).

[Illustration: equal angles versus equal distances on the z-axis]

So… Well… Let me summarize the issue at hand as follows: the idea of the angle of the J vector being randomly distributed is not compatible with the idea of those Jz values being equally spaced and equally likely. The latter idea – equally spaced and equally likely Jz values – relates to different possible states of the system being equally likely, so… Well… It's just a different idea. 🙁

Now there is another thing we should mention here. The maximum value of the z-component of our J vector is always smaller than that quantum-mechanical magnitude √[j(j+1)], and quite significantly so for small j, as shown in the table below. It is only for larger values of j that the ratio of the two starts to converge to 1. For example, for j = 25, it is about 1.02, so that's only 2% off.

[Table: convergence of the √[j(j+1)]/j ratio to 1]

That's why physicists tell us that, in quantum mechanics, the angular momentum is never "completely along the z-direction." It is obvious that this actually challenges the idea of a very precise direction in quantum mechanics, but then that shouldn't surprise us, should it? After all, isn't this what the Uncertainty Principle is all about?
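The convergence of that ratio is easy to reproduce. A quick sketch of mine, comparing the quantum-mechanical magnitude √[j(j+1)] with the maximum z-component j:

```python
import math

def ratio(j):
    """sqrt(j·(j+1)) divided by j: the magnitude over the maximum z-component."""
    return math.sqrt(j * (j + 1)) / j

for j in (0.5, 1, 2, 5, 25):
    print(j, round(ratio(j), 4))
# j = 0.5 gives about 1.7321; j = 25 gives about 1.0198 -> only 2% off
```

So for j = 1/2 the magnitude overshoots the maximum z-component by more than 70%, while for j = 25 the gap has shrunk to the 2% mentioned above.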

Different states, rather than different directions… And then Uncertainty because… Well… Because of discrete variables that won't split in the middle. Hmm… 🙁

Perhaps. Perhaps I should just accept all of this and go along with it… But… Well… I am really not satisfied here, despite Feynman's assurance that that's OK: "Understanding of these matters comes very slowly, if at all. Of course, one does get better able to know what is going to happen in a quantum-mechanical situation—if that is what understanding means—but one never gets a comfortable feeling that these quantum-mechanical rules are 'natural'."

I do want to get that comfortable feeling – on some sunny day, at least. 🙂 And so I'll keep playing with this, until… Well… Until I give up. 🙂 In the meanwhile, if you feel you've got some better or some more intuitive explanation for all of this, please do let me know. I'd be very grateful to you. 🙂

Post scriptum: Of course, we would all want to believe that J somehow exists because… Well… We want to explain those states somehow, right? I, for one, am not happy with being told to just accept things and shut up. So let me add some remarks here. First, you may think that the narrative above should distinguish between polar and axial vectors. You'll remember polar vectors are the real vectors, like a radius vector r, or a force F, or velocity or (linear) momentum. Axial vectors (also known as pseudo-vectors) are vectors like the angular momentum vector: we sort of construct them from… Well… From real vectors. The angular momentum L, for example, is the vector cross product of the radius vector r and the linear momentum vector p: we write L = r×p. In that sense, they're a figment of our imagination. But then… What's real and unreal? The magnitude of L, for example, does correspond to something real, doesn't it? And its direction does give us the direction of circulation, right? You're right. Hence, I think polar and axial vectors are both real – in whatever sense you'd want to define real. Their reality is just different, and that's reflected in their mathematical behavior: if you change the direction of the axes of your reference frame, polar vectors will change sign too, as opposed to axial vectors: they don't swap sign. They do something else, which I'll explain in my next post, where I'll be talking about symmetries.

But let us, for the sake of argument, assume whatever I wrote about those angles applies to axial vectors only. Let's be even more specific, and say it applies to the angular momentum vector only. If that's the case, we may want to think of a classical equivalent for the mentioned lack of a precise direction: free nutation. It's a complicated thing – even more complicated than the phenomenon of precession, which we should be familiar with by now. Look at the illustration below (which I took from an article of a physics professor from Saint Petersburg), which shows both precession as well as nutation. Think of the movement of a spinning top when you release it: its axis will, at first, nutate around the axis of precession, before it settles into a more steady precession.

[Illustration: precession and nutation]

The nutation is caused by the gravitational force field, and the nutation movement usually dies out quickly because of dampening forces (read: friction). Now, we don't think of gravitational fields when analyzing angular momentum in quantum mechanics, and we shouldn't. But there is something else we may want to think of. There is also a phenomenon which is referred to as free nutation, i.e. a nutation that is not caused by an external force field. The Earth, for example, nutates slowly because of the gravitational pull from the Sun and the other planets – so that's not a free nutation – but, in addition to this, there's an even smaller wobble – which is an example of free nutation – because the Earth is not exactly spherical. In fact, the Great Mathematician, Leonhard Euler, had already predicted this, back in 1765, but it took another 125 years or so before an astronomer, Seth Chandler, could finally experimentally confirm and measure it. So they named this wobble the Chandler wobble (Euler already has too many things named after him). 🙂

Now I don't have much backup here – none, actually 🙂 – but why wouldn't we imagine our electron would also sort of nutate freely because of… Well… Some symmetric asymmetry – something like the slightly elliptical shape of our Earth. 🙂 We may then effectively imagine the angular momentum vector as continually changing direction between a minimum and a maximum angle – something like what's shown below, perhaps, between 0 and 40 degrees. Think of it as a rotation within a rotation, or an oscillation within an oscillation – or a standing wave within a standing wave. 🙂

[Illustration: a wobbling angular momentum vector]

I am not sure if this approach would solve the problem of our angles and distances – the issue of whether we should think in equally likely angles or equally likely distances along the z-axis, really – but… Well… I'll let you play with this. Please do send me some feedback if you think you've found something. 🙂

Whatever your solution is, it is likely to involve the equipartition theorem and harmonics, right? Perhaps we can, indeed, imagine standing waves within standing waves, and then standing waves within those standing waves. How far can we go? 🙂

Post scriptum 2: When re-reading this post, I was thinking I should probably do something with the following idea. If we've got a sphere, and we're thinking of some vector pointing to some point on the surface of that sphere, then we're doing something which is referred to as point picking on the surface of a sphere, and the probability distributions – as a function of the polar and azimuthal angles θ and φ – are quite particular. See the article on the Wolfram site on this, for example. I am not sure if it's going to lead to some easy explanation of the 'angle problem' we've laid out here but… Well… It's surely an element in the explanation. The key idea here is shown in the illustration below: if the direction of our momentum in three-dimensional space is really random, there may still be more of a chance of an orientation towards the equator, rather than towards the pole. So… Well… We need to study the math of this. 🙂 But that's for later.

[Illustration: point-picking density on a sphere]
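The equator-versus-pole effect is easy to demonstrate with a simulation. A sketch of my own (not from the Wolfram article) that samples uniformly distributed directions – the standard sphere point-picking recipe takes cos θ uniform on [−1, 1] – and compares a polar cap with an equatorial band of the same angular width:

```python
import math
import random

random.seed(42)

# For a uniformly random direction, cos(theta) is uniform on [-1, 1]
# (the azimuthal angle phi plays no role here, so we don't sample it).
n = 100_000
thetas = [math.acos(random.uniform(-1.0, 1.0)) for _ in range(n)]

# Compare a 10-degree cap around the pole with a 10-degree band
# around the equator: the equatorial band collects far more points.
pole = sum(1 for t in thetas if t < math.radians(10))
equator = sum(1 for t in thetas if abs(t - math.pi / 2) < math.radians(10))
print(pole, equator)  # roughly 760 vs 17400: the equator wins by far
```

That is just the sin θ factor in the surface element at work: equal steps in θ sweep out far more area near the equator than near the pole.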

Re-visiting electron orbitals

One of the pieces I barely gave a glance when reading Feynman's Lectures over the past few years was the derivation of the non-spherical electron orbitals for the hydrogen atom. It just looked like a boring piece of math – and I thought the derivation of the s-orbitals – the spherically symmetrical ones – was interesting enough already. To some extent, it is – but there is so much more to it. When I read it now, the derivation of those p-, d-, f-, etc. orbitals brings all of the weirdness of quantum mechanics together and, while doing so, also provides for a deeper understanding of all of the ideas and concepts we're trying to get used to. In addition, Feynman's treatment of the matter is actually much shorter than what you'll find in other textbooks, because… Well… As he puts it, he takes a shortcut. So let's try to follow the bright mind of our Master as he walks us through it.

You'll remember – if not, check it out again – that we found the spherically symmetric solutions for Schrödinger's equation for our hydrogen atom. Just to make sure, Schrödinger's equation is a differential equation – a condition we impose on the wavefunction for our electron – and so we need to find the functional form for the wavefunctions that describe the electron orbitals. [Quantum math is so confusing that it's often good to regularly think of what it is that we're actually trying to do. :-)] In fact, that functional form gives us a whole bunch of solutions – or wavefunctions – which are defined by three quantum numbers: n, l, and m. The parameter n corresponds to an energy level (En), l is the orbital (quantum) number, and m is the z-component of the angular momentum. But that doesn't say much. Let's go step by step.

First, we derived those spherically symmetric solutions – which are referred to as s-states – assuming this was a state with zero (orbital) angular momentum, which we write as l = 0. [As you know, Feynman does not incorporate the spin of the electron in his analysis, which is, therefore, approximate only.] Now what exactly is a state with zero angular momentum? When everything is said and done, we are effectively trying to describe some electron orbital here, right? So that's an amplitude for the electron to be somewhere, but then we also know it always moves. So, when everything is said and done, the electron is some circulating negative charge, right? So there is always some angular momentum and, therefore, some magnetic moment, right?

Well… If you google this question on Physics Stack Exchange, you'll get a lot of mumbo jumbo telling you that you shouldn't think of the electron as actually orbiting around. But… Then… Well… A lot of that mumbo jumbo is contradictory. For example, one of the academics writing there does note that, while we shouldn't think of an electron as some particle, the orbital is still a distribution which gives you the probability of actually finding the electron at some point (x, y, z). So… Well… It is some kind of circulating charge – as a point, as a cloud or as whatever. The only reasonable answer – in my humble opinion – is that l = 0 probably means there is no net circulating charge, so the movement in this or that direction must balance the movement in the other. One may note, in this regard, that the phenomenon of electron capture in nuclear reactions suggests electrons do travel through the nucleus for at least part of the time, which is entirely coherent with the wavefunctions for s-states – shown below – which tell us that the most probable (x, y, z) position for the electron is right at the center – so that's where the nucleus is. There is also a non-zero probability for the electron to be at the center for the other orbitals (p, d, etcetera).

[Graph: the s-state wavefunctions for n = 1, 2 and 3]

In fact, now that I've shown this graph, I should quickly explain it. The three graphs are the spherically symmetric wavefunctions for the first three energy levels. For the first energy level – which is conventionally written as n = 1, not as n = 0 – the amplitude approaches zero rather quickly. For n = 2 and n = 3, there are zero-crossings: the curve passes the r-axis. Feynman calls these zero-crossings radial nodes. To be precise, the number of zero-crossings for these s-states is n − 1, so there's none for n = 1, one for n = 2, two for n = 3, etcetera.

Now, why is the amplitude – apparently – some real-valued function here? That's because we're actually not looking at ψ(r, t) here but at the ψ(r) function which appears in the following break-up of the actual wavefunction ψ(r, t):

ψ(r, t) = e^(−i·(E/ħ)·t)·ψ(r)

So ψ(r) is more of an envelope function for the actual wavefunction, which varies both in space as well as in time. It's good to remember that: I would have used another symbol, because ψ(r, t) and ψ(r) are two different beasts, really – but then physicists want you to think, right? And Mr. Feynman would surely want you to do that, so why not inject some confusing notation from time to time? 🙂 So for n = 3, for example, ψ(r) goes from positive to negative and then to positive again, and these areas are separated by radial nodes. Feynman put it on the blackboard like this:

[Blackboard sketch: the radial nodes for n = 1, 2 and 3]

I am just inserting it to compare this concept of radial nodes with the concept of a nodal plane, which we'll encounter when discussing p-states in a moment, but I can already tell you what they are now: those p-states are symmetrical in one direction only, as shown below, and so we have a nodal plane instead of a radial node. But so I am getting ahead of myself here… 🙂

[Illustration: the nodal planes of the p-states]

Before going back to where I was, I just need to add one more thing. 🙂 Of course, you know that we'll take the square of the absolute value of our amplitude to calculate a probability (or the absolute square – as we abbreviate it), so you may wonder why the sign is relevant at all. Well… I am not quite sure either, but there's this concept of orbital parity which you may have heard of. The orbital parity tells us what will happen to the sign if we calculate the value of ψ for −r rather than for r. If ψ(−r) = ψ(r), then we have an even function – or even orbital parity. Likewise, if ψ(−r) = −ψ(r), then the function is odd – and so we'll have an odd orbital parity. The orbital parity is always equal to (−1)^l = ±1. The exponent l is that angular quantum number, and +1, or + tout court, means even, and −1 or just − means odd. The angular quantum number for those p-states is l = 1, so that works with the illustration of the nodal plane. 🙂 As said, it's not hugely important, but I might as well mention it in passing – especially because we'll re-visit the topic of symmetries a few posts from now. 🙂
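Note, by the way, that the time factor in that break-up is a pure phase: its modulus is 1, so it drops out when we take the absolute square. That's why the probability densities of these orbitals are stationary, and why we can draw ψ(r) as a real-valued envelope. A tiny numerical illustration (the values for E/ħ and for ψ(r) are made up, just for the sake of the example):

```python
import cmath

E_over_hbar = 2.0     # hypothetical E/hbar value, in 1/s
psi_r = 0.3 - 0.4j    # hypothetical envelope value psi(r) at some fixed r

# psi(r, t) = e^(-i·(E/hbar)·t)·psi(r): the modulus is the same at all t.
for t in (0.0, 1.0, 2.5):
    psi_rt = cmath.exp(-1j * E_over_hbar * t) * psi_r
    print(t, round(abs(psi_rt), 12))  # always 0.5
```

The phase of ψ(r, t) keeps turning, but |ψ(r, t)| = |ψ(r)| at every t.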

OK. I said I would talk about states with some angular momentum (so l ≠ 0) and so it's about time I start doing that. As you know, our orbital angular momentum l is measured in units of ħ (just like the total angular momentum J, which we've discussed ad nauseam already). We also know that if we'd measure its component along any direction – any direction really, but physicists will usually make sure that the z-axis of their reference frame coincides with the direction of measurement, so we call it the z-axis 🙂 – then we will find that it can only have one of a discrete set of values m·ħ = l·ħ, (l−1)·ħ, …, −(l−1)·ħ, −l·ħ. Hence, l just takes the role of our good old quantum number j here, and m is just Jz. Likewise, I'd like to introduce l (in bold type) as the equivalent of J, so we can easily talk about the angular momentum vector. And now that we're here, why not write m in bold type too, and say that m is the z-component itself – i.e. the whole vector quantity, so that's the direction and the magnitude.

Now, we do need to note one crucial difference between j and l, or between J and l: our j could be an integer or a half-integer. In contrast, l must be some integer. Why? Well… If l can be zero, and the values of l must be separated by a full unit, then l must be 0, 1, 2, 3, etcetera. 🙂 If this simple answer doesn't satisfy you, I'll refer you to Feynman's, which is also short but more elegant than mine. 🙂 Now, you may or may not remember that the quantum-mechanical equivalent of the magnitude of a vector quantity such as l is to be calculated as √[l·(l+1)]·ħ, so if l = 1, that magnitude will be √2·ħ ≈ 1.4142·ħ, so that's – as expected – larger than the maximum value for m, which is +1. As you know, that leads us to think of that z-component m as a projection of l. Paraphrasing Feynman, the limited set of values for m implies that the angular momentum is always "cocked" at some angle. For l = 1, that angle is either +45° or, else, −45°, as shown below.

[Illustration: the cocked angles for l = 1]

What if l = 2? The magnitude of l is then equal to √[2·(2+1)]·ħ = √6·ħ ≈ 2.4495·ħ. How do we relate that to those "cocked" angles? The values of m now range from −2 to +2, with a unit distance in-between. The illustration below shows the angles. [I didn't mention ħ any more in that illustration because, by now, we should know it's our unit of measurement – always.]

[Illustration: the cocked angles for l = 2]

Note we've got a bigger circle here (the radius is about 2.45 here, as opposed to a bit more than 1.4 for l = 1). Also note that it's not a nice cake with perfectly equal pieces. From the graph, it's obvious that the formula for the angle is the following:

θ = sin⁻¹(m/√[l(l+1)])

It's simple but intriguing. Needless to say, the sin⁻¹ function is the inverse sine, also known as the arcsine. I've calculated the values for all m for l = 1, 2, 3, 4 and 5 below. The most interesting values are the angles for m = 1 and m = l. As the graphs underneath show, for m = 1, the values start approaching the zero angle for very large l, so there's not much difference any more between m = ±1 and m = 0 for large values of l. What about the m = l case? Well… Believe it or not, if l becomes really large, then these angles do approach 90°. If you don't remember how to calculate limits, then just calculate θ for some huge value for l and m. For l = m = 1,000,000, for example, you should find that θ = 89.9427…°. 🙂

[Table: the angles for l = 1 to 5]
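If you don't feel like doing the arithmetic by hand, here is a small sketch of that θ = sin⁻¹(m/√[l(l+1)]) formula, including the huge-l check:

```python
import math

def cocked_angle(l, m):
    """The angle theta = arcsin(m / sqrt(l·(l+1))), in degrees."""
    return math.degrees(math.asin(m / math.sqrt(l * (l + 1))))

print(round(cocked_angle(1, 1), 1))          # 45.0
print(round(cocked_angle(2, 1), 1))          # 24.1
print(round(cocked_angle(2, 2), 1))          # 54.7
print(round(cocked_angle(10**6, 10**6), 4))  # approaches 90 degrees: 89.9427
```

The m = l angle approaches but never reaches 90°, because m/√[l(l+1)] = 1/√(1+1/l) stays strictly below 1 for every finite l.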

[Graphs: the angles for m = 1 and m = l as a function of l]

Isn't this fascinating? I've actually never seen this in a textbook – so it might be an original contribution. 🙂 OK. I need to get back to the grind: Feynman's derivation of non-symmetrical electron orbitals. Look carefully at the illustration below. If m is really the projection of some angular momentum that's "cocked", either at a zero-degree angle or, alternatively, at ±45° (for the l = 1 situation we show here) – a projection on the z-axis, that is – then the value of m (+1, 0 or −1) does actually correspond to some idea of the orientation of the space in which our electron is circulating. For m = 0, that space – think of some torus or whatever other space in which our electron might circulate – would have some alignment with the z-axis. For m = ±1, there is no such alignment.

[Illustration: the m = 0 orbital]

The interpretation is tricky, however, and the illustration on the right-hand side above is surely too much of a simplification: an orbital is definitely not like a planetary orbit. It doesn't even look like a torus. In fact, the illustration in the bottom right corner, which shows the probability density, i.e. the space in which we are actually likely to find the electron, is a picture that is much more accurate – and it surely does not resemble a planetary orbit or some torus. However, despite that, the idea that, for m = 0, we'd have some alignment of the space in which our electron moves with the z-axis is not wrong. Feynman expresses it as follows:

"Suppose m is zero, then there can be some non-zero amplitude to find the electron on the z-axis at some distance r. We'll call this amplitude Fl(r)."

You'll say: so what? And you'll also say that the illustration in the bottom right corner suggests the electron is actually circulating around the z-axis, rather than through it. Well… No. That illustration does not show any circulation. It only shows a probability density. No suggestion of any actual movement or circulation. So the idea is valid: if m = 0, then the implication is that, somehow, the space of circulation of current around the direction of the angular momentum vector (J), as per the well-known right-hand rule, will include the z-axis. So the idea of that electron orbiting through the z-axis for m = 0 is essentially correct, and the corollary is… Well… I'll talk about that in a moment.

But… Well… So what? What's so special about that Fl(r) amplitude? What can we do with that? Well… If we would find a way to calculate Fl(r), then we know everything. Huh? Everything? Yes. The reasoning here is quite complicated, so please bear with me as we go through it.

The first thing you need to accept is rather weird. The thing we said about the non-zero amplitudes to find the electron somewhere on the z-axis for the m = 0 state – which, using Dirac's bra-ket notation, we'll write as |l, m = 0⟩ – has a very categorical corollary:

The amplitude to find an electron whose state m is not equal to zero on the z-axis (at some non-zero distance r) is zero. We can only find an electron on the z-axis if the z-component of its angular momentum (m) is zero.

Now, I know this is hard to swallow, especially when looking at those 45° angles for J in our illustrations, because these suggest the actual circulation of current may also include at least part of the z-axis. But… Well… No. Why not? Well… I have no good answer here except for the usual one which, I admit, is quite unsatisfactory: it's quantum mechanics, not classical mechanics. So we have to look at the m = +1 and m = −1 vectors, which are pointed along the z-axis itself and, hence, the circulation we'd associate with those momentum vectors (even if they're the z-component only) is around the z-axis. Not through or on it. I know it's a really poor argument, but it's consistent with our picture of the actual electron orbitals – that picture in terms of probability densities, which I copy below. For m = −1, we have the yz-plane as the nodal plane between the two lobes of our distribution, so no amplitude to find the electron on the z-axis (nor would we find it on the y-axis, as you can see). Likewise, for m = +1, we have the xz-plane as the nodal plane. Both nodal planes include the z-axis and, therefore, there's zero probability on that axis.

[Illustration: the p-orbital probability densities]
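This zero-on-the-axis behavior is visible directly in the angular dependence of the l = 1 states. A hand-coded sketch of mine (up to normalization constants, which don't matter for the argument): the m = ±1 amplitudes carry a sin θ factor, which vanishes on the z-axis (θ = 0), while the m = 0 amplitude carries cos θ, which doesn't.

```python
import cmath
import math

def y1(m, theta, phi):
    """l = 1 angular dependence, up to normalization:
    m = 0 goes as cos(theta); m = ±1 goes as sin(theta)·e^(±i·phi)."""
    if m == 0:
        return math.cos(theta)
    return math.sin(theta) * cmath.exp(1j * m * phi)

# On the z-axis (theta = 0), only the m = 0 state has a non-zero amplitude.
for m in (-1, 0, 1):
    print(m, abs(y1(m, 0.0, 0.0)))  # 0.0 for m = ±1, 1.0 for m = 0
```

So the bra-ket statement above is just the sin θ factor doing its work.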

In addition, you may also want to note the 45° angle we associate with m = ±1 does sort of demarcate the lobes of the distribution by defining a three-dimensional cone and… Well… I know these arguments are rather intuitive, and so you may refuse to accept them. In fact, to some extent, I refuse to accept them. 🙂 Indeed, let me say this loud and clear: I really want to understand this in a better way!

But… Then… Well… Such better understanding may never come. Feynman's warning, just before he starts explaining the Stern-Gerlach experiment and the quantization of angular momentum, rings very true here: "Understanding of these matters comes very slowly, if at all. Of course, one does get better able to know what is going to happen in a quantum-mechanical situation—if that is what understanding means—but one never gets a comfortable feeling that these quantum-mechanical rules are 'natural.' Of course they are, but they are not natural to our own experience at an ordinary level." So… Well… What can I say?

It is now time to pull the rabbit out of the hat. To understand what we're going to do next, you need to remember that our amplitudes – or wavefunctions – are always expressed with regard to a specific frame of reference, i.e. some specific choice of an x-, y- and z-axis. If we change the reference frame – say, to some new set of x'-, y'- and z'-axes – then we need to re-write our amplitudes (or wavefunctions) in terms of the new reference frame. In order to do so, one should use a set of transformation rules. I've written several posts on that – including a very basic one, which you may want to re-read (just click the link here).

Look at the illustration below. We want to calculate the amplitude to find the electron at some point in space. Our reference frame is the x, y, z frame and the polar coordinates (or spherical coordinates, I should say) of our point are the radial distance r, the polar angle θ (theta), and the azimuthal angle φ (phi). [The illustration below – which I copied from Feynman's exposé – uses a capital letter for phi, but I stick to the more usual or more modern convention here.]

[Illustration: the change of reference frame]

In case you wonder why we'd use polar coordinates rather than Cartesian coordinates… Well… I need to refer you to my other post on the topic of electron orbitals, i.e. the one in which I explain how we get the spherically symmetric solutions: if you have radial (central) fields, then it's easier to solve stuff using polar coordinates – although you wouldn't think so if you look at that monster equation that we're actually trying to solve here:

(1/r²)·∂/∂r(r²·∂ψ/∂r) + (1/(r²·sin θ))·∂/∂θ(sin θ·∂ψ/∂θ) + (1/(r²·sin²θ))·∂²ψ/∂φ² = −(2m/ħ²)·(E + e²/r)·ψ

It's really Schrödinger's equation for the situation at hand (i.e. a hydrogen atom, with a radial or central Coulomb field because of its positively charged nucleus), but re-written in terms of polar coordinates. For the detail, see the mentioned post. Here, you should just remember we got the spherically symmetric solutions assuming the derivatives of the wavefunction with respect to θ and φ – so that's the ∂ψ/∂θ and ∂ψ/∂φ in the equation above – were zero. So now we don't assume these partial derivatives to be zero: we're looking for states with an angular dependence, as Feynman puts it somewhat enigmatically. […] Yes. I know. This post is becoming very long, and so you are getting impatient. Look at the illustration with the (r, θ, φ) point, and let me quote Feynman on the line of reasoning now:

"Suppose we have the atom in some |l, m⟩ state, what is the amplitude to find the electron at the angles θ and φ and the distance r from the origin? Put a new z-axis, say z', at that angle (see the illustration above), and ask: what is the amplitude that the electron will be at the distance r along the new z'-axis? We know that it cannot be found along z' unless its z'-component of angular momentum, say m', is zero. When m' is zero, however, the amplitude to find the electron along z' is Fl(r). Therefore, the result is the product of two factors. The first is the amplitude that an atom in the state |l, m⟩ along the z-axis will be in the state |l, m' = 0⟩ with respect to the z'-axis. Multiply that amplitude by Fl(r) and you have the amplitude ψl,m(r) to find the electron at (r, θ, φ) with respect to the original axes."

So what is he telling us here? Well… He's going a bit fast here. 🙂 Worse, I think he may actually not have chosen the right words here, so let me try to rephrase it. We've introduced the Fl(r) function above: it was the amplitude, for m = 0, to find the electron on the z-axis at some distance r. But so here we're obviously in the x', y', z' frame and so Fl(r) is the amplitude for m' = 0: it's the amplitude to find the electron at some distance r along the z'-axis. Of course, for this amplitude to be non-zero, we must be in the |l, m' = 0⟩ state, but are we? Well… |l, m' = 0⟩ actually gives us the amplitude for that. So we're going to multiply two amplitudes here:

Fl(r)·|l, m' = 0⟩

So this amplitude is the product of two amplitudes as measured in the x', y', z' frame. Note it's symmetric: we may also write it as |l, m' = 0⟩·Fl(r). We now need to sort of translate that into an amplitude as measured in the x, y, z frame. To go from x, y, z to x', y', z', we first rotated around the z-axis by the angle φ, and then rotated around the new y'-axis by the angle θ. Now, the order of rotation matters: you can easily check that by taking a non-symmetrical object in your hand and doing those rotations in the two different sequences: check what happens to the orientation of your object. Hence, to go back we should first rotate about the y'-axis by the angle −θ, so our z'-axis folds into the old z-axis, and then rotate about the z-axis by the angle −φ.
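The claim that the order of rotation matters is easy to verify numerically, using the plain coordinate rotation matrices (so not the quantum-mechanical transformation matrices we'll need below – just ordinary 3×3 rotations):

```python
import math

def rz(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def ry(a):
    """Rotation about the y-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

theta, phi = math.radians(30), math.radians(60)
ab = matmul(ry(theta), rz(phi))   # rotate by phi about z first, then theta about y
ba = matmul(rz(phi), ry(theta))   # the same two rotations, in the opposite order
different = any(abs(ab[i][j] - ba[i][j]) > 1e-9
                for i in range(3) for j in range(3))
print(different)  # True: rotations about different axes do not commute
```

The two products differ for generic angles, which is exactly why we have to be careful about the sequence when undoing the φ-then-θ construction.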

Now, we will denote the transformation matrices that correspond to these rotations as Ry′(−θ) and Rz(−φ) respectively. These transformation matrices are complicated beasts. They are surely not the easy rotation matrices that you can use for the coordinates themselves. You can click this link to see what they look like for l = 1. For larger l, there are other formulas, which Feynman derives in another chapter of his Lectures on quantum mechanics. But let’s move on. Here’s the grand result:

The amplitude for our wavefunction ψl,m(r) – which denotes the amplitude for (1) the atom to be in the state that’s characterized by the quantum numbers l and m and – let’s not forget – (2) finding the electron at r – note the bold type: r = (x, y, z) – would be equal to:

ψl,m(r) = ⟨l, m|Rz(−φ) Ry′(−θ)|l, m′ = 0⟩·Fl(r)

Well… Hmm… Maybe. […] That’s not how Feynman writes it. He writes it as follows:

ψl,m(r) = ⟨l, 0|Ry(θ) Rz(φ)|l, m⟩·Fl(r)

I am not quite sure what I did wrong. Perhaps the two expressions are equivalent. Or perhaps – is it possible at all? – Feynman made a mistake? I’ll find out. [P.S.: I re-visited this point in the meanwhile: see the P.S. to this post. 🙂] The point to note is that we have some combined rotation matrix Ry(θ) Rz(φ). The elements of this matrix are algebraic functions of θ and φ, which we will write as Yl,m(θ, φ), so we write:

a·Yl,m(θ, φ) = ⟨l, 0|Ry(θ) Rz(φ)|l, m⟩

Or a·Yl,m(θ, φ) = ⟨l, m|Rz(−φ) Ry′(−θ)|l, m′ = 0⟩, if Feynman would have it wrong and my line of reasoning above would be correct – which is obviously not so likely. Hence, the ψl,m(r) function is now written as:

ψl,m(r) = a·Yl,m(θ, φ)·Fl(r)

The coefficient a is, as usual, a normalization coefficient so as to make sure the surface under the probability density function is 1. As mentioned above, we get these Yl,m(θ, φ) functions from combining those rotation matrices. For l = 1, and m = −1, 0, +1, they are shown in the first table below, and a more complete table follows it. [Tables: spherical harmonics.] So, yes, we’re done. Those equations above give us those wonderful shapes for the electron orbitals, as illustrated below (credit for the illustration goes to an interesting site of the UC Davis school). [Illustration: electron orbitals.] But… Hey! Wait a moment! We only have these Yl,m(θ, φ) functions here. What about Fl(r)?

You’re right. We’re not quite there yet, because we don’t have a functional form for Fl(r). Not yet, that is. Unfortunately, that derivation is another lengthy development – and that derivation actually is just tedious math only. Hence, I will refer you to Feynman for that. 🙂 Let me just insert one more thing before giving you The Grand Equation, and that’s an explanation of how we get those nice graphs. They are so-called polar graphs. There is a nice and easy article on them on the website of the University of Illinois, but I’ll summarize it for you. Polar graphs use a polar coordinate grid, as opposed to the Cartesian (or rectangular) coordinate grid that we’re used to. It’s shown below.

The origin is now referred to as the pole – like in North or South Pole indeed. 🙂 The straight lines from the pole (like the diagonals, for example, or the axes themselves, or any line in-between) measure the distance from the pole which, in this case, goes from 0 to 10, and we can connect the equidistant points by a series of circles – as shown in the illustration also. These lines from the pole are defined by some angle – which we’ll write as θ to make things easy 🙂 – which just goes from 0 to 2π = 0 and then round and round and round again. The rest is simple: you’re just going to graph a function, or an equation – just like you’d graph y = ax + b in the Cartesian plane – but it’s going to be a polar equation. Referring back to our p-orbitals, we’ll want to graph the cos²θ = ρ equation, for example, because that’s going to show us the shape of that probability density function for l = 1 and m = 0. So our graph is going to connect the (θ, ρ) points for which the angle (θ) and the distance from the pole (ρ) satisfy the cos²θ = ρ equation. There is a really nice widget on the WolframAlpha site that produces those graphs for you. I used it to produce the graph below, which shows the 1.1547·cos²θ = ρ graph (the 1.1547 coefficient is the normalization coefficient a).

Now, you’ll wonder why this is a curve, or a curved line. That widget even calculates its length: it’s about 6.374743 units long. So why don’t we have a surface or a volume here? We didn’t specify any value for ρ, did we? No, we didn’t. The widget calculates those values from the equation. So… Yes. It’s a valid question: where’s the distribution? We were talking about some electron cloud or something, right?
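You can reproduce the widget’s arc-length figure with a quick numerical integration. This is just a sketch: it discretizes the polar curve ρ = a·cos²θ and applies the standard polar arc-length formula L = ∫ √(ρ² + (dρ/dθ)²) dθ:

```python
import numpy as np

a = 1.1547  # the normalization coefficient from the post

theta = np.linspace(0.0, 2 * np.pi, 200001)
rho = a * np.cos(theta) ** 2       # the polar equation rho = a*cos^2(theta)
drho = -a * np.sin(2 * theta)      # d(rho)/d(theta), since 2*cos*sin = sin(2*theta)

# Polar arc length: integrate sqrt(rho^2 + (drho/dtheta)^2) over one full turn,
# using the trapezoid rule on the fine grid above.
integrand = np.sqrt(rho**2 + drho**2)
length = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))
print(round(length, 4))  # 6.3747 -- matching the widget's 6.374743
```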

Right. To get that cloud – those probability densities really – we need that Fl(r) function. Our cos²θ = ρ is, once again, just some kind of envelope function: it marks a space but doesn’t fill it, so to speak. 🙂 In fact, I should now give you the complete description, which has all of the possible states of the hydrogen atom – everything! No separate pieces anymore. Here it is. It also includes n. It’s The Grand Equation. [Equation: the full hydrogen wavefunction.] The ak coefficients in the formula for ρFn,l(ρ) are the solutions to the equation below, which I copied from Feynman’s text on it all. I’ll also refer you to the same text to see how you actually get solutions out of it, and what they then actually represent. 🙂 We’re done. Finally!

I hope you enjoyed this. Look at what we’ve achieved. We had this differential equation (a simple diffusion equation, really, albeit in the complex space), and then we have a central Coulomb field and the rather simple concept of quantized (i.e. non-continuous or discrete) angular momentum. Now see what magic comes out of it! We literally constructed the atomic structure out of it, and it’s all wonderfully elegant and beautiful.

Now I think that’s amazing, and if you’re reading this, then I am sure you’ll find it as amazing as I do.

Note: I did a better job in explaining the intricacies of actually representing those orbitals in a later post. I recommend you have a look at it by clicking the link here.

Post scriptum on the transformation matrices:

You must find the explanation for that ⟨l, 0|Ry(θ) Rz(φ)|l, m⟩·Fl(r) product highly unsatisfactory, and it is. 🙂 I just wanted to make you think – rather than just superficially read through it. First note that Fl(r)·|l, m′ = 0⟩ is not a product of two amplitudes: it is the product of an amplitude with a state. A state is a vector in a rather special vector space – a Hilbert space (just a nice word to throw around, isn’t it?). The point is: a state vector is written as some linear combination of base states. Something inside of me tells me we may look at the three p-states as base states, but I need to look into that.

Let’s first calculate the Ry(θ) Rz(φ) matrix to see if we get those formulas for the angular dependence of the amplitudes. It’s the product of the Ry(θ) and Rz(φ) matrices, which I reproduce below.

Note that this product is non-commutative because… Well… Matrix products generally are non-commutative. 🙂 So… Well… There they are: the second row gives us those functions, so I am wrong, obviously, and Dr. Feynman is right. Of course, he is. He is always right – especially because his Lectures have gone through so many revised editions that all errors must be out by now. 🙂

However, let me – just for fun – also calculate my Rz(−φ) Ry′(−θ) product. I can do so in two steps: first I calculate Rz(φ) Ry′(θ), and then I substitute the angles φ and θ for −φ and −θ, remembering that cos(−α) = cos(α) and sin(−α) = −sin(α). I might have made a mistake, but I got the result shown below. The functions look the same but… Well… No. The e^(iφ) and e^(−iφ) factors are in the wrong place (it’s just one minus sign – but it’s crucially different). And then these functions should not be in a column. That doesn’t make sense when you write it all out. So Feynman’s expression is, of course, fully correct. But so how do we interpret that ⟨l, 0|Ry(θ) Rz(φ)|l, m⟩ expression then? This amplitude probably answers the following question:

Given that our atom is in the |l, m⟩ state, what is the amplitude for it to be in the ⟨l, 0| state in the x′, y′, z′ frame?

That makes sense – because we did start out with the assumption that our atom was in the |l, m⟩ state, so… Yes. Think about it some more and you’ll see it all makes sense: we can – and should – multiply this amplitude with the Fl(r) amplitude.

OK. Now we’re really done with this. 🙂

Note: As for the ⟨ | and | ⟩ symbols to denote a state, note that there’s not much difference: both are state vectors, but a state vector that’s written as an end state – so that’s like ⟨Φ| – is a 1×3 vector (so that’s a row vector), while a vector written as |Φ⟩ is a 3×1 vector (so that’s a column vector). So that’s why ⟨l, 0|Ry(θ) Rz(φ)|l, m⟩ does give us some number. We’ve got a (1×3)·(3×3)·(3×1) matrix product here – but so it gives us what we want: a 1×1 amplitude. 🙂
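The dimension-counting in that note is easy to verify with a toy calculation. The vectors and matrix below are hypothetical stand-ins (an identity matrix instead of the real Ry(θ)Rz(φ) for l = 1); the point is only the shapes:

```python
import numpy as np

# <l, 0| as a 1x3 row vector, a 3x3 stand-in for the rotation matrix,
# and |l, m> as a 3x1 column vector -- all placeholder values.
bra = np.array([[0, 1, 0]], dtype=complex)       # <l, 0|
R = np.eye(3, dtype=complex)                     # stand-in for Ry(theta) Rz(phi)
ket = np.array([[1], [0], [0]], dtype=complex)   # |l, m = +1>

amplitude = bra @ R @ ket   # (1x3)·(3x3)·(3x1) -> a 1x1 matrix, i.e. one number
print(amplitude.shape)      # (1, 1)
```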

The state(s) of a photon

While hurrying to try to understand the things I wanted to understand most – like Schrödinger’s equation and, equally important, its solutions explaining the weird shapes of electron orbitals – I skipped some interesting bits and pieces. Worse, I skipped two or three of Feynman’s Lectures on quantum mechanics entirely. These include Chapter 17 – on symmetry and conservation laws – and Chapter 18 – on angular momentum. With the benefit of hindsight, that was not the right thing to do. If anything, doing all of the Lectures would, at the very least, ensure I would have more than an ephemeral grasp of it all. So… In this and the next post, I want to tidy up and go over everything I skipped so far. 🙂

We’ve written a lot on how quantum mechanics applies to bosons as well as fermions. For example, we pointed out – in very much detail – that the mathematical structure of the electromagnetic wave – light! 🙂 – is quite similar to that of the ubiquitous wavefunction. Equally fundamental – if not more – is the fact that light also arrives in lumps – little light-particles which we call photons. That’s the photoelectric effect, which Einstein explained in 1905 by… Well… By telling us that light consists of quanta – photons – whose energy must be high enough so as to be able to dislodge an electron. It’s what got him his Nobel Prize. [Einstein never got a Nobel Prize for his relativity theory, which is – arguably – at least as important. There’s a lot of controversy around that but, in any case, that’s history.]

So it shouldn’t surprise you that there’s an equivalent to the spin of an electron. With spin, we refer to the angular momentum of a quantum-mechanical system – an atom, a nucleus, an electron, whatever – which, as you know, can only be one of a set of discrete values when measured along some direction, which we usually refer to as the z-direction. More formally, we write that the z-component of the angular momentum J is equal to:

Jz = j·ℏ, (j−1)·ℏ, (j−2)·ℏ, …, −(j−2)·ℏ, −(j−1)·ℏ, −j·ℏ

The j in this expression is the so-called spin of the system. For an electron, it’s equal to 1/2, so Jz = ±ℏ/2, which we referred to as the “up” and “down” states respectively because of obvious reasons: one state points upwards – more or less, that is (we know the angular momentum will actually precess around the direction of the magnetic field) – while the other points downwards.
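Before moving on: the Jz ladder is easy to enumerate in code. A minimal sketch, working in units of ℏ and using exact fractions so half-integer spins come out exactly:

```python
from fractions import Fraction

def jz_values(j):
    """Allowed z-components of angular momentum, in units of hbar: j, j-1, ..., -j."""
    j = Fraction(j)
    count = int(2 * j) + 1          # number of allowed states: 2j + 1
    return [j - k for k in range(count)]

print(jz_values(Fraction(1, 2)))    # electron (j = 1/2): two states, +1/2 and -1/2
print(jz_values(1))                 # a spin-one system: three states, +1, 0, -1
```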

We also know that the magnetic energy of an electron in a (weak) magnetic field – which, as you know, we conveniently assume to be pointing in the z-direction, so Bz = B – will be equal to:

Umag = g·μz·B·j = ±2·μz·B·(1/2) = ±μz·B = ±B·(qe·ℏ)/(2m)

In short, the magnetic energy is proportional to the magnetic field, and the constant of proportionality is the so-called Bohr magneton qe·ℏ/2m. So far, so good. What’s the analog for a photon?

Well… Let’s first discuss the equivalent of a Stern-Gerlach apparatus for photons. That would be a polarizing material, like a piece of calcite, for example. Now, it is, unfortunately, much more difficult to explain how a polarizing material works than to explain how a Stern-Gerlach apparatus works. [If you thought the workings of that (hypothetical) Stern-Gerlach filter were difficult to understand, think again.] We actually have different types of polarizers – some complicated, some easy. We’ll take the easy ones: linear ones. In addition, the phenomenon of polarization itself is a bit intricate. The phenomenon is well described in Chapter 33 of Feynman’s first Volume of Lectures, out of which I copied the two illustrations below the next paragraph.

Of course, to make sure you think about whatever it is that you’re reading, Feynman now chooses the z-direction such that it coincides with the direction of propagation of the electromagnetic radiation. So it’s now the x- and y-directions that we’re looking at. Not the z-direction any more. As usual, we forget about the magnetic field vector B and so we think of the oscillating electric field vector E only. Why can we forget about B? Well… If we have E, we know B. Full stop. As you know, I think B is pretty essential in the analysis too but… Well… You’ll see all textbooks on physics quickly forget about B when describing light. I don’t want to do that, but… Well… I need to move on. [I’ll come back to the matter – sideways – at the end of this post. 🙂]

So we know the electric field vector E may oscillate in a plane (so that’s up and down and back again) but – interestingly enough – its direction may also rotate around the z-axis (again, remember the z-axis is the direction of propagation). Why? Well… Because E has an x- and a y-component (no z-component!), and these two components may oscillate in phase or out of phase, and so all of the combinations shown in the two illustrations below are possible. [Illustrations: linear polarization; elliptical polarization.] To make a long story short, light comes in two varieties: linearly polarized and elliptically polarized. Of course, elliptical may be circular – if you’re lucky! 🙂

Now, a (linear) polarizer has an optical axis, and only light whose E vector is oscillating along that axis will go through. […] OK. That’s not true: the component along the optical axis of some E pointing in some other direction will go through too! I’ll show how that works in a moment. But all the rest is absorbed, and the absorbed energy just heats up the polarizer (which, of course, then radiates heat back out).

In any case, if the optical axis happens to be our x-axis, then we know that the light that comes through will be x-polarized, so that corresponds to the rather peculiar Ex = 1 and Ey = 0 notation. [This notation refers to coefficients we’ll use later to resolve states into base states – but don’t worry about it now.] Needless to say, you shouldn’t confuse the electric field vector E with the energy of our photon, which we denote as E. No bold letter here. No subscript. 🙂

Pfff… This introduction is becoming way too long. What about our photon? We want to talk about one photon only and we’ve already written over a page and haven’t started yet. 🙂

Well… First, we must note that we’ll assume the light is perfectly monochromatic, so all photons will have an energy that’s equal to E = h·f: the energy is proportional to the frequency of our light, and the constant of proportionality is Planck’s constant. That’s Einstein’s relation, not a de Broglie relation. Just remember: we’re talking definite energy states here.

Second – and much more importantly – we may define two base states for our photon, |x⟩ and |y⟩ respectively, which correspond to the classical linear x- and y-polarization. So a photon can be in state |x⟩ or |y⟩ but, as usual, it is much more likely to be in some state that is some linear combination of these two base states.

OK. Now we can start playing with these ideas. Imagine a polarizer – or polaroid, as Feynman calls it – whose optical axis is tilted – say, it’s at an angle θ from the x-axis, as shown below. Classically, the light that comes through will be polarized in the x′-direction, which we associate with that angle θ. So we say the photons will be in the |x′⟩ state. [Illustration: the tilted polarizer and the |x′⟩ state.] So far, so good. But what happens if we have two polarizers, set up as shown below, with the optical axis of the first one at an angle θ, which is, say, equal to 30°? Will any light get through? [Illustration: two polarizers.]

Well? No answer? […] Think about it. What happens classically? […] No answer? Let me tell you. In a classical analysis, we’d say that only the x-component of the light that comes through the first polarizer would get through the second one. Huh? Yes. It is not all or nothing in a classical analysis. This is where the magnitude of E comes in, which we’ll write as E0, so as to not confuse it with the energy E. [I know you’ll confuse it anyway but… Well… I need to move on or I won’t get anywhere with this story.] So if E0 is the (maximum) magnitude (or amplitude – in the classical sense of the word, that is) of E as the light leaves the first polarizer, then its x-component will be equal to E0·cosθ. [I don’t need to make a drawing here, do I?] Of course, you know that the intensity of the light will be proportional to the square of the (maximum) field, which is equal to E0²·cos²θ = 0.75·E0² for θ = 30°.

So our classical theory says that only 3/4 of the energy that we were sending in will get through. The rest (1/4) will be absorbed. So how do we model that quantum-mechanically? It’s amazingly simple. We’ve already associated the |x′⟩ state with the photons coming out of the first polaroid, and so now we’ll just say that this |x′⟩ state is equal to the following linear combination of the |x⟩ and |y⟩ base states:

|x′⟩ = cosθ·|x⟩ + sinθ·|y⟩

Huh? Yes. As Feynman puts it, we should think our |x′⟩ beam of photons can, somehow, be resolved into |x⟩ and |y⟩ beams. Of course, we’re talking amplitudes here, so we’re talking the ⟨x|x′⟩ and ⟨y|x′⟩ amplitudes, and the absolute square of those amplitudes will give us the probability that a photon in the |x′⟩ state gets into the |x⟩ or |y⟩ state respectively. So how do we calculate that? Well… If |x′⟩ = cosθ·|x⟩ + sinθ·|y⟩, then we can obviously write the following:

⟨x|x′⟩ = cosθ·⟨x|x⟩ + sinθ·⟨x|y⟩

Now, we know that ⟨x|y⟩ = 0, because |x⟩ and |y⟩ are base states. For the same reason, ⟨x|x⟩ = 1. That’s just an implication of the definition of base states: ⟨i|j⟩ = δij. So we get:

⟨x|x′⟩ = cosθ

Lo and behold! The absolute square of that is equal to cos²θ, so each of these photons has a probability of 3/4 to get through. So if we were to have like 10 billion photons, then some 7.5 billion of them would get through. As these photons are all associated with a definite energy – and they go through as one whole, of course (no such thing as a 3/4 photon!) – we find that 3/4 of all of the energy goes through. The quantum-mechanical theory gives the same result as the classical theory – as it should, in this case at least!
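That 3/4 result is a one-liner to check numerically – the quantum probability per photon, |⟨x|x′⟩|² = cos²θ, is the same number as the classical intensity ratio:

```python
import numpy as np

theta = np.radians(30)        # angle between the two polarizers' optical axes
p = np.cos(theta) ** 2        # probability per photon: |<x|x'>|^2 = cos^2(theta)

photons_in = 10_000_000_000   # 10 billion photons leaving the first polarizer
print(round(p, 6))            # 0.75 -> same as the classical intensity ratio
print(round(p * photons_in))  # 7500000000: about 7.5 billion whole photons get through
```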

Now that’s all good for linear polarization. What about elliptical or circular polarization? Hmm… That’s a bit more complicated, but equally feasible. If we denote the state of a photon with a right-hand circular polarization (RHC) as |R⟩ and, likewise, the state of a photon with a left-hand circular polarization (LHC) as |L⟩, then we can write these as linear combinations of our base states |x⟩ and |y⟩, as shown below. [Equations: |R⟩ and |L⟩ as linear combinations of |x⟩ and |y⟩.] That’s where those coefficients under illustrations (c) and (g) come in, although I think they’ve got the sign of i (the imaginary unit) wrong. 🙂 So how does it work? Well… That 1/√2 factor is – obviously – just there to make sure everything’s normalized, so all probabilities over all states add up to 1. So that is taken care of and now we just need to explain how and why we’re adding |x⟩ and |y⟩. For |R⟩, the amplitudes must be the same but with a phase difference of 90°. That corresponds to the sine and cosine function, which are the same except for a phase difference of π/2 (90°), indeed: sin(φ + π/2) = cosφ. Now, a phase shift of 90° corresponds to a multiplication with the imaginary unit i. Indeed, i = e^(i·π/2) and, therefore, it is obvious that e^(i·π/2)·e^(i·φ) = e^(i·(φ + π/2)).

Of course, if we can write the RHC and LHC states as linear combinations of the base states |x⟩ and |y⟩, then you’ll believe me if I say that we can write any polarization state – including non-circular elliptical ones – as a linear combination of these base states. Now, there are two or three other things I’d like to point out here:

1. The RHC and LHC states can be used as base states themselves – so they satisfy all of the conditions for a set of base states. Indeed, it’s easy to add and then subtract the two equations above to express |x⟩ and |y⟩ in terms of |R⟩ and |L⟩. [Equations: the new base set.] As an exercise, you should verify that the right and left polarization states effectively satisfy the conditions for a set of base states.
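That exercise can also be done numerically. Assuming the usual column-vector representation of |x⟩ and |y⟩ and one common sign convention for |R⟩ and |L⟩ (conventions differ between texts, as the post itself notes), the ⟨i|j⟩ = δij conditions check out:

```python
import numpy as np

x = np.array([1, 0], dtype=complex)   # |x>
y = np.array([0, 1], dtype=complex)   # |y>

# RHC and LHC as linear combinations of |x> and |y> -- one common sign choice.
R = (x + 1j * y) / np.sqrt(2)
L = (x - 1j * y) / np.sqrt(2)

# Base-state conditions <i|j> = delta_ij (np.vdot conjugates its first argument):
print(np.isclose(np.vdot(R, R), 1))   # True: |R> is normalized
print(np.isclose(np.vdot(L, L), 1))   # True: |L> is normalized
print(np.isclose(np.vdot(R, L), 0))   # True: |R> and |L> are orthogonal
```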

2. We can also rotate the xy-plane around the z-axis (as mentioned, that’s the direction of propagation of our beam) and use the resulting |x′⟩ and |y′⟩ states as base states. In short, as Feynman puts it: “You can resolve light into x- and y-polarizations, or into x′- and y′-polarizations, or into right and left polarizations as a basis.” These pairs are always orthogonal and also satisfy the other conditions we’d impose on a set of base states.

3. The last point I want to make here is much more enigmatic but, as far as I am concerned – by far – the most interesting of all of Feynman’s Lecture on this topic. It’s actually just a footnote, but I am very excited about it. So… Well… What is it?

Well… Feynman does the calculations to show what a circularly polarized photon looks like when we rotate the coordinates around the z-axis, and shows that the phase of the right and left polarized states effectively keeps track of the x- and y-axes, so all of our “right-hand” rules don’t get lost somehow. He compares this analysis to an analysis he actually did – in a much earlier Lecture (in Chapter 5) – for spin-one particles. But, of course, here we’ve been analyzing the photon as a two-state system, right?

So… Well… Don’t we have a contradiction here? If photons are spin-one particles, then they’re supposed to be analyzed in terms of three base states, right? Well… I guess so… But then Feynman adds a footnote – with a very important remark:

“The photon is a spin-one particle which has, however, no ‘zero’-state.”

Why am I noting that? Because it confirms my theory about photons – force-particles – being different from matter-particles not only because of the different rules for adding amplitudes, but also because we get two wavefunctions for the price of one and, therefore, twice the energy for every oscillation! And so we’ll also have a distance of two Planck units between the equivalent of the “up” and “down” states of the photon, rather than one Planck unit, like what we have for the angular momentum of an electron.

I described the gist of my argument in my e-book, which you’ll find under another tab of this blog, and so I’ll refer you there. However, in case you’re interested, the summary of the summary is as follows:

  1. We can think of a photon having some energy that’s equal to E = p = m (assuming we choose our time and distance units such that c = 1), but that energy would be split up in an electric and a magnetic wavefunction respectively: ψE and ψB.
  2. Now, Schrödinger’s equation would then apply to both wavefunctions, but the E, p and m in those two wavefunctions are the same and not the same: their numerical value is the same (pE = EE = mE = pB = EB = mB), but they’re conceptually different. [They must be: I showed that, if they aren’t, then we get a phase and group velocity for the wave that doesn’t make sense.]

It is then easy to show that – using the B = i·E relation between the magnetic and the electric field vectors – we find a composite wavefunction for our photon which we can write as:

E + B = ψE + ψB = ψE + i·ψE = √2·e^(i(p·x/2 − E·t/2 + π/4)) = √2·e^(i·π/4)·e^(i(p·x/2 − E·t/2)) = √2·e^(i·π/4)·ψE

The whole thing then becomes:

ψ = ψE + ψB = √2·e^(i(p·x/2 − E·t/2 + π/4)) = √2·e^(i·π/4)·e^(i(p·x/2 − E·t/2))

So we’ve got a √2 factor here in front of our combined wavefunction for our photon which, knowing that the energy is proportional to the square of the amplitude, gives us twice the energy we’d associate with a regular amplitude… [With “regular”, I mean the wavefunction for matter-particles – fermions, that is.] So… Well… That little footnote of Feynman seems to confirm I really am on to something. Nice! Very nice, actually! 🙂
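The √2 factor itself is just the arithmetic of 1 + i = √2·e^(iπ/4). A short sketch of the author’s (speculative) ψB = i·ψE argument makes that explicit:

```python
import numpy as np

# Sketch of the post's (speculative) argument: if psi_B = i * psi_E, then
# psi_E + psi_B = (1 + i) * psi_E = sqrt(2) * exp(i*pi/4) * psi_E.
t = np.linspace(0.0, 1.0, 101)
psi_E = np.exp(1j * 2 * np.pi * t)   # a unit-amplitude wavefunction
psi_B = 1j * psi_E                   # the claimed B = i*E relation
psi = psi_E + psi_B

print(np.allclose(np.abs(psi), np.sqrt(2)))                           # True: amplitude sqrt(2)
print(np.allclose(psi, np.sqrt(2) * np.exp(1j * np.pi / 4) * psi_E))  # True: pi/4 phase shift
```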

Davidson’s function

This post has got nothing to do with quantum mechanics. It’s just… Well… My son – who’s preparing for his entrance examinations for engineering studies – sent me a message yesterday asking me to quickly explain Davidson’s function – as he has to do some presentation on it as part of a class assignment. As I am an economist – and Davidson’s function is used in transport economics – he thought I would be able to help him out quickly, and I was. So I just thought it might be interesting to quickly jot down my explanation as a post in this blog. It won’t help you with quantum mechanics but, if anything, it may help you think about functional forms and some related topics.

In his message, he sent me the function – copied below – and some definitions of the variables, which he got from some software package he had seen or used – at least that’s what he told me. 🙂 [Equation: Davidson’s function.] So… This function tells us that the dependent variable is the travel time t, and that it is seen as a function of some independent variable x and some parameters t0, c and ε. My son defined the variable x as the flow (of vehicles) on the road, and c as the capacity of the road. To be precise, he wrote the formula that was to be used for c as shown below. [Equation: the capacity formula.] What about a formula for x? Well… He said that was the actual flow of vehicles, but he had no formula for it. As for t0, that was the travel time “at free speed.” Finally, he said ε was a “paramètre de sensibilité de congestion.” Sorry for the French, but that’s the language of his school, which is located in some town in southern Belgium. In English, we might translate it as a congestion sensitivity coefficient. And so that’s what he struggled most with – or so he said.

So that got us started. I immediately told him that, if you write something like c − x, then you’d better make sure c and x have the same physical dimension. The formula above tells us that c is the number of vehicles that you can park on that road. Bumper to bumper. So I told him that’s a rather weird definition of capacity. It’s definitely not the dimension of flow: the flow should be some number per second or – much more likely in transport economics – per minute or per hour. So I told him that he should double-check those definitions of x and c, and that I’d get back to him to explain the formula itself after I had googled and read some articles on it. So I did that, and so here’s the full explanation I gave him.

While there’s some pretty awesome theory behind it (queuing theory and all that), which transportation gurus take very seriously – see, for example, the papers written by Rahmi Akçelik – a quick look at it all reveals that Davidson’s function is, essentially, just a specific functional form that we impose on some real-life problem. So I’d call it an empirical function: there’s some theory behind it, but it’s more based on experience than on pure theory. Of course, sound logic is – or should be – applied to empirical as well as to purely theoretical functions, but… Well… It’s a different approach than, say, modeling the dynamics of quantum-mechanical state changes. 🙂 Just note, for example, that we might just as well have tried something else – some exponential function, like the alternative shown below. [Equation: an alternative functional form.] Davidson’s function is, quite simply, just nicer and easier than the one above, because the function above is not linear. It could be quadratic (β = 2), or whatever, but surely not linear. In contrast, Davidson’s function is linear and, therefore, easy to fit onto actual traffic data using the simplest of simple linear regression models – and, speaking from experience, most engineers and economists in a real-life job can barely handle even that! 🙂

So just look at that x/(c − x) factor as measuring the congestion or saturation, somehow. We’ll denote it by s. If you can sort of accept that, then you’ll agree that Davidson’s function tells us that the extra time that’s needed to drive from some place a to some place b along our road will be directly proportional to:

  1. That congestion factor x/(c − x), about which I’ll write more in a moment;
  2. The free-speed or free-flow travel time t0 – which I’ll call the free-flow travel time from now on, rather than the free-speed travel time, because there’s no such thing as free speed in reality: we have speed limits – or safety limits, or scared moms in the car, whatever – and, a more authoritative argument, the literature on Davidson’s function also talks about free flow rather than free speed;
  3. That epsilon factor (ε), which – of all the stuff I presented so far – mystified my son most.

So the formula for the extra travel time that’s needed is, obviously, equal to: t − t0 = ε·t0·x/(c − x). So we have a very simple linear functional form for the extra travel time, and we can easily estimate the actual value of our ε parameter using actual traffic data in a simple linear regression. The data analysis toolkit of MS Excel will do stuff like this – if you have the data, of course – so you don’t need a sophisticated statistical software package here.
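Here’s a sketch of that regression idea on made-up data: generate noise-free travel times from Davidson’s function with a known ε, then recover it with a one-parameter least-squares fit on the linear form. All the numbers are purely illustrative:

```python
import numpy as np

def davidson_time(x, t0, c, eps):
    """Davidson's function: t = t0 * (1 + eps * x/(c - x)), valid for x < c."""
    return t0 * (1 + eps * x / (c - x))

# Made-up numbers: a known "true" epsilon generates the travel times, and a
# one-parameter least-squares fit on s = x/(c - x) recovers it.
t0, c, true_eps = 10.0, 5.0, 0.4
x = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 4.5])
t = davidson_time(x, t0, c, true_eps)

s = x / (c - x)                                       # the saturation factor
eps_hat = np.sum(s * (t - t0)) / (t0 * np.sum(s**2))  # slope of (t - t0) on t0*s
print(round(eps_hat, 6))                              # 0.4 on noise-free data
```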

So that’s it, really: Davidson’s function is, effectively, just nice and easy to work with. […] Well… […] Of course, we still need to define what x and c actually are. And what’s that so-called free-flow (or free-speed?) travel time? Well… The free-flow travel time is, obviously, the time you need to go from a to b at the free-flow speed. But what’s the free-flow speed? My friend’s Maserati is faster than my little Santro. 🙂 And are we allowed to go faster than the maximum authorized speed? Interesting questions.

So that’s where the analysis becomes interesting, and why we need better definitions of x and c. If c is some density – which is what my son’s rather non-sensical formula seems to imply – we may want to express it per unit distance. Per kilometer, for example. So we should probably re-define c more simply: as the number of lanes divided by the average length of the vehicles that are using the road. We get that by dividing the c above by the length of the road – so we divide the length of the road by the length of the road, which gives 1. 🙂 You may think that’s weird, because we get something like 3/5 = 0.6… So… What? Well… Yes. 0.6 vehicles per meter, so that’s 600 vehicles per kilometer! Does that sound OK? I think it does. So let’s express that capacity (c) as a maximum density – for the time being, at least.

Now, none of those cars can move, of course: they are all standing still. Bumper to bumper. It’s only when we decrease the density that they’re able to move. In fact, you can – and should – visualize the process: the first car moves and opens a space of, say, one or two meters, and then the second one, and so on and so on – till all cars are moving with a few meters in-between them. So the density will obviously decrease and, as a result, we’re getting some flow of vehicles here. If there’s three meters between them, for example, then the density goes down to 3/8 vehicles per meter, so that’s 375 vehicles per kilometer. Still a lot, and you’ll have to agree that – with only 3 meters between them – they’ll probably only move very slowly!

You get the idea. We can now define x as a density too – some density x that is smaller than the maximum density c. Then that x/(c − x) factor – measuring the saturation – obviously makes a lot of sense. The graph below shows what it looks like for c = 5. [Graph: x/(5 − x), with a vertical asymptote at x = 5.] [The value of 5 is just random, and its order of magnitude doesn’t matter either: we can always re-scale from m to km, or from seconds to minutes and what have you. So don’t worry about it.] Look at this example: when x is small – like 1 or 2 only – then x/(5 − x) doesn’t increase all that much. So that means we add little to the travel time. Conversely, when x approaches c = 5 – so that’s the limit (as you can see, the x = 5 line is a (vertical) asymptote of the function) – then the travel time becomes huge and starts approaching infinity. So… Well… Yes. That’s when all cars are standing still – bumper to bumper. But so what’s the free-flow speed? Is it the maximum speed of my friend’s Maserati – which is like 275 km/h? Well… I don’t think my friend ever drove that fast, so probably not. What else? Think about it. What should we choose here? The obvious choice is the speed limit: 120 km/h, or 90 km/h, or 60 km/h – or whatever. Why? Because you don’t want a ticket, I guess… In any case, let’s analyze that question later. Let’s first look at something else.
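A few sample values show that blow-up near the asymptote (again with the arbitrary c = 5):

```python
def saturation(x, c=5.0):
    """The congestion factor x/(c - x) from Davidson's function (x < c)."""
    return x / (c - x)

# Little extra travel time at low densities, blow-up as x approaches c = 5:
for x in (1.0, 2.0, 3.0, 4.0, 4.5, 4.9, 4.99):
    print(x, round(saturation(x), 2))
```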

Of course, you'll want to keep some distance between you and the car in front of you when driving at relatively high speeds, and that's the crux of the analysis really. You may or, more likely, you may not remember that your driving instructor told you to always measure the safety distance between you and the car(s) in front in seconds rather than in meters. In Belgium, we're told to stay two seconds away from the car in front of us. So when it passes a light pole, we'll count "twenty-one, twenty-two" and… Well… If we pass that same light pole while we're still counting those two seconds, then we'd better keep some more distance. It's got to do with reaction time: when the car in front of you slams the brakes, you need some time to react, and that car might also have better brakes than yours, so you want to build in some extra safety margin in case you don't slow down as fast as the car in front of you. So that two-seconds rule is not about the braking distance really – or not about the braking distance alone. No. It's more about the reaction time. In any case, the point is that you'll want to measure the safety distance in time rather than in meters. Capito? OK… Onwards…

Now, 120 km/h amounts to 120,000/3,600 = 33.333 meters per second. So the safety distance here is almost 67 meters! If the maximum authorized velocity is only 90 km/h, then the safety distance shrinks to 2 × (90,000/3,600) = 50 meters. For a maximum authorized velocity of 60 km/h, the safety distance would be equal to 33.333 meters. These are much larger distances than the average length of the vehicles and, hence, it's basically the safety distance – not the length of the vehicle – that we need to consider! Let's quickly calculate the related densities:

  • For a three-lane highway, with all vehicles traveling at 120 km/h and keeping their safety distance, the density will be equal to 3·1,000/66.666… = 45 vehicles per kilometer of highway, so that's 15 vehicles per lane.
  • If the travel speed is 90 km/h, then the density will be equal to 60 vehicles per km (20 vehicles per lane).
  • Finally, at 60 km/h, the density will be 90 vehicles per km (30 vehicles per lane).

Note that our two-seconds rule implies a linear relation between the safety distance and the maximum authorized speed. You can also see that the relation between the density and the maximum authorized speed is inversely proportional: if we halve the speed, the density doubles.

Now, you can easily come up with some more formulas, and play around a bit. For example, if we denote the safety distance by d, and the mentioned two seconds as td – so that's the time (t) that defines the safety distance d – then d is, logically, equal to d = td·vmax. But rather than trying to find more formulas and play with them, let's think about that concept of flow now. If we want to define the capacity – or the actual flow – in terms of the number of vehicles that are passing any point along this highway, how should we calculate that?

Well… The flow is the number of vehicles that will pass us in one hour, right? So if vmax is 120 km/h, then – assuming full capacity – all the vehicles on the next 120 km of highway will pass us, right? So that makes 45 vehicles per km times 120 km = 5,400 vehicles – per hour, of course. Hence, the flow is just the product of the density times the speed.

Now, look at this: if vmax is equal to 90 km/h, then we'll have 60 vehicles per km times 90 km = … Well… It's – interestingly enough – the same number: 5,400 vehicles per hour. Let's calculate for vmax = 60 km/h… The safety distance is 33.333 meters, so we can have 90 vehicles on each km of highway, which means that, over one hour, 90 times 60 = 5,400 vehicles will pass us! It's, once again, the same number: 5,400! Now that's a very interesting conclusion. Let me highlight it:

If we assume the vehicles will keep some fixed time distance between them (e.g. two seconds), then the capacity of our highway – expressed as some number of vehicles passing along it per time unit – does not depend on the velocity.

So the capacity – expressed as a flow rather than as a density – is just a fixed number: x vehicles per hour. The density affects only the (average) speed of all those vehicles. Hence, increasing densities are associated with lower speeds, and higher travel times, but they don't change the capacity.
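The arithmetic behind this conclusion is easy to check in a few lines. A minimal sketch, assuming the three-lane highway and the two-seconds rule from the example above (vehicle length neglected, since the safety distance dominates):

```python
def density_and_flow(v_kmh, lanes=3, t_gap=2.0):
    """Density (vehicles/km) and flow (vehicles/hour) at speed v_kmh,
    assuming every driver keeps a t_gap-second distance."""
    v_ms = v_kmh * 1000 / 3600        # speed in meters per second
    spacing = t_gap * v_ms            # safety distance d = td * v (meters)
    density = lanes * 1000 / spacing  # vehicles per km of highway
    flow = density * v_kmh            # vehicles per hour passing any point
    return density, flow

for v in (120, 90, 60):
    density, flow = density_and_flow(v)
    print(v, round(density), round(flow))  # densities 45, 60, 90 - flow 5400 every time
```

The flow comes out at 5,400 vehicles per hour for all three speeds, as claimed: halving the speed doubles the density, and the two effects cancel exactly.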

It’s really a rather remarkable conclusion, even if the relation between the density聽and the flow聽is easily understood – both mathematically and, more importantly, intuitively. For example, if the density goes down to 60 vehicles per km of highway, then they will only be able to move at a speed of 90 km/h, but we鈥檒l still have that flow of 5,400 vehicles per hour 鈥 which we can look at as the capacity but聽expressed as some flow rather than as a density.聽Lower densities allow for even higher speeds: we calculated above that a density of 45 vehicles per km would allow them to drive at a maximum speed of 120 km/h, so travel time would be reduced even more, but we’d still have 5,400 vehicles per hour! So… Well… Yes. It all makes sense.

Now what happens if the density is even lower, so we could – theoretically – drive safely, or not so safely, at some speed that's way above the speed limit? If we have enough cars – say 30 vehicles per km, but all driving more than 120 km/h, while respecting the two-seconds rule – we'd still have the same flow: 5,400 vehicles per hour. And travel time would go down. But so we can think of lower densities and higher speeds but, again, there's got to be some limit here – a speed limit, safety considerations, a limit to what our engine or our car can do, and, finally, there's the speed of light too. 🙂 I am just joking, of course, but I hope you see the point. At some point, it doesn't matter whether or not the density goes down even further: the travel time should hit some minimum. And it's that minimum – the lowest possible travel time – that you'd probably like to define as t0.

As mentioned, the minimum travel time is associated with some maximum speed, and – after some consideration of the possible candidates for the maximum speed – you'll agree the speed limit is a better candidate than the 275 km/h limit of my friend's Maserati Quattroporte. Likewise, you would probably also like to define x0 as the (maximum) density at the speed limit.

What we’re saying here is that – in theory at least – our t = t(x) function should start with a linear section, between x = 0 and x = x0. That linear section defines a density 0 < x < x0聽which is compatible with us driving at the speed limit – say, 120 km/h – and, hence, with us only needing the time t = t0聽to arrive at our destination. Only when x聽becomes larger than x0, we’ve got to reduce speed – below the speed limit (say, 120 km/h) – to keep the flow going while keeping an appropriate safety distance. A reduction of speed implies a increase聽in travel time, of course. So that’s what’s illustrated in the graph below.

[graph]

To be specific, if the speed limit is 120 km/h, then – assuming you don't want to be caught speeding – the minimum travel time will always be equal to 30 seconds per km, even if you're alone on the highway. Now, as long as the density is less than 45 vehicles per km, you can keep that travel time the same, because you can do your 120 km/h while keeping the safety distance. But if the density increases above 45 vehicles per km, then stuff starts slowing down because everyone is uncomfortable with the shorter distance between them and the car in front. As the density goes up even more – say, to 60 vehicles per km – we can only do 90 km/h, and so the travel time will then be equal to 40 seconds per km. And when it goes to 90 vehicles per km, speed slows down to 60 km/h, and so that's a travel time of 60 seconds per km. Of course, you're smart – very smart – and so you'll immediately say this implies that the second section of our graph should be linear too, like this:

graph 2You’re right. But then… Well… That doesn’t work with our limit for x, which is c. As I pointed out, c聽is an聽absolute聽maximum density: you just can’t聽park聽any more cars on that highway – unless you fold them up or so. 馃檪 So what’s the conclusion? Well… We may think of the Davidson function as a primitive combination of both shapes, as shown below.

[graph 3]
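As an aside, the two straight sections we just discussed are easy to reproduce numerically. A minimal sketch, assuming the three-lane, 120 km/h example from above (the Davidson curve then smooths the kink between the two sections):

```python
def travel_time_s_per_km(x, lanes=3, t_gap=2.0, v_max_kmh=120):
    """Travel time (seconds per km) for density x (vehicles per km):
    flat at the free-flow value up to x0, then the two-seconds rule."""
    t_free = 3600 / v_max_kmh  # 30 s/km at 120 km/h
    # two-seconds rule: per-lane spacing = lanes*1000/x meters,
    # speed = spacing / t_gap, so t = 1000/speed = t_gap*x/lanes s/km
    t_congested = t_gap * x / lanes
    return max(t_free, t_congested)

for x in (10, 45, 60, 90):
    print(x, travel_time_s_per_km(x))  # 30.0, 30.0, 40.0, 60.0
```

Note how the kink sits exactly at x0 = 45 vehicles per km: below it the two-seconds rule is not binding, above it the speed limit is not binding.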

I call it a primitive approximation, because the Davidson function (that's the green smooth curve above) is not a precise (linear or non-linear) combination of the two functions we presented (I am talking about the blue broken line and the smooth red curve here). It's just… Well… Some primitive approximation. 🙂 Now you can write some very complicated papers – as other authors do – to sort of try to explain this shape, but you'll find yourself fiddling with variable time-distance rules and other hypotheses that may or may not make sense. In short, you're likely to introduce other logical inconsistencies when trying to refine the model. So my advice is to just accept Davidson's function as some easy empirical fit to some real-life situation, and think of what the parameters actually do – mathematically speaking, that is. How do they change the shape of our graph?

So we’re now ready to explain that聽epsilon聽factor (蔚) by looking at what it does, indeed. Please try an online graphing tool with a slider (e.g. – just type something like a + bx in the function box, and you’ll see the sliders appear – so you can see how the function changes for different parameter values. The two graphs below, for example, which I made using that graphing tool, show you the function t = 2 + 2鈭櫸碘垯x/(10鈭x) for 蔚 = 0.5 and 蔚 = 10 respectively. As you can see, both functions start at t = 2 and have the same asymptote at x = c = 10. However, you’ll agree that they look very different聽– and that’s because of the value of the 蔚 parameter. For 蔚 = 0.5, the travel time does not increase all that much 鈥 initially at least. Indeed, as you can see, t is equal to 3 if the density is half of the capacity (t = 3 for x = 5 = c/2). In contrast, for 蔚 = 10, we have immediate saturation, so to speak: travel time goes through the roof almost immediately! For example, for聽x聽= 3, t 鈮 10.6, so while the density is less than a third of the capacity, the associated travel time is already more than five聽times the free-flow travel time!

Now I have a tricky question for you: does it make sense to allow ε to take on values larger than one? Think about it. 🙂 In any case, now you've seen what the ε factor does from a math point of view. So… Well… I'll conclude here by just noting that it does, indeed, make sense to refer to ε as a "paramètre de sensibilité de congestion", because that's what it is: a congestion sensitivity coefficient. Indeed, it's not the congestion or saturation parameter itself (that's a term we should reserve for the x/(c−x) factor), but a congestion sensitivity coefficient alright!

Of course, you will still want some theoretical interpretation. Well… To be honest, I can't give you one. I don't want to get lost in all of those theoretical excursions on Davidson's function, because… Well… It's no use. That ε is just what it is: it's a proportionality coefficient that we are imposing upon the functional form of our travel-time function. You can sum it up as follows:

If x/(c−x) is the congestion parameter (or variable, I should say), then it goes from 0 to ∞ (infinity) when the traffic density (x) goes from x = 0 to x = c (full capacity). So, yes, we can call the x/(c−x) factor the congestion or saturation variable and write it as s = x/(c−x). And then we can refer to ε as the "paramètre de sensibilité de congestion", because it is a measure not of the congestion itself, but of the sensitivity of the travel time to the congestion.

If you’d absolutely want some mathematical formula for it, then you could use this one, which you get from re-writing 螖t = t0路蔚路s as聽螖t/t0聽= 蔚路s:

∂(Δt/t0)/∂s = ε

But… Frankly. You can stare at this formula for a long while – it's a derivative alright, and you know what derivatives stand for – but you'll probably learn nothing much from it. [Of course, please write me if you don't agree, Vincent!] I'd just look at those two graphs, and note how their form changes as a function of ε. Perhaps you have some brighter idea about it!
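For the record, the formula follows in one line from Davidson's function itself:

```latex
t = t_0\left(1 + \varepsilon\,\frac{x}{c-x}\right) = t_0(1 + \varepsilon s)
\;\Rightarrow\; \frac{\Delta t}{t_0} = \frac{t - t_0}{t_0} = \varepsilon s
\;\Rightarrow\; \frac{\partial(\Delta t/t_0)}{\partial s} = \varepsilon
```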

So… Well… I am done. You should now fully understand Davidson's function. Let me write it down once more:

t = t0·[1 + ε·x/(c−x)]

Again, as mentioned, its main advantage is its linearity. Because of its linearity, it is easy to actually estimate the parameters: it's just a simple linear regression – using actual travel times and actual congestion measurements – and so then we can estimate the value of ε and see if it works. Huh? How do we see if it works? Well… I told you already: when everything is said and done, Davidson's function is just one of the many models of the actual reality, so it tries to explain how travel time increases because of congestion. There are other models, which come with other functions – but they are more complicated, and so are the functions that come with them (check out that paper by Rahmi Akçelik, for example). Only reality can tell us which model is the best fit to whatever it is that we're trying to model. So that's why I call Davidson's function an empirical function, and so you should check it against reality. That's when a statistical software package comes in handy: it allows you to test the fit of various – linear and non-linear – functional forms against a real-life data set.
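To make the estimation idea concrete: because t = t0 + (t0·ε)·s is linear in the saturation s = x/(c−x), ordinary least squares recovers t0 as the intercept and ε as the slope divided by the intercept. The sketch below uses synthetic data – the t0 = 2, ε = 0.5, c = 10 values are just made up for the exercise; real travel times and congestion measurements would go in their place.

```python
import numpy as np

rng = np.random.default_rng(0)
t0_true, eps_true, c = 2.0, 0.5, 10.0
x = rng.uniform(0, 8, 200)   # observed densities (synthetic)
s = x / (c - x)              # saturation variable
t = t0_true * (1 + eps_true * s) + rng.normal(0, 0.01, x.size)  # noisy travel times

slope, intercept = np.polyfit(s, t, 1)  # simple linear regression: t = slope*s + intercept
t0_hat, eps_hat = intercept, slope / intercept
print(round(t0_hat, 2), round(eps_hat, 2))  # close to the true 2.0 and 0.5
```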

So that’s it.聽I tasked my son to go through this post and correct any errors – only typos, I hope! – I may have made. I hope he’ll enjoy this little exercise. 馃檪

Comments on MIT's Stern-Gerlach lab experiment

In my previous post, I noted that I'd go through MIT's documentation on the Stern-Gerlach experiment that their undergrad students have to do, because we should now – after 175 posts on quantum physics 🙂 – be ready to fully understand what is said in there. So this post is just going to be a list of comments. I'll organize it section by section.

Theory of atomic beam experiments

The theory is known – and then it isn't, of course. The key idea is that individual atoms behave like little magnets. Why? In the simplest and most naive of models, it's because the electrons somehow circle around the nucleus. You've seen the illustration below before. Note that current is, by convention, the flow of positive charge, which is, of course, opposite to the electron flow. You can check the direction by applying the right-hand rule: if you curl the fingers of your right hand in the direction of the current in the loop (so that's opposite to v), your thumb will point in the direction of the magnetic moment (μ).

[orbital angular momentum illustration]

So the electron orbit – in whatever way we'd want to visualize it – gives us L, which we refer to as the orbital angular momentum. We know the electron is also supposed to spin about its own axis – even if we know this planetary model of an electron isn't quite correct. So that gives us a spin angular momentum S. In the so-called vector model of the atom, we simply add the two to get the total angular momentum J = L + S.

Of course, now you’ll say: only hydrogen has one electron, so how does it work with multiple electrons? Well… Then we have multiple聽orbital angular momentum li聽which are to be added to give a total orbital angular momentum L. Likewise, the electrons spins si聽can also be added to give some total spin angular momentum S. So we write:

J = L + S with L = Σili and S = Σisi

Really? Well… If you'd google this to double-check – check the Wikipedia article on it, for example – then you'll find this additivity property is valid only for relatively light atoms (Z ≤ 30) and only if any external magnetic field is weak enough. The way individual orbital and spin angular momenta have to be combined so as to arrive at some total L, S and J is referred to as a coupling scheme: the additivity rule above is referred to as LS coupling, but one may also encounter LK coupling, or jj coupling, or other stuff. The US National Institute of Standards and Technology (NIST) has a nice article on all these models – but we need to move on here. Just note that we do assume the LS coupling scheme applies to our potassium beam – because its atomic number (Z) is 19, and the external magnetic field is assumed to be weak enough.

The vector model of the atom describes the atom using angular momentum vectors. Of course, we know that a magnetic field will cause our atomic magnet to precess – rather than line up. At this point, the classical analogy between a spinning top – or a gyroscope – and our atomic magnet becomes quite problematic. First, think about the implications for L and S when assuming, as we usually do, that J precesses nicely about an axis that is parallel to the magnetic field – as shown in the illustration below, which I took from Feynman's treatment of the matter.

[precession illustration]

If J is the sum of two other vectors L and S, then this has rather weird implications for the precession of L and S, as shown in the illustration below – which I took from the Wikipedia article on LS coupling. Think about it: if L and S are independent, then the axis of precession for these two vectors should be just the same as for J, right? So their axis of precession should also be parallel to the magnetic field (B), so that's the direction of the Jz component, which is just the z-axis of our reference frame here.

[LS coupling illustration]

More importantly, our classical model also gets into trouble when actually measuring the magnitude of Jz: repeated measurements will not yield some randomly distributed continuous variable, as one would classically expect. No. In fact, that's what this experiment is all about: it shows that Jz will take on only certain quantized values. That is what is shown in the illustration below (which once again assumes the magnetic field (B) is along the z-axis).

[vector model illustration]

I copied the illustration above from the HyperPhysics site, because I found it to be enlightening and intriguing at the same time. First, it also shows this rather weird implication of the vector model: if J continually changes direction because of its precession in a weak magnetic field, then L and S must, obviously, also continually change direction. However, this illustration is even more intriguing than the Wikipedia illustration because it assumes the axes of precession of L and S are actually the same!

So what’s going on here? To better understand what’s going on, I started to read the whole HyperPhysics article on the vector model, which also includes the illustration below, with the following comments: “When orbital angular momentum L and electron spin S are combined to produce the total angular momentum of an atomic electron, the combination process can be visualized in terms of a vector model. Both the orbital and spin angular momenta are seen as precessing about the direction of the total angular momentum J. This diagram can be seen as describing a single electron, or multiple electrons for which the spin and orbital angular momenta have been combined to produce composite angular momenta S and L respectively. In so doing, one has made assumptions about the coupling of the angular momenta which are described by the LS coupling scheme which is appropriate for light atoms with relatively small external magnetic fields.”vector-model-2Hmm… What about those illustrations on the right-hand side – with the vector sums and those values for聽j聽and mj? I guess the idea may also be illustrated by the table below: combining different values for聽l (卤1) and聽s聽(卤1/2) gives four possible values, ranging from +3/2 to -1/2, for聽j聽=聽l聽+ s.tableHaving said that, the illustration raises a very fundamental question: the length of the sum of two vectors is definitely聽not聽the same as the sum of the聽length of the two vectors! So… Well… Hmm… Something doesn’t make sense here! However, I can’t dwell any longer on this. I just wanted to note you should not take all that’s published on those oft-used sites on quantum mechanics for granted. But so I need to move on. Back to the other illustration – copied once more below.vector-modelWe have that very special formula for the magnitude (J) of the angular momentum J:

‖J‖ = J = √(J·J) = √[j·(j+1)·ℏ²] = √[j·(j+1)]·ℏ

So if j = 3/2, then J is equal to √[(3/2)·(3/2+1)]·ℏ = √(15/4)·ℏ ≈ 1.9365·ℏ, so that's almost 2ℏ. 🙂 At the same time, we know that for j = 3/2, the possible values of Jz can only be +3ℏ/2, +ℏ/2, −ℏ/2, and −3ℏ/2. So that's what's shown in that half-circular diagram: the magnitude of J is larger than its z-component – always!
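A quick numerical check of that statement, working in units of ℏ:

```python
import math

def J_magnitude(j):
    """Quantum-mechanical magnitude: |J| = sqrt(j*(j+1)), in units of hbar."""
    return math.sqrt(j * (j + 1))

def Jz_values(j):
    """Allowed z-components m (in units of hbar), with m = -j, -j+1, ..., +j."""
    return [-j + k for k in range(int(round(2 * j)) + 1)]

j = 1.5  # j = 3/2
print(round(J_magnitude(j), 4))            # 1.9365 - almost 2
print(Jz_values(j))                        # [-1.5, -0.5, 0.5, 1.5]
print(J_magnitude(j) > max(Jz_values(j)))  # True: the magnitude always exceeds Jz
```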

OK. Next. What’s that 3p3/2聽notation? Excellent question! Don’t think this 3p denotes an electron orbital, like 1s or 3d – i.e. the orbitals we got from solving Schr枚dinger’s equation. No. In fact, the illustration above is somewhat misleading because the correct notation is not 3p3/2聽but 3P3/2. So we have a capital聽P which is preceded by a superscript 3. This is the notation for the so-called聽term symbolfor a nuclear,聽atomic or molecular (ground)聽state which – assuming our LS coupling model is valid – because we’ve got other term symbols for other coupling models – we can write, more generally, as:

2S+1LJ (with 2S+1 as a superscript prefix and J as a subscript)
The J, L and S in this thing are the following:

1. The J is the total angular momentum quantum number, so it is – the notation gets even more confusing now – the j in the ‖J‖ = J = √(J·J) = √[j·(j+1)·ℏ²] = √[j·(j+1)]·ℏ expression. We know that number is 1/2 for electrons, but it may take on other values for nuclei, atoms or molecules. For example, it is 3/2 for nitrogen, and 2 for oxygen, for which the corresponding terms are 4S3/2 and 3P2 respectively.

2. The S in the term symbol is the total spin quantum number, and 2S+1 itself is referred to as the fine-structure multiplicity. It is not an easy concept. Just remember that the fine structure describes the splitting of the spectral lines of atoms due to electron spin. In contrast, the gross structure energy levels are those we effectively get from solving Schrödinger's equation assuming our electrons have no spin. We also have a hyperfine structure, which is due to the existence of a (small) nuclear magnetic moment, which we do not take into consideration here, which is why the 4S3/2 and 3P2 terms are sometimes referred to as describing electronic ground states. In fact, the MIT lab document, which we are studying here, refers to the ground state of the potassium atoms in the beam as an electronic ground state, which is written up as 2S1/2. So S is, effectively, equal to 1/2. [Are you still there? If so, just write it down: 2S+1 = 2 ⇔ S = 1/2. That means the following: our potassium atom behaves like an electron: its spin is either 'up' or, else, it is 'down'. There is no in-between.]

3. Finally, the L in the term symbol is the total orbital angular momentum quantum number but, rather than using a number, the values of L are often represented as S, P, D, F, etcetera. This L number is very confusing because – as mentioned above – one would think it represents those s, p, d, f, g,… orbitals. However, that is not the case. The difference may easily be illustrated by observing that a carbon atom, for example, has six electrons, which are distributed over the 1s, 2s and 2p orbitals (one pair each). However, its ground state only gets one L number: L = P. Hence, its value is 1. Of course, now you will wonder how we get that number.

Well… I wish I could give you an easy answer, but I can't. For two p electrons – think of our carbon atom once again – we can have L = 0, 1 or 2, or S, P and D. They effectively correspond to different energy levels, which are related to the way these two electrons interact with each other. The phenomenon is referred to as angular momentum coupling. In fact, all of the numbers we discussed so far – J, S and L – are numbers resulting from angular momentum coupling. As Wikipedia puts it: "Angular momentum coupling refers to the construction of eigenstates of total angular momentum out of eigenstates of separate angular momenta." [As you know, each eigenstate corresponds to an energy level, of course.]

Now that should clear some of the confusion on the 2S+1LJ notation: the capital letters J, S and L refer to some total, as opposed to the quantum numbers you are used to, i.e. n, l, m and s – the so-called principal, orbital, magnetic and spin quantum numbers respectively. The lowercase letters are quantum numbers that describe an electron in an atom, while those capital letters denote quantum numbers describing the atom – or a molecule – itself.

OK. Onwards. But where were we? 🙂 Oh… Yes. That J = L + S formula gives us some total electronic angular momentum, but we'll also have some nuclear angular momentum, which our MIT paper denotes as I. Our vector model of our potassium atom allows us, once again, to simply add the two to get the total angular momentum, which is written as F = J + I = L + S + I. This, then, explains why the MIT experiment writes the magnitude of the total angular momentum as:

‖F‖ = F = √(F·F) = √[f·(f+1)]·ℏ
Of course, here I don't need to explain – or so I hope – why this quantum-mechanical formula for the calculation of the magnitude is what it is (or, equivalently, why the usual Euclidean metric – i.e. √(x² + y² + z²) – is not to be used here). If you do need an explanation, you'll need to go through the basics once again.

Now, the whole point, of course, is that the z-component of F can have only the discrete values that are specified by the Fz = mf·ℏ equation, with mf – i.e. the (total) magnetic quantum number – having an equally discrete value equal to mf = −f, −(f−1), …, +(f−1), +f.

For the rest, I probably shouldn't describe the experiment itself: you know it. But let me just copy the set-up below, so it's clear what it is that we're expecting to happen. In addition, you'll also need the illustration because I'll refer to the d1 and d2 distances shown in it in what follows.

[set-up illustration]

Note the MIT documentation does spell out some additional assumptions. Most notably, it says that the potassium atoms that emerge from the oven (at a temperature of 200°) will be:

(1) almost exclusively in the ground electronic state,

(2) nearly equally distributed among the two (magnetic) sub-states characterized by f, and, finally,

(3) very nearly equally distributed among the hyperfine states, i.e. the states with the same f but with different mf.

I am just noting these assumptions because it is interesting to note that – according to the man or woman who wrote this paper – we would actually have states within states here. The paper states that the hyperfine splitting of the two sub-beams we expect to come out of the magnet can only be resolved by very advanced atomic beam techniques, so… Well… That's not the apparatus that's being used for this experiment.

However, it's all a bit weird, because the paper notes that the rules for combining the electronic and nuclear angular momentum – using that F = J + I = L + S + I formula – imply that our quantum number f = i ± j can be either 1 or 2. These two values would be associated with the following mf and Fz values:

f = 1 ⇒ Fz = mf·ℏ = −ℏ, 0 or +ℏ (so we'd have three beams here)

f = 2 ⇒ Fz = mf·ℏ = −2ℏ, −ℏ, 0, +ℏ or +2ℏ (so we'd have five beams here)
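The counting rule behind those two lines is simply mf = −f, …, +f, i.e. 2f + 1 values (and hence beams) per value of f:

```python
def m_f_values(f):
    """The 2f + 1 allowed magnetic quantum numbers for integer f."""
    return list(range(-f, f + 1))

for f in (1, 2):
    print(f, m_f_values(f))  # f = 1: three values; f = 2: five values
```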

Neither of the two possibilities relates to the situation at hand – which assumes two beams only. In short, I think the man or woman who wrote the theoretical introduction – an assistant professor, most likely (no disrespect here: that's how far I progressed in economics – nothing more, nothing less) – might have made a mistake. Or perhaps he or she may have wanted to confuse us.

I’ll look into it over the coming days. As for now, all you need to know – please jot it down! – is that our potassium atom is fully described by 2S1/2. That shorthand notation has all the quantum number we need to know. Most importantly, it tells us S聽is, effectively, equal to 1/2. So… Well… That聽2S1/2聽notation tells us our potassium atom should behave like an electron: its spin is either ‘up’ or ‘down’. No in-between. 馃檪聽So we should have聽two聽beams. Not three or five. No fine or hyperfine sub-structures! 馃檪 In any case, the rest of the paper makes it clear the assumption is, effectively, that the angular momentum number is equal to聽j聽= 1/2. So… Two beams only. 馃檪

How to calculate the expected deflection

We know that the inhomogeneous magnetic field (B), whose direction is the z-axis, will result in a force, which we have calculated a couple of times already as being equal to:

Fz = μz·∂Bz/∂z

In case you'd want to check this, you can check one of my posts on this. I just need to make one horrifying remark on notation here: while the same symbol is used, the force Fz is, obviously, not to be confused with the z-component of the angular momentum F = J + I = L + S + I that we described above. Frankly, I hope that the MIT guys have corrected that in the meanwhile, because it's really terribly confusing notation! In any case… Let's move on.

Now, we assume the deflecting force is constant because of the rather particular design of the magnet pole pieces (see Appendix I of the paper). We can then use Newton's second law (F = m·a) to calculate the velocity in the z-direction, which is denoted by Vz (I am not sure why a capital letter is used here, but that's not important, of course). That velocity is assumed to go from 0 to its final value Vz while our potassium atom travels between the two magnet poles but – to be clear – at any point in time, Vz will increase linearly – not exponentially – so we can write: Vz = a·t1, with t1 the time that is needed to travel through the magnet. Now, the relevant mass is the mass of the atom, of course, which is denoted by M. Hence, it is easy to see that a = Fz/M = Vz/t1. Hence, we find that Vz = Fz·t1/M.

Now, the vertical distance traveled (z) can be calculated by solving the usual integral: z = ∫0t1 v(t)·dt = ∫0t1 a·t·dt = a·t1²/2 = (Vz/t1)·t1²/2 = Vz·t1/2. Of course, once our potassium atom comes out of the magnetic field, it will continue to travel upward or downward with the same velocity Vz, which adds Vz·t2 to the total distance traveled along the z-direction. Hence, the formula for the deflection is, effectively, the one that you'll find in the paper:

z = Vz·t1/2 + Vz·t2 = Vz·(t1/2 + t2)

Now, the travel times depend on the velocity of our potassium atom along the y-axis, which is approximated by equating it with ‖V‖ = V, because the y-component of the velocity is easily the largest – by far! Hence, t1 = d1/V and t2 = d2/V. Some more manipulation will then give you the expression we need, which is a formula for the deflection in terms of variables that we actually know:

z = [Fz·d1/(M·V²)]·(d1/2 + d2)
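The whole kinematic chain can be put into one small function. The numbers below are made-up placeholders (the actual Fz, M, V, d1 and d2 come from the experiment itself); the point is just the logic a = Fz/M, Vz = a·t1, z = Vz·(t1/2 + t2):

```python
def deflection(Fz, M, V, d1, d2):
    """Deflection z of an atom crossing a magnet of length d1 and then
    drifting a distance d2 to the detector, all at forward speed V."""
    t1 = d1 / V         # time spent inside the magnet
    t2 = d2 / V         # time from magnet exit to detector
    Vz = (Fz / M) * t1  # constant force, so Vz grows linearly up to this value
    return Vz * (t1 / 2 + t2)

# hypothetical SI numbers, for illustration only:
print(deflection(Fz=1e-22, M=6.5e-26, V=600.0, d1=0.1, d2=0.5))
```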
Statistical mechanics

We now need to combine this with the Maxwell-Boltzmann distribution for the velocities we gave you in our previous post:

f(V/V0)·d(V/V0) = (4/√π)·(V/V0)²·exp[−(V/V0)²]·d(V/V0)

The next step is to use this formula so as to be able to calculate a distribution which would describe the intensity of the beam. Now, it's easy to understand that such intensity will be related to the flux of potassium atoms, and it's equally easy to get that a flux is defined as the rate of flow per unit area. Hmm… So how does this get us the formula below?

I(V/V0)·d(V/V0) = 2·(V/V0)³·exp[−(V/V0)²]·d(V/V0)

The tricky thing – of course – is the use of those normalized velocities because… Well… It's easy to see that the right-hand side of the equation above – just forget about the d(V/V0) bit for a second, as we have it on both sides of the equation and so it cancels out anyway – is just density times velocity. We do have a product of the density of particles and the velocity with which they emerge here – albeit a normalized velocity. But then… Who cares? The normalization is just a division by V0 – or a multiplication by 1/V0, which is some constant. From a math point of view, it doesn't make any difference: our variable is V/V0 instead of V. It's just like using some other unit. No worries here – as long as you use the new variable consistently everywhere. 🙂

Alright. […] What’s next? Well… Nothing much. The only thing that we still need to explain now is that factor 2. It’s easy to see that’s just a normalization factor – just like that 4/√π factor in the first formula. So we get it from imposing the usual condition:

∫ I(V/V0)·d(V/V0) = 1 (integrating over all values of V/V0, from 0 to ∞)

So… What’s next… Well… We’re almost there. 🙂 As the MIT paper notes, the f(V) and I(V/V0) functions can be mapped to each other: the related transformation maps a velocity distribution to an intensity distribution – i.e. a distribution of the deflection – and vice versa.

Now, the rest of the paper is just a lot of algebraic manipulations – distinguishing the case of a quantized Fz versus a continuous Fz. Here again, I must admit I am a bit shocked by the mix-up of concepts and symbols. The paper talks about a quantized deflecting force – while it’s obvious we should be talking about a quantized angular momentum. The two concepts – and their units – are fundamentally different: the unit in which angular momentum is measured is the action unit: newton·meter·second (N·m·s). Force is just force: x newton.

Having said that, the mix-up does trigger an interesting philosophical question: what is quantized, really? Force (expressed in N)? Energy (expressed in N·m)? Momentum (expressed in N·s)? Action (expressed in N·m·s, i.e. the unit of angular momentum)? Space? Time? Or space-time – related through the absolute speed of light (c)? Three factors (force, distance and time), six possibilities. What’s your guess?


What’s my guess? Well… The formulas tell us the only thing that’s quantized is action: Nature itself tells us we have to express it in terms of Planck units. However, because action is a product involving all of these factors, with different dimensions, the quantum-mechanical quantization of action can, obviously, express itself in various ways. 🙂

Statistical mechanics re-visited

Quite a while ago – in June and July 2015, to be precise – I wrote a series of posts on statistical mechanics, which included digressions on thermodynamics, Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac statistics (probability distributions used in quantum mechanics), and so forth. I actually thought I had sort of exhausted the topic. However, when going through the documentation on that Stern-Gerlach experiment that MIT undergrad students need to analyze as part of their courses, I realized I actually did not present some very basic formulas that you’ll definitely need in order to actually understand that experiment.

One of those basic formulas is the one for the distribution of velocities of particles in some volume (like an oven, for instance), or in a particle beam – like the beam of potassium atoms that is used to demonstrate the quantization of the magnetic moment in the Stern-Gerlach experiment. In fact, we’ve got two formulas here, which differ subtly – as subtly as the difference between v (boldface, so it’s a vector) and v (lightface, so it’s a scalar) 🙂 – but fundamentally:

f(v) = (m/(2πkT))^(3/2)·e^(−m·v²/2kT)

f(v) = 4π·v²·(m/(2πkT))^(3/2)·e^(−m·v²/2kT)
Both functions are referred to as the Maxwell-Boltzmann density distribution, but the first distribution gives us the density for some v in the velocity space, while the second gives us the distribution density of the absolute value (or modulus) of the velocity, so that is the distribution density of the speed, which is just a scalar – without any direction. As you can see, the second formula includes a 4π·v² factor.
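As a quick sanity check, we can verify numerically that the speed density integrates to 1. The mass below is (roughly) that of a potassium atom; the oven temperature is an assumed, illustrative value:

```python
import numpy as np

# Sanity check on the second formula: the Maxwell-Boltzmann speed density
# f(v) = 4π·v²·(m/2πkT)^(3/2)·e^(−m·v²/2kT) must integrate to 1.
k = 1.380649e-23      # Boltzmann constant (J/K)
m = 6.49e-26          # mass of a potassium atom (kg), approximate
T = 450.0             # assumed oven temperature (K)

def f_speed(v):
    return 4 * np.pi * v**2 * (m / (2 * np.pi * k * T))**1.5 * np.exp(-m * v**2 / (2 * k * T))

v = np.linspace(0.0, 5000.0, 200001)               # m/s grid, wide enough for the tail
total = float(np.sum(f_speed(v)) * (v[1] - v[0]))  # Riemann sum, should be ≈ 1
```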

The question is: how are these formulas related to Boltzmann’s f(E) = C·e^(−E/kT) Law? The answer is: we can derive all of these formulas – for the distribution of velocities, or of momenta – by clever substitutions. However, as evidenced by the two formulas above, these substitutions are not always straightforward. So let me quickly show you a few things here.

First note that the two formulas above already include the e^(−E/kT) function if we equate the energy E with the kinetic energy: E = K.E. = m·v²/2. Of course, if you’ve read those June-July 2015 posts, you’ll note that we derived Boltzmann’s Law in the context of a force field, like gravity, or an electric potential. For example, we wrote the law for the density (n = N/V) of gas in a gravitational field (like the Earth’s atmosphere) as n = n0·e^(−P.E./kT). In this formula, we only see the potential energy: P.E. = m·g·h, i.e. the product of the mass (m), the gravitational acceleration (g), and the height (h). However, when we’re talking about the distribution of velocities – or of momenta – then the kinetic energy comes into play.

So that’s a first thing to note: Boltzmann’s Law is actually a whole set of laws. For example, the frequency distribution of particles in a system over various possible states also involves the same exponential function: F(state) ∝ e^(−E/kT). E is just the total energy of the state here (which varies from state to state, of course), so we don’t distinguish between potential and kinetic energy here.

So what energy concept should we use in that Stern-Gerlach experiment? Because these potassium atoms in that oven – or when they come out of it in a beam – have kinetic energy only, our E = m·v²/2 substitution does the trick: we can say that the potential energy is taken to be zero, so that all energy is in the form of kinetic energy. So now we understand the e^(−m·v²/2kT) function in those f(v) and f(v) formulas. Now we only need to explain those complicated coefficients. How do we get these?

We get them through clever substitutions using equations such as:

fv(v)·dv = fp(p)·dp

What are we writing here? We’re basically combining two normalization conditions: if fv(v) and fp(p) are proper probability density functions, then they must give us 1 when integrating over their domain. The domain of these two functions is, obviously, the velocity (v) and momentum (p) space. The velocity and momentum space are the same mathematical space, but they are obviously not the same physical space. But the two physical spaces are closely related: p = m·v, and so it’s easy to do the required transformation of variables. For example, it’s easy to see that, if E = m·v²/2, then E is also equal to E = p²/2m.
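That transformation of variables is easy to check numerically: since p = m·v implies dp = m·dv, the condition fv(v)·dv = fp(p)·dp means fv(v) = m·fp(m·v). A minimal sketch, using the standard Maxwell-Boltzmann densities for the magnitudes and illustrative values for the mass and temperature:

```python
import numpy as np

# Checking fv(v) = m·fp(m·v): the Jacobian of the substitution p = m·v is
# just the constant m. Mass (potassium-like) and temperature are assumed.
k, T, m = 1.380649e-23, 450.0, 6.49e-26

def f_p(p):   # Maxwell-Boltzmann density for the magnitude of the momentum
    return 4 * np.pi * p**2 * (2 * np.pi * m * k * T)**-1.5 * np.exp(-p**2 / (2 * m * k * T))

def f_v(v):   # the same distribution, expressed for the speed
    return 4 * np.pi * v**2 * (m / (2 * np.pi * k * T))**1.5 * np.exp(-m * v**2 / (2 * k * T))

v = np.linspace(1.0, 2000.0, 50)
ok = bool(np.allclose(f_v(v), m * f_p(m * v)))   # True: the two sides agree
```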

However, when doing these substitutions, things get tricky. We already noted that p and v are vectors, unlike E, or p and v – which are scalars, or magnitudes. So we write: p = (px, py, pz) and |p| = p, and v = (vx, vy, vz) and |v| = v. Of course, you also know how we calculate those magnitudes:

p = |p| = √(px² + py² + pz²) and v = |v| = √(vx² + vy² + vz²)
Note that this also implies the following: p·p = p² = px² + py² + pz² = p². Trivial, right? Yes. But have a look now at the following differentials:

  • d³p
  • dp
  • dp = d(px, py, pz)
  • dpx·dpy·dpz

Are these the same or not? Now you need to think, right? That d³p and dp are different beasts is obvious: d³p is, obviously, some infinitesimal volume, as opposed to dp, which is, equally obviously, an (infinitesimal) interval. But what volume exactly? Is it the same as that dp = d(px, py, pz) volume, and is that the same as the dpx·dpy·dpz volume?

Fortunately, the volume differentials are, in fact, the same – so you can start breathing again. 🙂 Let’s get going with that d³p notation for the time being, as you will find that’s the notation which is used in the Wikipedia article on the Maxwell-Boltzmann distribution – which I warmly recommend, because – for a change – it is a much easier read than other Wikipedia articles on stuff like this. Among other things, the mentioned article writes the following:

fE(E)·dE = fp(p)·d³p

What is this? Well… It’s just like that fv(v)·dv = fp(p)·dp equation: it combines the normalization condition for both distributions. However, it’s much more interesting, because, on the left-hand side, we multiply a density with an (infinitesimal) interval (dE), while on the right-hand side we multiply with an (infinitesimal) volume (d³p). Now, the (infinitesimal) energy interval dE must, obviously, correspond with the (infinitesimal) momentum volume d³p. So how does that work?

Well… The mentioned Wikipedia article talks about the “spherical symmetry of the energy-momentum dispersion relation” (that dispersion relation is just E = |p|²/2m, of course), but that doesn’t make us all that wiser, so let’s try a more heuristic approach. You might remember the formula for the volume of a spherical shell, which is simply the difference between the volume of the outer sphere and the volume of the inner sphere: V = (4π/3)·R³ − (4π/3)·r³ = (4π/3)·(R³ − r³). Now, for a very thin shell of thickness Δr, we can use the following first-order approximation: V ≈ 4π·r²·Δr. In case you wonder where that comes from: just expand (4π/3)·[(r + Δr)³ − r³] = 4π·r²·Δr plus terms of higher order in Δr, which we can neglect for a thin shell.


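That first-order approximation is easy to check numerically: for a thin shell, the relative error of 4π·r²·Δr versus the exact volume is of the order of Δr/r.

```python
import numpy as np

# The first-order shell-volume approximation in numbers: for a thin shell,
# 4π·r²·Δr differs from the exact volume by a relative error of order Δr/r.
def shell_exact(r, dr):
    return (4 * np.pi / 3) * ((r + dr)**3 - r**3)

def shell_approx(r, dr):
    return 4 * np.pi * r**2 * dr

r, dr = 1.0, 1e-5
rel_err = abs(shell_exact(r, dr) - shell_approx(r, dr)) / shell_exact(r, dr)
```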
Perfect. That’s all we need to know. We’ll use that first-order approximation to re-write d³p as:

d³p = dp = 4π·|p|²·d|p| = 4π·p²·dp

Note that we’ll have the same formula for d³v, of course: d³v = dv = 4π·|v|²·d|v| = 4π·v²·dv, and also note that we get that same 4π·v² factor which we mentioned when discussing the f(v) and f(v) formulas. That is not a coincidence, of course, but – as I’ll explain in a moment – it is not so easy to immediately relate the formulas. In any case, we’re now ready to relate dE and dp so we can re-write that d³p formula in terms of m, E and dE:

d³p = 4π·p²·dp = 4π·(2mE)·√(m/(2E))·dE = 4√2·π·m^(3/2)·√E·dE (using p = √(2mE) and, hence, dp = √(m/(2E))·dE)
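We can also let the computer check that change of variables: with E = p²/2m, the Jacobian d|p|/dE turns the momentum-shell volume 4π·p²·dp into 4π·m·√(2mE)·dE. A numerical sketch with an illustrative (potassium-like) mass:

```python
import numpy as np

# Numerical check: 4π·p²·(dp/dE) should equal 4π·m·√(2mE) when E = p²/2m.
m = 6.49e-26                             # illustrative mass (kg)
E = np.linspace(1e-20, 1e-19, 1001)      # energy grid (J)
p = np.sqrt(2 * m * E)                   # |p| from E = p²/2m
dp_dE = np.gradient(p, E)                # numerical Jacobian d|p|/dE
shell = 4 * np.pi * p**2 * dp_dE         # 4π·p²·(dp/dE)
target = 4 * np.pi * m * np.sqrt(2 * m * E)
ok = bool(np.allclose(shell[1:-1], target[1:-1], rtol=1e-4))  # skip one-sided endpoints
```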
We are now – finally! – sufficiently armed to derive all of the formulas we want – or need. Let me just copy them from the mentioned Wikipedia article:

fp(p) = (2π·m·kT)^(−3/2)·e^(−p²/2mkT)

fE(E) = 2·√(E/π)·(kT)^(−3/2)·e^(−E/kT)

fv(v) = (m/(2π·kT))^(3/2)·e^(−m·v²/2kT)

As said, you’ll encounter these formulas regularly – and so it’s good that you know how you can derive them. Indeed, the derivation is very straightforward and is done in the same article: the tips I gave you should allow you to read it in a couple of minutes only. Only the density function for velocities might cause you a bit of trouble – but only for a very short moment: just use the p = m·v equation to write d³p as d³p = 4π·p²·dp = 4π·m²·v²·m·dv = 4π·m³·v²·dv = m³·d³v, and you’re all set. 🙂

Of course, you will recognize the formula for the distribution of velocities: it’s the f(v) we mentioned in the introduction. However, you’re more likely to need the f(v) formula (i.e. the probability density function for the speed) than the f(v) function. So how can we get the f(v) – i.e. that formula for the distribution of speeds, with the 4π·v² factor – from the f(v) formula?

Well… I wish I could give you an easy answer. In fact, the same Wikipedia article suggests it’s easy – but it’s not. It involves a transformation from Cartesian to polar coordinates: the volume element dvx·dvy·dvz is to be written as v²·sinθ·dv·dθ·dφ. And then… Well… Have a look at this link. 🙂 It involves a so-called Jacobian transformation matrix. If you want to know more about it, then I recommend you read some article on how to transform distribution functions: here’s a link to one of those, but you can easily google others. Frankly, for now, I’d suggest you just accept the formula for f(v). 🙂 Let me copy it from the same article in a slightly different form:

f(v)·dv = (4/√π)·(v/v0)²·e^(−v²/v0²)·d(v/v0)

Now, the final thing to note is that you’ll often want to use so-called normalized velocities, i.e. velocities that are defined as a v/v0 ratio, with v0 the most probable speed, which is equal to √(2kT/m). You get that value by calculating the df(v)/dv derivative, and then finding the value v = v0 for which df(v)/dv = 0. You should now be able to verify the formula that is used in the mentioned MIT version of the Stern-Gerlach experiment:

f(v)·dv = (π/π^(3/2))·(4·v²/v0³)·e^(−v²/v0²)·dv

Indeed, when you write it all out – note that π/π^(3/2) = 1/√π 🙂 – you’ll see the two formulas are effectively equivalent. Of course, by now you are completely formula-ed out, and so you probably don’t even wonder what that f(v)·dv product actually stands for. What does it mean, really? Now you’ll sigh: why would I even want to know that? Well… I want you to understand that MIT experiment. 🙂 And you won’t if you don’t know what f(v)·dv actually represents. So think about it. […]

[…] OK. Let me help you once more. Remember the normalization condition once again: the integral of the whole thing – over the whole range of possible velocities – needs to add up to 1, so f(v)·dv is really the fraction of (potassium) atoms (inside the oven) with a velocity in the (infinitesimally small) dv interval. It’s going to be a tiny fraction, of course: just a tiny bit larger than zero. Surely not larger than 1, obviously. 🙂 Think of integrating the function between two values – say v1 and v2 – that are pretty close to each other.
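That interpretation is easy to make concrete: integrate f(v) over a narrow interval around some speed and you get the fraction of atoms in that interval. The sketch below also confirms, numerically, that the distribution peaks at v0 = √(2kT/m); the mass (potassium-like) and temperature are illustrative values:

```python
import numpy as np

# What f(v)·dv means in practice: the fraction of atoms in a narrow speed
# interval, plus a check that the density is maximal at v0 = sqrt(2kT/m).
k, m, T = 1.380649e-23, 6.49e-26, 450.0
v0 = np.sqrt(2 * k * T / m)                  # most probable speed

def f_speed(v):
    return 4 * np.pi * v**2 * (m / (2 * np.pi * k * T))**1.5 * np.exp(-m * v**2 / (2 * k * T))

# Fraction of atoms with a speed within 1% of v0: small, and well below 1.
v = np.linspace(0.99 * v0, 1.01 * v0, 1001)
fraction = float(np.sum(f_speed(v)) * (v[1] - v[0]))

# The density peaks at v0, as the df(v)/dv = 0 condition predicts.
grid = np.linspace(1.0, 4 * v0, 100000)
v_peak = grid[np.argmax(f_speed(grid))]
```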

So… Well… We’re done for now. So where are we now in terms of understanding the calculations in that description of that MIT experiment? Well… We’ve got the meat. But we need a lot of other ingredients now. We’ll want formulas for the intensity of the beam at some point along the axis measuring its deflection from its main direction. That axis is the z-axis. So we’ll want a formula for some I(z) function.

Deflection? Yes. There are a lot of steps to go through now. First, we’ll need some formula measuring the flux of (potassium) atoms coming out of the oven. And then… Well… Just have a look at the set-up in the paper and try to make your way through the whole thing now – which is just what I want to do in the coming days, so I’ll give you some more feedback soon. 🙂 Here I only wanted to introduce those formulas for the distribution of velocities and momenta, because you’ll need them in other contexts too.

So I hope you found this useful. Stuff like this all makes it somewhat more real, doesn’t it? 🙂 Frankly, I think the math is at least as fascinating as the physics. We could have a closer look at those distributions, for example, by noting the following:

1. The probability density function for the momenta is the product of three normal distributions. Which ones? Well… The distributions of px, py and pz respectively: three normal distributions whose variance is equal to mkT. 🙂

2. The fE(E) function is a chi-squared (χ²) distribution with 3 degrees of freedom. Now, we have the equipartition theorem (which you should know – if you don’t, see my post on it), which tells us that this energy is evenly distributed among all three degrees of freedom. It is then relatively easy to show – if you know something about χ² distributions at least 🙂 – that the energy per degree of freedom (which we’ll write as ε below) will also be distributed as a chi-squared distribution, now with one degree of freedom:

fε(ε) = e^(−ε/kT)/√(π·ε·kT)

This holds true for any number of degrees of freedom. For example, a diatomic molecule will have extra degrees of freedom, which are related to its rotational and vibrational motion (I explained that in my June-July 2015 posts too, so please go there if you’d want to know more). So we can really use this stuff in, for example, the theory of the specific heat of gases. 🙂

3. The function for the distribution of the velocities is also a product of three independent normally distributed variables – just like the density function for momenta. In this case, we have the vx, vy and vz variables that are normally distributed, with variance kT/m.
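The points above can be checked by simulation: draw px, py and pz from three independent normal distributions with variance m·kT and verify that the mean kinetic energy comes out at (3/2)·kT, as equipartition requires. Again, the mass and temperature are illustrative values:

```python
import numpy as np

# Simulation check: px, py, pz ~ Normal(0, sqrt(m·kT)) implies that the mean
# kinetic energy equals (3/2)·kT (equipartition: kT/2 per degree of freedom).
rng = np.random.default_rng(42)
k, m, T = 1.380649e-23, 6.49e-26, 450.0

p = rng.normal(0.0, np.sqrt(m * k * T), size=(500_000, 3))  # px, py, pz samples
E = (p**2).sum(axis=1) / (2 * m)                            # kinetic energies
ratio = float(E.mean() / (1.5 * k * T))                     # should be ≈ 1
```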

So… Well… I’m done – for the time being, that is. 🙂 Isn’t it a privilege to be alive and to be able to savor all these little wonderful intellectual excursions? I wish you a very nice day and hope you enjoy stuff like this as much as I do. 🙂

The quantization of magnetic moments

You may not have many questions after a first read of Feynman’s Lecture on the Stern-Gerlach experiment and his more general musings on the quantization of the magnetic moment of an elementary particle. [At least I didn’t have all that many after my first reading, which I summarized in a previous post.]

However, a second, third or fourth reading should trigger some, I’d think. My key question is the following: what happens to that magnetic moment of a particle – and its spin [1] – as it travels through a homogeneous or inhomogeneous magnetic field? We know – or, to be precise, we assume – its spin is either "up" (Jz = +ℏ/2) or "down" (Jz = −ℏ/2) when it enters the Stern-Gerlach apparatus, but then – when it’s moving in the field itself – we would expect that the magnetic field would, somehow, line up the magnetic moment, right?

Feynman says that it doesn’t: from all of the schematic drawings – and the subsequent discussion of Stern-Gerlach filters – it is obvious that the magnetic field – which we denote as B, and which we assume to be inhomogeneous [2] – should not result in a change of the magnetic moment. Feynman states it as follows: "The magnetic field produces a torque. Such a torque you would think is trying to line up the (atomic) magnet with the field, but it only causes its precession."

[…] OK. That’s too much information already, I guess. Let’s start with the basics. The key to a good understanding of this discussion is the force formula:

Fz = μz·(∂B/∂z)
We should first explain this formula before discussing the obvious question: over what time – or over what distance – should we expect this force to pull the particle up or down in the magnetic field? Indeed, if the force ends up aligning the moment, then the force will disappear!

So let’s first explain the formula. We start by explaining the energy U. U is the potential energy of our particle, which it gets from its magnetic moment and its orientation in the magnetic field B. To be precise, we can write the following:

Umag = −μ·B = −μ·B·cosθ
Of course, μ and B are the magnitudes of μ and B respectively, and θ is the angle between μ and B: if the angle θ is zero, then Umag will be negative. Hence, the total energy of our particle (U) will actually be less than what it would be without the magnetic field: it is the energy when the magnetic moment of our particle is fully lined up with the magnetic field. When the angle is a right angle (θ = ±π/2), then the energy doesn’t change (Umag = 0). Finally, when θ is equal to π or −π, then its energy will be more than what it would be outside of the magnetic field. [Note that the angle θ effectively varies between −π and π – not between 0 and 2π!]

Of course, we may already note that, in quantum mechanics, Umag will only take on a very limited set of values. To be precise, for a particle with spin number j = 1/2, the possible values of Umag will be limited to two values only. We will come back to that in a moment. First that force formula.

Energy is force times distance. To be precise, when a particle is moved from point a to point b, then its change in energy can be written as the following line integral:

U(b) − U(a) = −∫ F·ds (integrating along the path from a to b)
Note that the minus sign is there because of the convention that we’re doing work against the force when increasing the (potential) energy of whatever we’re moving. Also note that the F·ds product is a vector (dot) product: it is, obviously, equal to Ft times ds, with Ft the magnitude of the tangential component of the force. The equation above gives us that force formula:

F = −∇U – which, for Umag = −μz·B, gives us Fz = μz·(∂B/∂z)
Feynman calls it the principle of virtual work, which sounds a bit mysterious – but so you get it by taking the derivative of both sides of the energy formula.

Let me now get back to the real mystery of quantum mechanics, which tells us that the magnetic moment – as measured along our z-axis – will only take one of two possible values. To be precise, we have the following formula for μz:

μz = −g·[qe/(2m)]·Jz
This is a formula you just have to accept for the moment. It needs a bit of interpretation, and you need to watch out for the sign. The g-factor is the so-called Landé g-factor: it is equal to 1 for a so-called pure orbital moment, 2 for a so-called pure spin moment, and some number in-between in reality, which is always some mixture of the two: both the electron’s orbit around the nucleus as well as the electron’s rotation about its own axis contribute to the total angular momentum and, hence, to the total magnetic moment of our electron. As for the other factors, m and qe are, of course, the mass and the charge of our electron, and Jz is either +ℏ/2 or −ℏ/2. Hence, if we know g, we can easily calculate the two possible values for μz.
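For example, plugging electron values into that formula with g = 2 (a pure spin moment), the two possible values of μz come out at plus and minus the Bohr magneton:

```python
# Plugging numbers into μz = −g·[qe/(2m)]·Jz for an electron with a pure
# spin moment (g = 2, Jz = ±ħ/2): the two possible values come out at
# plus and minus the Bohr magneton (≈ 9.274×10⁻²⁴ J/T).
qe = 1.602176634e-19      # elementary charge (C)
me = 9.1093837015e-31     # electron mass (kg)
hbar = 1.054571817e-34    # reduced Planck constant (J·s)
g = 2.0                   # pure spin moment

mu_z = [-g * (qe / (2 * me)) * Jz for Jz in (+hbar / 2, -hbar / 2)]
```

Note the sign: because the electron’s charge is conventionally counted as positive in qe here, spin "up" (Jz = +ℏ/2) gives the negative value of μz.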

Now, that also means we could – theoretically – calculate the two possible values of that angle θ. For some reason, no handbook in physics ever does that. The reason is probably a good one: electron orbits, and the concept of spin itself, are not like the orbit and the spin of some planet in a planetary system. In fact, we know that we should not think of electrons like that at all: quantum physicists tell us we may only think of it as some kind of weird cloud around a center. That cloud has a density which is to be calculated by taking the absolute square of the quantum-mechanical amplitude of our electron.

In fact, when thinking about the two possible values for θ, we may want to remind ourselves of another peculiar consequence of the fact that the angular momentum – and, hence, the magnetic moment – is not continuous but quantized: the magnitude of the angular momentum J is not J = √(J·J) = √J² in quantum mechanics but J = √(J·J) = √[j·(j+1)·ℏ²] = √[j·(j+1)]·ℏ. For our electron, j = 1/2 and, hence, the magnitude of J is equal to J = √[(1/2)·(3/2)]·ℏ = √(3/4)·ℏ ≈ 0.866·ℏ. Hence, the magnitude of the angular momentum is larger than the maximum value of Jz – and not just a little bit, because that maximum value is only ℏ/2! That leads to that weird conclusion: in quantum mechanics, we find that the angular momentum is never completely along any one direction [3]! In fact, this conclusion basically undercuts the very idea of the angular momentum – and, hence, the magnetic moment – having any precise direction at all! [This may sound spectacular, but there is actually a classical equivalent to the idea of the angular momentum having no precisely defined direction: gyroscopes may not only precess, but nutate as well. Nutation refers to a kind of wobbling around the direction of the angular momentum. For more details, see the post I wrote after my first reading of Feynman’s Lecture on the quantization of magnetic moments. :-)]
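Putting that conclusion in numbers for j = 1/2: the magnitude of J is about 0.866·ℏ, while Jz is at most ℏ/2, so the angle between J and the z-axis can never be smaller than roughly 54.7°:

```python
import math

# The 'never completely along any one direction' conclusion in numbers:
# for j = 1/2, |J| = sqrt(j·(j+1))·ħ ≈ 0.866·ħ while Jz is at most ħ/2,
# so cos(θ_min) = (ħ/2)/|J| = j/sqrt(j·(j+1)).
j = 0.5
J_over_hbar = math.sqrt(j * (j + 1))                    # |J| in units of ħ
theta_min = math.degrees(math.acos(j / J_over_hbar))    # smallest possible angle (degrees)
```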

Let’s move on. So if, in quantum mechanics, we cannot associate the magnetic moment – or the angular momentum – with some specific direction, then how should we imagine it? Well… I won’t dwell on that here, but you may want to have a look at another post of mine, where I develop a metaphor for the wavefunction which may help you to sort of understand what it might be. The metaphor may help you to think of some oscillation in two directions – rather than in one only – with the two directions separated by a right angle. Hence, the whole thing obviously points in some direction but it’s not very precise. In any case, I need to move on here.

We said that the magnetic moment will take one of two values only, in any direction along which we’d want to measure it. We also said that the (maximum) value along that direction – any direction, really – will be smaller than the magnitude of the moment. [To be precise, we said that for the angular momentum, but the formulas above make it clear the conclusions also hold for the magnetic moment.] So that means that the magnetic moment is, in fact, never fully aligned with the magnetic field. Now, if it is not aligned – and, importantly, if it also does not line up – then it should precess. Now, precession is a difficult enough concept in classical mechanics, so you may think it’s going to be totally abstruse in quantum mechanics. Well… That is true – to some extent. At the same time, it is surely not unintelligible. I will not repeat Feynman’s argument here, but he uses the classical formulas once more to calculate an angular velocity and a precession frequency – although he doesn’t explain what they might actually physically represent. Let me just jot down the formula for the precession frequency:

ωp = g·[qe/(2m)]·B
We get the same factors: g, qe and m. In addition, you should also note that the precession frequency is directly proportional to the strength of the magnetic field, which makes sense. Now, you may wonder: what is the relevance of this? Can we actually measure any of this?
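To get a sense of the magnitudes involved: for electron values, g = 2 and an (assumed) field of 1 tesla, the precession frequency works out to about 28 GHz – and it scales linearly with B:

```python
# Orders of magnitude for ωp = g·[qe/(2m)]·B, using electron values, g = 2
# (pure spin moment) and an assumed field of 1 tesla.
qe = 1.602176634e-19     # elementary charge (C)
me = 9.1093837015e-31    # electron mass (kg)
g, B = 2.0, 1.0          # g-factor and field strength (T), assumed for illustration

omega_p = g * (qe / (2 * me)) * B          # angular frequency (rad/s)
f_p = omega_p / (2 * 3.141592653589793)    # ordinary frequency (Hz), about 28 GHz
```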

We can. In fact, you may wonder about the if I inserted above: if we can measure the Landé g-factor… Can we? We can. It’s done in a resonance experiment, which is referred to as the Rabi molecular-beam method – but then it might also be just an atomic beam, of course!

The experiment is interesting, because it shows the precession is – somehow – real. It also illustrates some other principles we have been describing above.

The set-up looks pretty complicated. We have a series of three magnets. The first magnet is just a Stern-Gerlach apparatus: a magnet with a very sharp edge on one of the pole tips so as to produce an inhomogeneous magnetic field. Indeed, a homogeneous magnetic field implies that ∂B/∂z = 0 and, hence, the force along the z-direction would be zero and our atomic magnets would not be displaced.

The second magnet is more complicated. Its magnetic field is uniform, so there are no vertical forces on the atoms and they go straight through. However, the magnet includes an extra set of coils that can produce an alternating horizontal field as well. I’ll come back to that in a moment. Finally, the third magnet is just like the first one, but with the field inverted. Have a look at it:


It may not look very obvious but, after some thinking, you’ll agree that the atoms can only arrive at the detector if they follow the trajectories a and/or b. In fact, these trajectories are the only possible ones because of the slits S1 and S2.

Now what’s the idea of that horizontal field B′ in magnet 2? In a classical situation, we could change the angular momentum – and the magnetic moment – by applying some torque about the z-axis. The idea is shown in Figures (a) and (b) below.


Figure (a) shows – or tries to show – some rotating field B′ – one that is always at right angles to both the angular momentum as well as to the (uniform) B field. That would be effective. However, Figure (b) shows another arrangement that is almost equally effective: an oscillating field that sort of pulls and pushes at some frequency ω. Classically, such fields would effectively change the angle of our gyroscope with respect to the z-axis. Is it also the case quantum-mechanically?

It turns out it sort of works the same in quantum mechanics. There is a big difference though. Classically, μz would change gradually, but in quantum mechanics it cannot: Jz – and, hence, μz – must jump suddenly from one value to the other, i.e. from Jz = +ℏ/2 to Jz = −ℏ/2, or the other way around. In other words, it must flip up or down. Now, if an atom flips, then it will, of course, no longer follow the (a) or (b) trajectories: it will follow some other path, like a′ or b′, which makes it crash into the magnet. Now, it turns out that almost all atoms will flip if we get that frequency ω right. The graph below shows this 'resonance' phenomenon: there is a sharp drop in the 'current' of atoms if ω is close or equal to ωp.


What’s ωp? It’s that precession frequency for which we gave you that formula above. To make a long story short, from the experiment, we can calculate the Landé g-factor for that particular beam of atoms – say, silver atoms [4]. So… Well… Now we know it all, don’t we?

Maybe. As mentioned when I started this post, when going through all of this material, I always wonder why there is no magnetization effect: why would an atom remain in the same state when it crosses a magnetic field? When it’s already aligned with the magnetic field – to the maximum extent possible, that is – then it shouldn’t flip, but what if its magnetic moment is opposite? It should lower its energy by flipping, right? And it should flip just like that. Why would it need an oscillating B′ field?

In fact, Feynman does describe how the magnetization phenomenon can be analyzed – classically and quantum-mechanically – but he does that for bulk materials: solids, or liquids, or gases – anything that involves lots of atoms that are kicked around because of the thermal motions. So that involves statistical mechanics – which I am sure you’ve skipped so far. 🙂 It is a beautiful argument – which ends with an equally beautiful formula, which tells us the magnetization (M) of a material – which is defined as the net magnetic moment per unit volume – has the same direction as the magnetic field (B) and a magnitude M that is proportional to the magnitude of B:

M = N·μ²·B/(3kT)

The μ in this formula is the magnitude of the magnetic moment of the individual atoms (and N is the number of atoms per unit volume) and so… Well… It’s just like the formula for the electric polarization P, which we described in some other post. In fact, the formulas for P and M are same-same but different, as they would say in Thailand. 🙂 But this wonderful story doesn’t answer our question. The magnetic moment of an individual particle should not stay what it is: if it doesn’t change because of all the kicking around as a result of thermal motions, then… Well… These little atomic magnets should line up. That means atoms with their spin "up" should go into the "spin-down" state.

I don’t have an answer to my own question for now. I suspect it’s got to do with the strength of the magnetic field: a Stern-Gerlach apparatus involves a weak magnetic field. If it’s too strong, the atomic magnets must flip. Hence, a more advanced analysis should probably include that flipping effect. When quickly googling – just now – I found an MIT lab exercise on it, which also provides a historical account of the Stern-Gerlach experiment itself. I skimmed through it – and will read all of it in the coming days – but let me just quote this from the historical background section:

“Stern predicted that the effect would be just barely observable. They had difficulty in raising support in the midst of the post-war financial turmoil in Germany. The apparatus, which required extremely precise alignment and a high vacuum, kept breaking down. Finally, after a year of struggle, they obtained an exposure of sufficient length to give promise of an observable silver deposit. At first, when they examined the glass plate they saw nothing. Then, gradually, the deposit became visible, showing a beam separation of 0.2 millimeters! Apparently, Stern could only afford cheap cigars with a high sulfur content. As he breathed on the glass plate, sulfur fumes converted the invisible silver deposit into visible black silver sulfide, and the splitting of the beam was discovered.”

Isn’t this funny? And great at the same time? 🙂 But… Well… The point is: the paper for that MIT lab exercise makes me realize Feynman does cut corners when explaining stuff – and some corners are more significant than others. I note, for example, that they talk about interference peaks rather than "two distinct spots on the glass plate." Hence, the analysis is somewhat more sophisticated than Feynman pretends it to be. So, when everything is said and done, Feynman’s Lectures may indeed be reading material for undergraduate students only. Is it time to move on?

[1] The magnetic moment – as measured in a particular coordinate system – is equal to μ = −g·[q/(2m)]·J. The factor J in this expression is the angular momentum, and the coordinate system is chosen such that its z-axis is along the direction of the magnetic field B. The component of J along the z-axis is written as Jz. This z-component of the angular momentum is what is, rather loosely, being referred to as the spin of the particle in this context. In most other contexts, spin refers to the spin number j which appears in the formula for the possible values of Jz, which are Jz = j·ℏ, (j−1)·ℏ, (j−2)·ℏ, …, (−j+2)·ℏ, (−j+1)·ℏ, −j·ℏ. Note the separation between the possible values of Jz is equal to ℏ. Hence, j itself must be an integer (e.g. 1 or 2) or a half-integer (e.g. 1/2). We usually look at electrons, whose spin number j is 1/2.

[2] One of the pole tips of the magnet that is used in the Stern-Gerlach experiment has a sharp edge. Therefore, the magnetic field strength varies with z. We write: ∂B/∂z ≠ 0.

[3] The z-direction can be any direction, really.

[4] The original experiment was effectively done with a beam of silver atoms. The lab exercise which MIT uses to show the effect to physics students involves potassium atoms.

Feynman’s Lecture on Superconductivity

The ultimate challenge for students of Feynman’s iconic Lectures series is, of course, to understand his final one: A Seminar on Superconductivity. As he notes in his introduction to this formidably dense piece, the text does not present the detail of each and every step in the development and, therefore, we’re not supposed to immediately understand everything. As Feynman puts it: we should just believe (more or less) that things would come out if we would be able to go through each and every step. Well… Let’s see. It took me one long maddening day to figure out the first formula. It says that the amplitude for a particle to go from a to b in a vector potential (think of a classical magnetic field) is the amplitude for the same particle to go from a to b when there is no field (A = 0) multiplied by the exponential of the line integral of the vector potential times the electric charge divided by Planck’s constant.

Of course, after a couple of hours, I recognized the formula for the magnetic effect on an amplitude, which I described in my previous post, and which tells us that a magnetic field will shift the phase of the amplitude of a particle by an amount equal to:


Hence, if we write ⟨b|a⟩ for A = 0 as ⟨b|a⟩A=0 = C·e^(iθ), then ⟨b|a⟩ in A will, naturally, be equal to ⟨b|a⟩in A = C·e^(i(θ+φ)) = C·e^(iθ)·e^(iφ) = ⟨b|a⟩A=0·e^(iφ), and so that explains it. 🙂 Alright… Next.
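We can verify this little phase-factor algebra numerically – a quick sketch, with arbitrary values for C, θ and φ:

```python
import cmath

C, theta, phi = 0.5, 0.7, 1.2   # arbitrary magnitude and phases

amp_no_field = C * cmath.exp(1j * theta)            # amplitude for A = 0
amp_with_field = C * cmath.exp(1j * (theta + phi))  # amplitude in the field

# the vector potential only rotates the phase: same magnitude, shifted angle
assert abs(amp_with_field - amp_no_field * cmath.exp(1j * phi)) < 1e-12
assert abs(abs(amp_with_field) - abs(amp_no_field)) < 1e-12
print("C·e^(i(θ+φ)) = C·e^(iθ)·e^(iφ): checked")
```

The second assertion makes the physical point: the field changes the phase, not the magnitude, of the amplitude.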

The Schrödinger equation in an electromagnetic field

Feynman then jots down Schrödinger’s equation for the same particle (with charge q) moving in an electromagnetic field that is characterized not only by a vector potential A but also by the (scalar) potential Φ:


Now where does that come from? We know the standard formula in an electric field, right? It’s the formula we used to find the energy states of electrons in a hydrogen atom:

i·ℏ·∂ψ/∂t = −(1/2)·(ℏ²/m)·∇²ψ + V·ψ

Of course, it is easy to see that we replaced V by q·Φ, which makes sense: the potential energy of a charge in an electric field is the product of the charge (q) and the (electric) potential (Φ), because Φ is, obviously, the potential energy of the unit charge. It’s also easy to see we can re-write −ℏ²·∇²ψ as [(ℏ/i)·∇]·[(ℏ/i)·∇]ψ because (1/i)·(1/i) = 1/i² = 1/(−1) = −1. 🙂 Alright. So it’s just that −q·A term in the (ℏ/i)·∇ − q·A expression that we need to explain now.

Unfortunately, that explanation is not so easy. Feynman basically re-derives Schrödinger’s equation using his trademark historical argument – which did not include any magnetic field – but now with a vector potential. The re-derivation is rather annoying, and I didn’t have the courage to go through it myself, so you should – just like me – just believe Feynman when he says that, when there’s a vector potential – i.e. when there’s a magnetic field – that (ℏ/i)·∇ operator – which is the momentum operator – ought to be replaced by a new momentum operator:


So… Well… There we are… 🙂 So far, so good.

Local conservation of probability

The title of this section in Feynman’s Lecture (yes, still the same Lecture – we’re not switching topics here) is the equation of continuity for probabilities. I find it brilliant, because it confirms my interpretation of the wave function as describing some kind of energy flow. Let me quote Feynman on his endeavor here:

“An important part of the Schrödinger equation for a single particle is the idea that the probability to find the particle at a position is given by the absolute square of the wave function. It is also characteristic of the quantum mechanics that probability is conserved in a local sense. When the probability of finding the electron somewhere decreases, while the probability of the electron being elsewhere increases (keeping the total probability unchanged), something must be going on in between. In other words, the electron has a continuity in the sense that if the probability decreases at one place and builds up at another place, there must be some kind of flow between. If you put a wall, for example, in the way, it will have an influence and the probabilities will not be the same. So the conservation of probability alone is not the complete statement of the conservation law, just as the conservation of energy alone is not as deep and important as the local conservation of energy. If energy is disappearing, there must be a flow of energy to correspond. In the same way, we would like to find a “current” of probability such that if there is any change in the probability density (the probability of being found in a unit volume), it can be considered as coming from an inflow or an outflow due to some current.”

This is it, really! The wave function does represent some kind of energy flow – between a so-called ‘real’ and a so-called ‘imaginary’ space, which are to be defined in terms of directional versus rotational energy, as I try to point out – admittedly: more by appealing to intuition than to mathematical rigor – in that post of mine on the meaning of the wavefunction.

So what is the flow – or probability current, as Feynman refers to it? Well… Here’s the formula:


Huh? Yes. Don’t worry too much about it right now. The essential point is to understand what this current – denoted by J – actually stands for:


So what’s next? Well… Nothing. I’ll actually refer you to Feynman now, because I can’t improve on how he explains how pairs of electrons start behaving when temperatures are low enough to render Boltzmann’s Law irrelevant: the kinetic energy that’s associated with temperature can no longer break up electron pairs if the temperature comes close to the zero point.

Huh? What? Electron pairs? Electrons are not supposed to form pairs, are they? They carry the same charge and are, therefore, supposed to repel each other. Well… Yes and no. In my post on the electron orbitals in a hydrogen atom – which just presented Feynman’s presentation on the subject-matter in a, hopefully, somewhat more readable format – we calculated electron orbitals neglecting spin. In Feynman’s words:

“We make another approximation by forgetting that the electron has spin. […] The non-relativistic Schrödinger equation disregards magnetic effects. [However] Small magnetic effects [do] occur because, from the electron’s point-of-view, the proton is a circulating charge which produces a magnetic field. In this field the electron will have a different energy with its spin up than with it down. [Hence] The energy of the atom will be shifted a little bit from what we will calculate. We will ignore this small energy shift. Also we will imagine that the electron is just like a gyroscope moving around in space always keeping the same direction of spin. Since we will be considering a free atom in space the total angular momentum will be conserved. In our approximation we will assume that the angular momentum of the electron spin stays constant, so all the rest of the angular momentum of the atom—what is usually called “orbital” angular momentum—will also be conserved. To an excellent approximation the electron moves in the hydrogen atom like a particle without spin—the angular momentum of the motion is a constant.”

To an excellent approximation… But… Well… Electrons in a metal do form pairs, because they can give up energy in that way and, hence, they are more stable that way. Feynman does not go into the details here – I guess because that’s way beyond the undergrad level – but refers to the Bardeen-Cooper-Schrieffer (BCS) theory instead – the authors of which got a Nobel Prize in Physics in 1972 (that’s a decade or so after Feynman wrote this particular Lecture), so I must assume the theory is well accepted now. 🙂

Of course, you’ll shout now: Hey! Hydrogen is not a metal! Well… Think again: the latest breakthrough in physics is making hydrogen behave like a metal. 🙂 And I am really talking about the latest breakthrough: Science just published the findings of this experiment last month! 🙂 🙂 In any case, we’re not talking hydrogen here but superconducting materials, to which – as far as we know – the BCS theory does apply.

So… Well… I am done. I just wanted to show you why it’s important to work your way through Feynman’s last Lecture because… Well… Quantum mechanics does explain everything – although the nitty-gritty of it (the Meissner effect, the London equation, flux quantization, etc.) is a rather hard bullet to bite. 😞

Don’t give up! I am struggling with the nitty-gritty too! 🙂

The Aharonov-Bohm effect

This title sounds very exciting. It is – or was, I should say – one of those things I thought I would never ever understand, until I started studying physics, that is. 🙂

Having said that, there is – incidentally – nothing very special about the Aharonov-Bohm effect. As Feynman puts it: “The theory was known from the beginning of quantum mechanics in 1926. […] The implication was there all the time, but no one paid attention to it.”

To be fair, he also admits the experiment itself – proving the effect – is “very, very difficult”, which is why the first experiment that claimed to confirm the predicted effect was set up only in 1960. In fact, some claim the results of that experiment were ambiguous, and that it was only in 1986, with the experiment of Akira Tonomura, that the Aharonov-Bohm effect was unambiguously demonstrated. So what is it about?

In essence, it proves the reality of the vector potential—and of the (related) magnetic field. What do we mean by a real field? To put it simply, a real field cannot act on some particle from a distance through some kind of spooky ‘action-at-a-distance’: real fields must be specified at the position of the particle itself and describe what happens there. Now you’ll immediately wonder: so what’s a non-real field? Well… Some field that does act through some kind of spooky ‘action-at-a-distance.’ As for an example… Well… I can’t give you one, because we’ve only been discussing real fields so far. 🙂

So it’s about what a magnetic (or an electric) field does in terms of influencing motion and/or quantum-mechanical amplitudes. In fact, we discussed this matter quite a while ago (check my 2015 post on it). Now, I don’t want to re-write that post, but let me just remind you of the essentials. The two equations for the magnetic field (B) in Maxwell’s set of four equations (the two others specify the electric field E) are: (1) ∇·B = 0 and (2) c²·∇×B = j/ε0 + ∂E/∂t. Now, you can temporarily forget about the second equation, but you should note that the ∇·B = 0 equation is always true (unlike the ∇×E = 0 expression, which is true for electrostatics only, when there are no moving charges). So it says that the divergence of B is zero, always.

Now, from our posts on vector calculus, you may or may not remember that the divergence of the curl of a vector field is always zero. We wrote: div(curl A) = ∇·(∇×A) = 0, always. Now, there is another theorem that we can now apply, which says the following: if the divergence of a vector field, say D, is zero – so if ∇·D = 0 – then D will be the curl of some other vector field C, so we can write: D = ∇×C. When we now apply this to our ∇·B = 0 equation, we can confidently state the following:

If ∇·B = 0, then there is an A such that B = ∇×A

We can also write this as follows: ∇·B = ∇·(∇×A) = 0 and, hence, B = ∇×A. Now, it’s this vector field A that is referred to as the (magnetic) vector potential, and so that’s what we want to talk about here. As a start, it may be good to write out all of the components of our B = ∇×A vector:

Bx = ∂Az/∂y − ∂Ay/∂z; By = ∂Ax/∂z − ∂Az/∂x; Bz = ∂Ay/∂x − ∂Ax/∂y
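The ∇·(∇×A) = 0 identity that drives this whole argument can be checked numerically for an arbitrary smooth A. A pure-Python sketch using central finite differences (the field A chosen below is just an arbitrary example, not anything physical):

```python
import math

def A(x, y, z):
    # an arbitrary smooth vector potential (Ax, Ay, Az)
    return (math.sin(y * z), x * x * z, math.cos(x) + y * y)

h = 1e-4  # finite-difference step

def partial(f, comp, axis, p):
    # central difference of component `comp` of f along `axis` at point p
    lo, hi = list(p), list(p)
    hi[axis] += h
    lo[axis] -= h
    return (f(*hi)[comp] - f(*lo)[comp]) / (2 * h)

def curl(f, p):
    return (partial(f, 2, 1, p) - partial(f, 1, 2, p),   # ∂Az/∂y − ∂Ay/∂z
            partial(f, 0, 2, p) - partial(f, 2, 0, p),   # ∂Ax/∂z − ∂Az/∂x
            partial(f, 1, 0, p) - partial(f, 0, 1, p))   # ∂Ay/∂x − ∂Ax/∂y

def div(f, p):
    return sum(partial(f, axis, axis, p) for axis in range(3))

B = lambda x, y, z: curl(A, (x, y, z))
print(div(B, (0.3, -0.7, 1.1)))  # ≈ 0, up to floating-point error
```

The cancellation works because the mixed partial derivatives commute – which also holds exactly for the discrete central-difference operators, so the result is zero up to round-off.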

In that 2015 post, I answered the question as to why we’d need this new vector field in a way that wasn’t very truthful: I just said that, in many situations, it would be more convenient – from a mathematical point of view, that is – to first find A, and then calculate the derivatives above to get B.

Now, Feynman says the following about this argument in his Lecture on the topic: “It is true that in many complex problems it is easier to work with A, but it would be hard to argue that this ease of technique would justify making you learn about one more vector field. […] We have introduced A because it does have an important physical significance: it is a real physical field.” Let us follow his argument here.

Quantum-mechanical interference effects

Let us first remind ourselves of the quintessential electron interference experiment illustrated below. [For a much more modern rendering of this experiment, check out the Tout Est Quantique video on it. It’s much more amusing than my rather dry exposé here, but it doesn’t give you the math.]


We have electrons, all of (nearly) the same energy, which leave the source – one by one – and travel towards a wall with two narrow slits. Beyond the wall is a backstop with a movable detector which measures the rate, which we call I, at which electrons arrive at a small region of the backstop at the distance x from the axis of symmetry. The rate (or intensity) I is proportional to the probability that an individual electron that leaves the source will reach that region of the backstop. This probability has the complicated-looking distribution shown in the illustration, which we understand is due to the interference of two amplitudes, one from each slit. So we associate the two trajectories with two amplitudes, which Feynman writes as A1·e^(iΦ1) and A2·e^(iΦ2) respectively.

As usual, Feynman abstracts away from the time variable here because it is, effectively, not relevant: the interference pattern depends on distances and angles only. Having said that, for a good understanding, we should – perhaps – write our two wavefunctions as A1·e^(i(ωt+Φ1)) and A2·e^(i(ωt+Φ2)) respectively. The point is: we’ve got two wavefunctions – one for each trajectory – even if it’s only one electron going through the slit: that’s the mystery of quantum mechanics. 🙂 We need to add these waves so as to get the interference effect:

R = A1·e^(i(ωt+Φ1)) + A2·e^(i(ωt+Φ2)) = [A1·e^(iΦ1) + A2·e^(iΦ2)]·e^(iωt)

Now, we know we need to take the absolute square of this thing to get the intensity – or probability (before normalization). The absolute square of a product is the product of the absolute squares of the factors, and we also know that the absolute square of any complex number is just the product of the number with its complex conjugate. Hence, the absolute square of the e^(iωt) factor is equal to |e^(iωt)|² = e^(iωt)·e^(−iωt) = e^0 = 1. So the time-dependent factor doesn’t matter: that’s why we can always abstract away from it. Let us now take the absolute square of the [A1·e^(iΦ1) + A2·e^(iΦ2)] factor, which we can write as:

|R|² = |A1·e^(iΦ1) + A2·e^(iΦ2)|² = (A1·e^(iΦ1) + A2·e^(iΦ2))·(A1·e^(−iΦ1) + A2·e^(−iΦ2))

= A1² + A2² + 2·A1·A2·cos(Φ1−Φ2) = A1² + A2² + 2·A1·A2·cosδ, with δ = Φ1−Φ2
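A quick numerical check of this intensity formula, with arbitrary amplitudes and phases:

```python
import cmath
import math

A1, A2 = 1.0, 0.7        # arbitrary amplitudes
phi1, phi2 = 0.9, -0.4   # arbitrary phases

R = A1 * cmath.exp(1j * phi1) + A2 * cmath.exp(1j * phi2)
intensity = abs(R) ** 2
formula = A1**2 + A2**2 + 2 * A1 * A2 * math.cos(phi1 - phi2)

assert abs(intensity - formula) < 1e-12
print(intensity)  # both ways of computing it agree
```

Varying phi1 − phi2 from 0 to π sweeps the intensity from (A1 + A2)² down to (A1 − A2)² – the interference fringes.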

OK. This is probably going a bit quick, but you should be able to figure it out, especially when remembering that e^(iΦ) + e^(−iΦ) = 2·cosΦ and cosΦ = cos(−Φ). The point to note is that the intensity is equal to the sum of the intensities of both waves plus an interference term, which is equal to 2·A1·A2·cos(Φ1−Φ2) and, hence, ranges from −2·A1·A2 to +2·A1·A2. Now, it takes a bit of geometrical wizardry to be able to write the phase difference δ = Φ1−Φ2 as

δ = 2π·a/λ = 2π·(x/L)·d/λ

—but it can be done. 🙂 Well… […] OK. 🙂 Let me quickly help you here by copying another diagram from Feynman – one he uses to derive the formula for the phase difference on arrival between the signals from two oscillators. A1 and A2 are equal here (A1 = A2 = A), so that makes the situation below somewhat simpler to analyze. However, instead, we have the added complication of a phase difference (α) at the origin – which Feynman refers to as an intrinsic relative phase.

When we apply the geometry shown above to our electron passing through the slits, we should, of course, equate α to zero. For the rest, the picture is pretty similar to the two-slit picture. The distance a in the two-slit set-up – i.e. the difference in the path lengths for the two trajectories of our electron(s) – is, obviously, equal to the d·sinθ factor in the oscillator picture. Also, because L is huge as compared to x, we may assume that trajectories 1 and 2 are more or less parallel and, importantly, that the triangles in the picture – small and large – are right triangles. Now, trigonometry tells us that sinθ is equal to the ratio of the opposite side of the triangle and the hypotenuse (i.e. the longest side of the right triangle). The opposite side of the triangle is x and, because x is very, very small as compared to L, we may approximate the length of the hypotenuse with L. [I know—a lot of approximations here, but… Well… Just go along with it for now…] Hence, we can equate sinθ to x/L and, therefore, a = d·x/L. Now we need to calculate the phase difference. How many wavelengths do we have in a? That’s simple: a/λ, i.e. the total distance divided by the wavelength. Now, these wavelengths correspond to 2π·a/λ radians (one cycle corresponds to one wavelength which, in turn, corresponds to 2π radians). So we’re done. We’ve got the formula: δ = Φ1−Φ2 = 2π·a/λ = 2π·(x/L)·d/λ.
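All these approximations are easy to check numerically. The sketch below compares the exact path difference between the two slits and a point on the screen with the a = d·x/L approximation (the numbers are arbitrary illustrative values, not taken from the MIT exercise):

```python
import math

lam = 500e-9  # wavelength (m)
d = 50e-6     # slit separation (m)
L = 1.0       # distance from slits to backstop (m)
x = 2e-3      # position on the backstop (m)

# exact path lengths from the two slits (at heights ±d/2) to the point x
r1 = math.sqrt(L**2 + (x - d / 2) ** 2)
r2 = math.sqrt(L**2 + (x + d / 2) ** 2)
a_exact = r2 - r1

a_approx = d * x / L                    # the small-angle result derived above
delta = 2 * math.pi * a_approx / lam    # phase difference in radians

print(a_exact, a_approx, delta)
```

Because x ≪ L, the two values of a agree to a fraction of a percent, which is why the small-angle formula works so well in practice.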

Huh? Yes. Just think about it. I need to move on. The point is: when x is equal to zero, the two waves are in phase, and the probability will have a maximum. When δ = π, the waves are out of phase and interfere destructively (cosπ = −1), so the intensity (and, hence, the probability) reaches a minimum.

So that’s pretty obvious – or should be pretty obvious if you’ve understood some of the basics we presented in this blog. We now move on to the non-standard stuff, i.e. the Aharonov-Bohm effect(s).

Interference in the presence of an electromagnetic field

In essence, the Aharonov-Bohm effect is nothing special: it is just a law – two laws, to be precise – that tells us how the phase of our wavefunction changes because of the presence of a magnetic and/or electric field. As such, it is not very different from previous analyses and presentations, such as those showing how amplitudes are affected by a potential – such as an electric potential, or a gravitational field, or a magnetic field – and how they relate to a classical analysis of the situation (see, for example, my November 2015 post on this topic). If anything, it’s just a more systematic approach to the topic and – importantly – an approach centered around the use of the vector potential A (and the electric potential Φ). Let me give you the formulas:



The first formula tells us that the phase of the amplitude for our electron (or whatever charged particle) to arrive at some location via some trajectory is changed by an amount that is equal to the integral of the vector potential along the trajectory, times the charge of the particle, divided by Planck’s constant. I know that’s quite a mouthful, but just read it a couple of times.

The second formula tells us that, if there’s an electrostatic field, it will produce a phase change given by the negative of the time integral of the (scalar) potential Φ.

These two expressions – taken together – tell us what happens for any electromagnetic field, static or dynamic. In fact, they are really the (two) law(s) replacing the F = q·(E + v×B) expression in classical mechanics.

So how does it work? Let me further follow Feynman’s treatment of the matter—which analyzes what happens when we’d have some magnetic field in the two-slit experiment (so we assume there’s no electric field: we only look at some magnetic field). We said Φ1 was the phase of the wave along trajectory 1, and Φ2 was the phase of the wave along trajectory 2 – without a magnetic field, that is, so B = 0. Now, the (first) formula above tells us that, when the field is switched on, the new phases will be the following:



Hence, the phase difference δ = Φ1−Φ2 will now be equal to:


Now, we can combine the two integrals into one that goes forward along trajectory 1 and comes back along trajectory 2. We’ll denote this path as 1-2 and write the new integral as follows:


Note that we’re using a notation here which suggests that the 1-2 path is closed, which is… Well… Yet another approximation of the Master. In fact, his assumption that the new 1-2 path is closed proves to be essential in the argument that follows the one we presented above, in which he shows that the inherent arbitrariness in our choice of a vector potential function doesn’t matter, but… Well… I don’t want to get too technical here.

Let me conclude this post by noting we can re-write our grand formula above in terms of the flux of the magnetic field B:


So… Well… That’s it, really. I’ll refer you to Feynman’s Lecture on this matter for a detailed description of the 1960 experiment itself, which involves a magnetized iron whisker that acts like a tiny solenoid—small enough to match the tiny scale of the interference experiment itself. I must warn you though: there is a rather long discussion in that Lecture on the ‘reality’ of the magnetic and the vector potential field which – unlike Feynman’s usual approach to discussions like this – is rather philosophical and partially misinformed, as it assumes there is zero magnetic field outside of a solenoid. That’s true for infinitely long solenoids, but not true for real-life solenoids: if we have some A, then we must also have some B, and vice versa. Hence, if the magnetic field (B) is a real field (in the sense that it cannot act on some particle from a distance through some kind of spooky ‘action-at-a-distance’), then the vector potential A is an equally real field—and vice versa. Feynman admits as much as he concludes his rather lengthy philosophical excursion with the following conclusion (out of which I already quoted one line in my introduction to this post):

“This subject has an interesting history. The theory we have described was known from the beginning of quantum mechanics in 1926. The fact that the vector potential appears in the wave equation of quantum mechanics (called the Schrödinger equation) was obvious from the day it was written. That it cannot be replaced by the magnetic field in any easy way was observed by one man after the other who tried to do so. This is also clear from our example of electrons moving in a region where there is no field and being affected nevertheless. But because in classical mechanics A did not appear to have any direct importance and, furthermore, because it could be changed by adding a gradient, people repeatedly said that the vector potential had no direct physical significance—that only the magnetic and electric fields are “real” even in quantum mechanics. It seems strange in retrospect that no one thought of discussing this experiment until 1956, when Bohm and Aharonov first suggested it and made the whole question crystal clear. The implication was there all the time, but no one paid attention to it. Thus many people were rather shocked when the matter was brought up. That’s why someone thought it would be worthwhile to do the experiment to see if it was really right, even though quantum mechanics, which had been believed for so many years, gave an unequivocal answer. It is interesting that something like this can be around for thirty years but, because of certain prejudices of what is and is not significant, continues to be ignored.”

Well… That’s it, folks! Enough for today! 🙂

An interpretation of the wavefunction

This is my umpteenth post on the same topic. 😞 It is obvious that this search for a sensible interpretation is consuming me. Why? I am not sure. Studying physics is frustrating. As a leading physicist puts it:

“The teaching of quantum mechanics these days usually follows the same dogma: firstly, the student is told about the failure of classical physics at the beginning of the last century; secondly, the heroic confusions of the founding fathers are described and the student is given to understand that no humble undergraduate student could hope to actually understand quantum mechanics for himself; thirdly, a deus ex machina arrives in the form of a set of postulates (the Schrödinger equation, the collapse of the wavefunction, etc); fourthly, a bombardment of experimental verifications is given, so that the student cannot doubt that QM is correct; fifthly, the student learns how to solve the problems that will appear on the exam paper, hopefully with as little thought as possible.”

That’s obviously not the way we want to understand quantum mechanics. [With we, I mean me, of course, and you, if you’re reading this blog.] Of course, that doesn’t mean I don’t believe Richard Feynman, one of the greatest physicists ever, when he tells us no one, including himself, understands physics quite the way we’d like to understand it. Such statements should not prevent us from trying harder. So let’s look for better metaphors. The animation below shows the two components of the archetypal wavefunction – a simple sine and cosine. They’re the same function, actually, but their phases differ by 90 degrees (π/2).


It makes me think of a V-2 engine with the pistons at a 90-degree angle. Look at the illustration below, which I took from a rather simple article on cars and engines that has nothing to do with quantum mechanics. Think of the moving pistons as harmonic oscillators, like springs.


We will also think of the center of each cylinder as the zero point: think of that point as a point where – if we’re looking at one cylinder alone – the internal and external pressure balance each other, so the piston would not move… Well… If it weren’t for the other piston, because the second piston is not at the center when the first is. In fact, it is easy to verify and compare the following positions of both pistons, as well as the associated dynamics of the situation:

  1. Piston 1: compressed air will push the piston down. Piston 2: the piston moves down against the external pressure.
  2. Piston 1: the piston moves down against the external pressure. Piston 2: the external air pressure will push the piston up.
  3. Piston 1: the external air pressure will push the piston up. Piston 2: the piston moves further up and compresses the air.
  4. Piston 1: the piston moves further up and compresses the air. Piston 2: compressed air will push the piston down.

When the pistons move, their linear motion will be described by a sinusoidal function: a sine or a cosine. In fact, the 90-degree V-2 configuration ensures that the linear motion of the two pistons will be exactly the same, except for a phase difference of 90 degrees. [Of course, because of the sideways motion of the connecting rods, our sine and cosine functions describe the linear motion only approximately, but you can easily imagine the idealized limit situation. If not, check Feynman’s description of the harmonic oscillator.]

The question is: if we’d have a set-up like this – two springs, or two harmonic oscillators, attached to a shaft through a crank – would this really work as a perpetuum mobile? We are obviously talking about energy being transferred back and forth between the rotating shaft and the moving pistons… So… Well… Let’s model this: the total energy, potential and kinetic, in each harmonic oscillator is constant. Hence, the piston only delivers or receives kinetic energy from the rotating mass of the shaft.

Now, in physics, that’s a bit of an oxymoron: we don’t think of negative or positive kinetic (or potential) energy in the context of oscillators. We don’t think of the direction of energy. But… Well… If we’ve got two oscillators, our picture changes, and so we may have to adjust our thinking here.

Let me start by giving you an authoritative derivation of the various formulas involved here, taking the example of the physical spring as an oscillator—but the formulas are basically the same for any harmonic oscillator:

x = a·cos(ω0·t + Δ)
K.E. = T = m·v²/2 = (1/2)·m·a²·ω0²·sin²(ω0·t + Δ)
P.E. = U = k·x²/2 = (1/2)·k·a²·cos²(ω0·t + Δ)

The first formula is a general description of the motion of our oscillator. The coefficient in front of the cosine function (a) is the maximum amplitude. Of course, you will also recognize ω0 as the natural frequency of the oscillator, and Δ as the phase factor, which takes into account our t = 0 point. In our case, for example, we have two oscillators with a phase difference equal to π/2 and, hence, Δ would be 0 for one oscillator, and −π/2 for the other. [The formula to apply here is sinθ = cos(θ − π/2).] Also note that we can equate our θ argument to ω0·t. Now, if a = 1 (which is the case here), then these formulas simplify to:

  1. K.E. = T = m·v²/2 = (1/2)·m·ω0²·sin²(θ + Δ) = (1/2)·m·ω0²·sin²(ω0·t + Δ)
  2. P.E. = U = k·x²/2 = (1/2)·k·cos²(θ + Δ)

The coefficient k in the potential energy formula characterizes the force: F = −k·x. The minus sign reminds us our oscillator wants to return to the center point, so the force pulls back. From the dynamics involved, it is obvious that k must be equal to m·ω0², so that gives us the famous T + U = m·ω0²/2 formula or, including a once again, T + U = m·a²·ω0²/2.

Now, if we normalize our functions by equating k to one (k = 1), then the motion of our first oscillator is given by the cosθ function, and its kinetic energy will be proportional to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be proportional to:

d(sin²θ)/dθ = 2·sinθ·d(sinθ)/dθ = 2·sinθ·cosθ

Let’s look at the second oscillator now. Just think of the second piston going up and down in our V-twin engine. Its motion is given by the sinθ function which, as mentioned above, is equal to cos(θ−π/2). Hence, its kinetic energy is proportional to sin²(θ−π/2), and how it changes – as a function of θ – will be proportional to:

2·sin(θ−π/2)·cos(θ−π/2) = 2·(−cosθ)·sinθ = −2·sinθ·cosθ

We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the rotating shaft moves at constant speed. Linear motion becomes circular motion, and vice versa, in a frictionless Universe. We have the metaphor we were looking for!
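The bookkeeping behind this metaphor is easy to verify: the two kinetic energies (proportional to sin²θ and cos²θ in the normalized case) always add up to a constant, and their instantaneous changes cancel exactly. A minimal sketch:

```python
import math

# normalized oscillators (a = m = ω0 = 1): kinetic energy of each piston
T1 = lambda theta: math.sin(theta) ** 2
T2 = lambda theta: math.sin(theta - math.pi / 2) ** 2  # = cos²θ

for theta in [0.0, 0.3, 1.0, 2.5, 4.0]:
    # the total energy handed to and from the shaft stays constant...
    assert abs(T1(theta) + T2(theta) - 1.0) < 1e-12
    # ...because the instantaneous changes cancel: d(sin²θ)/dθ = −d(cos²θ)/dθ
    dT1 = 2 * math.sin(theta) * math.cos(theta)
    dT2 = 2 * math.sin(theta - math.pi / 2) * math.cos(theta - math.pi / 2)
    assert abs(dT1 + dT2) < 1e-12
print("one piston gains exactly what the other gives up")
```

This is just sin²θ + cos²θ = 1 in mechanical disguise, but it is the whole point of the metaphor: the energy sloshes between the two pistons while the total stays constant.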

Somehow, in this beautiful interplay between linear and circular motion, energy is being borrowed from one place to another, and then returned. From what place to what place? I am not sure. We may call it the real and imaginary energy space respectively, but what does that mean? One thing is for sure, however: the interplay between the real and imaginary part of the wavefunction describes how energy propagates through space!

How exactly? Again, I am not sure. Energy is, obviously, mass in motion – as evidenced by the E = m·c² equation – and it may not have any direction (when everything is said and done, it’s a scalar quantity without direction), but the energy in a linear motion is surely different from that in a circular motion, and our metaphor suggests we need to think somewhat more along those lines. Perhaps we will, one day, be able to square this circle. 🙂

Schrödinger’s equation

Let’s analyze the interplay between the real and imaginary part of the wavefunction through an analysis of Schrödinger’s equation, which we write as:

i·ℏ·∂ψ/∂t = −(ℏ²/2m)·∇²ψ + V·ψ

We can do a quick dimensional analysis of both sides:

  • [i·ℏ·∂ψ/∂t] = N·m·s/s = N·m
  • [−(ℏ²/2m)·∇²ψ] = N·m³/m² = N·m
  • [V·ψ] = N·m

Note the dimension of the ‘diffusion’ constant ℏ²/2m: [ℏ²/2m] = N²·m²·s²/kg = N²·m²·s²/(N·s²/m) = N·m³. Also note that, in order for the dimensions to come out alright, the dimension of V – the potential – must be that of energy. Hence, Feynman’s description of it as the potential energy – rather than the potential tout court – is somewhat confusing but correct: V must equal the potential energy of the electron. Hence, V is not the conventional (potential) energy of the unit charge (1 coulomb). Instead, the natural unit of charge is used here, i.e. the charge of the electron itself.
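This kind of dimensional bookkeeping can be automated. A small sketch that tracks (kg, m, s) exponents only – the dim/mul/div helpers are my own, purely illustrative:

```python
from fractions import Fraction as F

def dim(kg=0, m=0, s=0):
    # a physical dimension as a tuple of (kg, m, s) exponents
    return (F(kg), F(m), F(s))

def mul(a, b): return tuple(x + y for x, y in zip(a, b))
def div(a, b): return tuple(x - y for x, y in zip(a, b))

N = dim(kg=1, m=1, s=-2)        # newton = kg·m/s²
hbar = mul(N, dim(m=1, s=1))    # [ħ] = N·m·s
kg = dim(kg=1)

# [ħ²/m] should come out as N·m³, as claimed in the text
assert div(mul(hbar, hbar), kg) == mul(N, dim(m=3))
print("[ħ²/m] = N·m³: dimensions check out")
```

Reducing everything to base-unit exponents is exactly what we do by hand when we substitute kg = N·s²/m in the derivation above.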

Now, Schr枚dinger鈥檚 equation 鈥 without the V路蠄 term 鈥 can be written as the following pair of equations:

  1. Re(∂ψ/∂t) = −(1/2)·(ℏ/m)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ℏ/m)·Re(∇²ψ)
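These two coupled real-valued equations can be checked on a free-particle plane wave ψ = e^(i(k·x − ω·t)), for which ∂ψ/∂t = −i·ω·ψ and ∇²ψ = −k²·ψ, provided ω = ℏ·k²/2m. A sketch in units where ℏ = m = 1, with an arbitrary k:

```python
import cmath

hbar, m, k = 1.0, 1.0, 2.0
omega = hbar * k**2 / (2 * m)   # free-particle dispersion relation

x, t = 0.4, 1.3
psi = cmath.exp(1j * (k * x - omega * t))
dpsi_dt = -1j * omega * psi     # ∂ψ/∂t for this plane wave
lap_psi = -(k**2) * psi         # ∇²ψ for this plane wave

# 1. Re(∂ψ/∂t) = −(1/2)·(ℏ/m)·Im(∇²ψ)
assert abs(dpsi_dt.real + 0.5 * (hbar / m) * lap_psi.imag) < 1e-12
# 2. Im(∂ψ/∂t) = (1/2)·(ℏ/m)·Re(∇²ψ)
assert abs(dpsi_dt.imag - 0.5 * (hbar / m) * lap_psi.real) < 1e-12
print("the plane wave satisfies both real-valued equations")
```

Note that the check only works when ω and k satisfy the dispersion relation – which is just another way of saying E = p²/2m for a free particle.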

This closely resembles the propagation mechanism of an electromagnetic wave as described by Maxwell’s equations for free space (i.e. a space with no charges), except that E and B are vectors, not scalars. How do we get this result? Well… ψ is a complex function, which we can write as a + i·b. Likewise, ∂ψ/∂t is a complex function, which we can write as c + i·d, and ∇²ψ can then be written as e + i·f. If we temporarily forget about the coefficients (ℏ, ℏ²/m and V), then Schrödinger’s equation – including the V·ψ term – amounts to writing something like this:

i·(c + i·d) = −(e + i·f) + (a + i·b) ⇔ a + i·b = i·c − d + e + i·f ⇔ a = −d + e and b = c + f
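The complex bookkeeping can be checked symbolically – a small sketch using sympy, equating real and imaginary parts and solving for a and b:

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f', real=True)

# i·(c + i·d) = −(e + i·f) + (a + i·b): move everything to one side...
residual = sp.I * (c + sp.I * d) - (-(e + sp.I * f) + (a + sp.I * b))

# ...and solve the real and imaginary parts for a and b.
solution = sp.solve([sp.re(residual), sp.im(residual)], [a, b])

assert solution[a] == -d + e   # a = −d + e
assert solution[b] == c + f    # b = c + f
```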

Hence, we can now write:

  1. V·Re(ψ) = −ℏ·Im(∂ψ/∂t) + (1/2)·(ℏ²/m)·Re(∇²ψ)
  2. V·Im(ψ) = ℏ·Re(∂ψ/∂t) + (1/2)·(ℏ²/m)·Im(∇²ψ)

This simplifies to the two equations above for V = 0, i.e. when there is no potential (electron in free space). Now we can bring the Re and Im operators into the brackets to get:

  1. V·Re(ψ) = −ℏ·∂Im(ψ)/∂t + (1/2)·(ℏ²/m)·∇²Re(ψ)
  2. V·Im(ψ) = ℏ·∂Re(ψ)/∂t + (1/2)·(ℏ²/m)·∇²Im(ψ)

This is very interesting, because we can re-write this using the quantum-mechanical energy operator H = −(ℏ²/2m)·∇² + V· (note the multiplication sign after the V, which we do not have – for obvious reasons – for the −(ℏ²/2m)·∇² expression):

  1. H[Re(ψ)] = −ℏ·∂Im(ψ)/∂t
  2. H[Im(ψ)] = ℏ·∂Re(ψ)/∂t
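As a sanity check, we can apply the free-space H (i.e. with V = 0) to the real and imaginary parts of a plane wave – a symbolic sketch, assuming the convention θ = (p·x − E·t)/ℏ with p = ℏ·k:

```python
import sympy as sp

x, t, k, m, hbar = sp.symbols('x t k m hbar', positive=True)
E = hbar**2 * k**2 / (2 * m)        # free-particle energy
theta = k * x - (E / hbar) * t      # θ = (p·x − E·t)/ℏ, with p = ℏ·k

re_psi, im_psi = sp.cos(theta), sp.sin(theta)

def H(f):
    # Free-space Hamiltonian: H = −(ℏ²/2m)·∇², with V = 0 (one dimension)
    return -(hbar**2 / (2 * m)) * sp.diff(f, x, 2)

# 1. H[Re(ψ)] = −ℏ·∂Im(ψ)/∂t
assert sp.simplify(H(re_psi) + hbar * sp.diff(im_psi, t)) == 0
# 2. H[Im(ψ)] = ℏ·∂Re(ψ)/∂t
assert sp.simplify(H(im_psi) - hbar * sp.diff(re_psi, t)) == 0
```

Both identities reduce to zero symbolically, so the operator pair holds exactly for a plane wave.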

A dimensional analysis shows us both sides are, once again, expressed in N·m. It’s a beautiful expression because – if we write the real and imaginary part of ψ as r·cosθ and r·sinθ – we get:

  1. H[cosθ] = −ℏ·∂sinθ/∂t = E·cosθ
  2. H[sinθ] = ℏ·∂cosθ/∂t = E·sinθ

Indeed, θ = (p·x − E·t)/ℏ and, hence, ∂θ/∂t = −E/ℏ, so that −ℏ·∂sinθ/∂t = −ℏ·cosθ·(−E/ℏ) = E·cosθ and ℏ·∂cosθ/∂t = ℏ·(−sinθ)·(−E/ℏ) = E·sinθ. Now we can combine the two equations in one equation again and write:

H[r·(cosθ + i·sinθ)] = r·(E·cosθ + i·E·sinθ) ⇔ H[ψ] = E·ψ

The operator H – applied to the wavefunction – gives us the product of the (scalar) energy E and the wavefunction itself. Isn’t this strange?
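The eigenvalue relation H[ψ] = E·ψ is easy to confirm numerically for a free particle (V = 0) – again a sketch assuming natural units ℏ = m = 1 for convenience:

```python
import numpy as np

hbar, m = 1.0, 1.0                      # natural units (illustrative assumption)
k = 3.0
E = hbar**2 * k**2 / (2 * m)            # free-particle energy
w = E / hbar

x = np.linspace(0.0, 10.0, 2001)
t = 1.3
psi = np.exp(1j * (k * x - w * t))      # ψ = r·(cosθ + i·sinθ), r = 1, θ = (px − Et)/ℏ

lap_psi = -k**2 * psi                   # ∇²ψ for a plane wave (analytic, 1D)
H_psi = -(hbar**2 / (2 * m)) * lap_psi  # H[ψ] with V = 0

assert np.allclose(H_psi, E * psi)      # H[ψ] = E·ψ
```

The assertion holds at every point of the grid: applying H just scales the wavefunction by the energy E.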

Hmm… I need to further verify and explain this result… I’ll probably do so in yet another post on the same topic… 🙂

Post scriptum: The symmetry of our V-2 engine – or perpetuum mobile – is interesting: its cross-section has only one axis of symmetry. Hence, we may associate some angle with it, so as to define its orientation in the two-dimensional cross-sectional plane. Of course, the cross-sectional plane itself is at right angles to the crankshaft axis, which we may also associate with some angle in three-dimensional space. Hence, its geometry defines two orthogonal directions which, in turn, define a spherical coordinate system, as shown below.


We may, therefore, say that three-dimensional space is actually being implied by the geometry of our V-2 engine. Now that is interesting, isn’t it? 🙂