If you made it here, it means you’re totally fed up with all of the *easy* stories on quantum mechanics: diffraction, double-slit experiments, imaginary gamma-ray microscopes… **You’ve had it!** You now

*know* what quantum mechanics is all about, and you’ve realized all these thought experiments never answer the tough question:

*Where did Planck find that constant (h) which pops up everywhere?* And how did he find that Planck relation which seems to underpin all and everything in quantum mechanics?

If you don’t know, that’s because you’ve skipped the blackbody radiation story. So let me give it to you here. What’s blackbody radiation?

**Thermal equilibrium of radiation**

That’s what the blackbody radiation problem is about: thermal equilibrium of radiation.

**Huh?**

Yes. Imagine a box with gas inside. You’ll often see it’s described as a furnace, because we heat the box. Hence, the box, and everything inside, acquires a certain temperature, which we then assume to be constant. The gas inside will absorb energy and start emitting radiation, because the gas atoms or molecules are atomic oscillators. Hence, we have electrons getting excited and then jumping up and down from higher to lower energy levels, and then again and again and again, thereby emitting photons with a certain energy and, hence, light of a certain frequency. To put it simply: we’ll find light with various frequencies in the box and, in thermal equilibrium, we should have some distribution of the intensity of the light according to the frequency: what kind of radiation do we find in the furnace? Well… Let’s find out.

The assumption is that the box walls send light back, or that the box has mirror walls. So we assume that all the radiation keeps running around in the box. Now that implies that the atomic oscillators not only *radiate* energy, but also *receive* energy, because they’re constantly being illuminated by radiation that comes straight back at them. If the temperature of the box is kept constant, we arrive at a situation which is referred to as thermal equilibrium. In Feynman’s words: “After a while there is a great deal of light rushing around in the box, and although the oscillator is radiating some, the light comes back and returns some of the energy that was radiated.”

OK. That’s easy enough to understand. However, the actual analysis of this equilibrium situation is what gave rise to the ‘problem’ of blackbody radiation in the 19th century which, as you know, led Planck and Einstein to develop a quantum-mechanical view of things. It turned out that the classical analysis predicted a distribution of the intensity of light that didn’t make sense, and no matter how you looked at it, it just didn’t come out right. Theory and experiment did not agree. Now, *that* is something *very* serious in science, as you know, because it means your theory isn’t right. In this case, it was disastrous, because it meant *the whole of classical theory* wasn’t right.

To be frank, the analysis is not all that easy. It involves all that I’ve learned so far: the math behind oscillators and interference, statistics, the so-called kinetic theory of gases and what have you. I’ll try to summarize the story but you’ll see it requires quite an introduction.

**Kinetic energy and temperature**

The kinetic theory of gases is part of what’s referred to as statistical mechanics: we look at a gas as a large number of inter-colliding atoms and we describe what happens in terms of the collisions between them. As Feynman puts it: “Fundamentally, we assert that the gross properties of matter should be explainable in terms of the motion of its parts.” Now, we can do a lot of intellectual gymnastics, analyzing one gas in one box, two gases in one box, two gases in one box with a piston between them, two gases in two boxes with a hole in the wall between them, and so on and so on, but that would only distract us here. The rather remarkable conclusion of such exercises, which you’ll surely remember from your high school days, is that:

- Equal volumes of different gases, at the same pressure and temperature, will have the same number of molecules.
- In this view of things, temperature is actually nothing but the *mean* kinetic energy of those molecules (or atoms if it’s a monatomic gas).

So we can actually measure temperature in terms of the kinetic energy of the molecules of the gas, which, as you know, equals mv^{2}/2, with m the mass and v the velocity of the gas molecules. Hence, we’re tempted to define some absolute measure of temperature *T* and simply write:

*T* = 〈mv^{2}/2〉

The 〈 and 〉 brackets denote the mean here. To be precise, we’re talking the root mean square here, aka the quadratic mean, because we want to average some magnitude of a varying quantity. Of course, the mass of different gases will be different – and so we have 〈m_{1}v_{1}^{2}/2〉 for gas 1 and 〈m_{2}v_{2}^{2}/2〉 for gas 2 – but that doesn’t matter: we can, actually, imagine measuring temperature in *joule*, the unit of energy, including kinetic energy. Indeed, the units come out alright: 1 joule = 1 kg·(m^{2}/s^{2}). For historical reasons, however, T is measured in different units: degrees Kelvin, centigrade (i.e. degrees Celsius) or, in the US, in Fahrenheit. Now, we can easily go from one measure to the other, as you know, and so here I should probably just jot down the so-called ideal gas law–because we need that law for the subsequent analysis of blackbody radiation–and get on with it:

PV = NkT

However, now that we’re here, let me give you an inkling of how we derive that law. A classical (Newtonian) analysis of the collisions (you can find the detail in Feynman’s *Lectures*, I-39-2) will yield the following equation: P = (2/3)n〈mv^{2}/2〉, with n the number of atoms or molecules per *unit* volume. So the *pressure* of a gas (which, as you know, is the *force* (of a gas on a piston, for example) *per unit area*: P = F/A) is also equal to the mean kinetic energy of the gas molecules multiplied by (2/3)n. If we multiply that equation by V, we get PV = N(2/3)〈mv^{2}/2〉. However, we know that equal volumes of different gases, at the same pressure and temperature, will have the same number of molecules, so we have PV = N(2/3)〈m_{1}v_{1}^{2}/2〉 = N(2/3)〈m_{2}v_{2}^{2}/2〉, which we write as PV = NkT with kT = (2/3)〈m_{1}v_{1}^{2}/2〉 = (2/3)〈m_{2}v_{2}^{2}/2〉.

In other words, that factor of proportionality k is the one we have to use to convert the temperature as measured by 〈mv^{2}/2〉 (i.e. the mean kinetic energy expressed in *joules*) to T (i.e. the temperature expressed in the measure we’re used to, and that’s degrees Kelvin–or Celsius or Fahrenheit, but let’s stick to Kelvin, because that’s what’s used in physics). Vice versa, we have 〈mv^{2}/2〉 = (3/2)kT. Now, that constant of proportionality k is equal to k = 1.38×10^{–23 }joule per Kelvin (J/K). So if T is (absolute) temperature, expressed in Kelvin (K), our definition says that the mean molecular kinetic energy is (3/2)kT.
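If you want to play with these numbers, here’s a minimal Python sketch of the 〈mv^{2}/2〉 = (3/2)kT relation. The molecular mass of nitrogen is my own illustrative choice – it’s not something we need for the story:

```python
# Mean molecular kinetic energy and rms speed from <mv^2/2> = (3/2)kT.
# k is the Boltzmann constant; the N2 mass is an illustrative value.
k = 1.380649e-23      # Boltzmann constant, J/K
T = 293.0             # room temperature, K
m_N2 = 4.65e-26       # mass of one N2 molecule, kg (approximate)

mean_ke = 1.5 * k * T                # <mv^2/2> = (3/2)kT, in joules
v_rms = (3 * k * T / m_N2) ** 0.5    # from m*v_rms^2/2 = (3/2)kT

print(f"mean kinetic energy: {mean_ke:.3e} J")   # about 6.07e-21 J
print(f"rms speed of N2: {v_rms:.0f} m/s")       # about 511 m/s
```

So a nitrogen molecule at room temperature zips around at roughly half a kilometer per second – which gives you a feel for how much motion is hiding behind the word ‘temperature’.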

That k factor is a physical constant referred to as the Boltzmann constant. If it’s one of these fundamental constants, you may wonder why we don’t just absorb that 3/2 factor into it. Well… That’s just how it is, I guess. In any case, it’s rather convenient, because we’ll have 2/3 factors in other equations and so these will cancel out against that 3/2 term. However, I am digressing way too much here. I should get back to the main story line. Before I do that, though, I need to expand on one more thing, and that’s a small lecture on what things look like when we also allow for *internal motion*, i.e. the rotational and vibratory motions of the atoms *within* the gas molecule. Let me first re-write that PV = NkT equation as

PV = NkT = N(2/3)〈m_{1}v_{1}^{2}/2〉 = (2/3)U = 2U/3

For monatomic gas, that U would only be the kinetic energy of the atoms, and so we can write it as U = (3/2)NkT. Hence, we have the grand result that the kinetic energy, for each atom, is equal to (3/2)kT, *on average that is*.

What about non-monatomic gas? Well… For complex molecules, we’d also have energy going into the rotational and vibratory motion of the atoms within the molecule, separate from what is usually referred to as the center-of-mass (CM) motion of the molecules themselves. Now, I’ll again refer you to Feynman for the detail of the analysis, but it turns out that, if we’d have, for example, a diatomic molecule, consisting of an A and B atom, the internal rotational and vibratory motion would, indeed, also absorb energy, and we’d have a *total* energy equal to (3/2)kT + (3/2)kT = 2×(3/2)kT = 3kT. Now, that amount (3kT) can be split over (i) the energy related to the CM motion, which must still be equal to (3/2)kT, and (ii) the average kinetic energy of the *internal* motions of the diatomic molecule *excluding *the bodily motion of the CM. Hence, the latter part must be equal to 3kT – (3/2)kT = (3/2)kT. So, for the diatomic molecule, the total energy happens to consist of two equal parts.

Now, there is a more general theorem here, for which I have to introduce the notion of the *degrees of freedom* of a system. Each atom can rotate or vibrate or oscillate or whatever in three independent directions–namely the three spatial coordinates x, y and z. These spatial dimensions are referred to as the *degrees of freedom* of the atom (in the kinetic theory of gases, that is), and if we have two atoms, we have 2×3 = 6 degrees of freedom. More generally, **the number of degrees of freedom of a molecule composed of r atoms is equal to 3r**. Now, it can be shown that the *total* energy of an r-atom molecule, including all internal energy as well as the CM motion, will be 3r×kT/2 = 3rkT/2 joules. Hence, for every independent direction of motion that there is, the average kinetic energy for that direction will be kT/2. [Note that ‘independent direction of motion’ is used, somewhat confusingly, as a synonym for degree of freedom, so we don’t have three but six ‘independent directions of motion’ for the diatomic molecule. I just wanted to note that because I do think it causes confusion when reading a textbook like Feynman’s.] Now, that *total* amount of energy, i.e. 3r(kT/2), will be split as follows according to the “theorem concerning the average energy of the CM motion”, as Feynman terms it:

- The kinetic energy for the CM motion of each molecule is, and will always be, (3/2)kT.
- The remainder, i.e. r(3/2)kT – (3/2)kT = (3/2)(r–1)kT, is *internal* vibrational and rotational kinetic energy, i.e. the sum of *all* vibratory and rotational kinetic energy but *excluding* the energy of the CM motion of the molecule.
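Here’s a small sketch of that energy-split theorem, just to make the bookkeeping concrete (the function name and the 300 K temperature are mine, of course):

```python
# Average-energy split for an r-atom molecule: total = 3r(kT/2),
# the CM motion always takes (3/2)kT, and the remainder is internal.
k = 1.380649e-23  # Boltzmann constant, J/K

def energy_split(r, T):
    """Return (total, cm, internal) average energies in joules."""
    total = 3 * r * k * T / 2     # 3r degrees of freedom, kT/2 each
    cm = 3 * k * T / 2            # center-of-mass motion: (3/2)kT
    internal = total - cm         # remainder: (3/2)(r-1)kT
    return total, cm, internal

total, cm, internal = energy_split(2, 300.0)  # diatomic molecule
print(total, cm, internal)  # for r = 2, the internal part equals the CM part
```

For the diatomic case (r = 2) you can see the result mentioned above: the total energy splits into two equal halves.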

*Phew!* That’s quite something. And we’re not quite there yet.

**The analysis for photon gas**

*Photon gas?* What’s that? Well… Imagine our box is the gas in a *very* hot star, hotter than the sun. As Feynman writes it: “The sun is not hot enough; there are still too many atoms, but at still higher temperatures in certain very hot stars, we may neglect the atoms and suppose that the only objects that we have in the box are photons.” Well… Let’s just go along with it. We know that photons have no mass but they do have some very tiny momentum, which we related to the magnetic field vector, as opposed to the electric field. It’s tiny indeed. Most of the energy of light goes into the electric field. However, we noted that we can write p as p = E/*c*, with *c* the speed of light (3×10^{8} m/s). Now, we had that P = (2/3)n〈mv^{2}/2〉 formula for gas, and we know that the momentum **p** is defined as **p** = m**v**. So we can substitute mv^{2} by (mv)v = pv. So we get P = (2/3)n〈pv/2〉 = (1/3)n〈pv〉.

Now, the energy of photons is not quite the same as the kinetic energy of an atom or a molecule, i.e. mv^{2}/2. In fact, we know that, for photons, the speed v is equal to *c*, and p*c* = E. Hence, 〈pv〉 = 〈p*c*〉 = 〈E〉, so P = (1/3)n〈E〉 and, multiplying both sides by the volume V (and noting that nV = N and N〈E〉 = U), we get

PV = U/3
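Just to make that derivation tangible, here’s a toy numeric check in Python. The photon density and mean photon energy are made-up numbers, chosen only to show that P = (1/3)n〈E〉 gives PV = U/3:

```python
# Toy check of PV = U/3 for photon gas: with p = E/c and v = c,
# <pv> = <E>, so P = (1/3) n <E>. All numbers below are made up.
n = 1.0e20        # photons per m^3 (illustrative)
E_mean = 4.0e-19  # mean photon energy, J (roughly optical, illustrative)
V = 2.0           # volume, m^3

P = n * E_mean / 3.0   # pressure: P = (1/3) n <E>
U = n * V * E_mean     # total energy: U = N <E>, with N = nV

print(P * V, U / 3.0)  # the two numbers agree (up to rounding)
```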

So that’s a formula that’s very similar to the one we had for gas, for which we wrote: PV = NkT = 2U/3. The only difference is the factor 2, and that’s because of the different energy concepts involved: the energy of a photon (E = p*c*) is different from the concept of *kinetic* energy. But the result is very nice: we have a similar formula for the *compressibility* of gas and radiation. In fact, both PV = 2U/3 and PV = U/3 will usually be written, more generally, as:

PV = (γ – 1)U

Hence, this γ would be γ = 5/3 ≈ 1.667 for gas and 4/3 ≈ 1.333 for *photon *gas. Now, I’ll skip the detail (it involves a differential analysis) but it can be shown that this general formula, PV = (γ – 1)U, implies that PV^{γ }(i.e. the pressure times the volume raised to the power γ) must equal some constant, so we write:

PV^{γ }= C
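I skipped the differential analysis, but we can at least check the claim numerically. The sketch below compresses a gas adiabatically in small steps, using only PV = (γ – 1)U and dU = –PdV (no heat exchanged), and verifies that PV^{γ} barely drifts. The step size and starting values are arbitrary choices of mine:

```python
# Numeric check that PV = (γ-1)U together with dU = -P dV (adiabatic
# compression, no heat) implies PV^γ = constant. Simple Euler integration.
gamma = 5.0 / 3.0          # monatomic gas
P, V = 1.0e5, 1.0          # arbitrary start: 10^5 Pa, 1 m^3
C0 = P * V ** gamma        # the quantity that should stay constant

dV = -1.0e-5
for _ in range(50_000):         # compress from V = 1 down to V = 0.5
    U = P * V / (gamma - 1.0)   # energy from PV = (γ-1)U
    U -= P * dV                 # dU = -P dV (dV < 0, so U increases)
    V += dV
    P = (gamma - 1.0) * U / V   # back out the new pressure

drift = abs(P * V ** gamma - C0) / C0
print(f"relative drift of PV^γ after halving V: {drift:.2e}")
```

Halving the volume multiplies the pressure by 2^{γ} ≈ 3.17 here, while PV^{γ} stays put – which is exactly what the ‘differential analysis’ would have told us.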

So far so good. Back to our problem: blackbody radiation. What you should take away from this introduction is the following:

- Temperature is a measure of the average kinetic energy of the atoms or molecules in a gas. More specifically, it’s related to the mean kinetic energy of the CM motion of the atoms or molecules, which is equal to (3/2)kT, with k the Boltzmann constant and T the temperature expressed in Kelvin (i.e. the absolute temperature).
- If gas atoms or molecules have additional ‘degrees of freedom’, aka ‘independent directions of motion’, then each of these will absorb additional energy, namely kT/2.

**Energy and radiation**

The atoms in the box are atomic oscillators, and we’ve analyzed them before. What the analysis above added was that the average *kinetic* energy of the atoms going around is (3/2)kT and that, if we’re talking molecules consisting of r atoms, we have a formula for their *internal* kinetic energy as well. However, as oscillators, they also have energy separate from the kinetic energy we’ve been talking about already. How much? That’s a tricky analysis. Let me first remind you of the following:

- Oscillators have a natural frequency, usually denoted by the (angular) frequency ω_{0}.
- The sum of the potential and kinetic energy stored in an oscillator is a constant, unless there’s some damping constant, in which case the oscillation dies out. Here, you’ll remember the concept of the Q of an oscillator: the relevant formula is 1/Q = (dW/dt)/(ω_{0}W) = γ/ω_{0}, with γ the damping constant (not to be confused with the γ we used in that PV^{γ} = C formula).

Now, for gases, we said that, for every independent direction of motion there is, the average kinetic energy for that direction will be kT/2. I admit it’s a bit of a stretch of the imagination, but that’s how the blackbody radiation analysis really starts: our atomic oscillators will have an *average kinetic energy* equal to kT/2 and, hence, their *total* energy (kinetic *and* potential) should be twice that amount, according to the second remark I made above. So that’s kT. We’ll denote the total energy by W below, so we can write:

W = kT

Just to make sure we know what we’re talking about (one would forget, wouldn’t one?), kT is the product of the Boltzmann constant (1.38×10^{–23 }J/K) and the temperature of the gas (so note that the product is expressed in *joule *indeed). Hence, that product is the average energy of our atomic oscillators in the gas in our furnace.

Now, I am not going to repeat all of the detail we presented on atomic oscillators (I’ll refer you, once again, to Feynman) but you may or may not remember that atomic oscillators do have a Q indeed and, hence, some damping constant γ. So we can use and re-write that formula above as

dW/dt = (1/Q)(ω_{0}W) = (ω_{0}W)(γ/ω_{0}) = γW, which implies γ = (dW/dt)/W

What’s γ? Well, we’ve calculated the Q of an atomic oscillator already: Q = 3λ/4πr_{0}. Now, λ = 2π*c*/ω_{0 }(we just convert the wavelength into (angular) frequency using λν = *c*) and γ = ω_{0}/Q, so we get γ = 4πr_{0}ω_{0}/[3(2π*c*/ω_{0})] = (2/3)r_{0}ω_{0}^{2}/*c.* Now, plugging that result back into the equation above, we get

dW/dt = γW = (2/3)(r_{0}ω_{0}^{2}kT)/*c*
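To get a feel for the orders of magnitude, here’s a hedged numeric illustration of that dW/dt = (2/3)(r_{0}ω_{0}^{2}kT)/*c* formula. I am plugging in the classical electron radius for r_{0}, a visible-light frequency for ω_{0}, and a furnace-like temperature – all choices of mine, not values from the argument above:

```python
# Classical damping constant γ = (2/3) r0 ω0²/c and radiated power γ·kT.
# r0 = classical electron radius, ω0 = optical frequency (illustrative choices).
from math import pi

c = 3.0e8              # speed of light, m/s
r0 = 2.82e-15          # classical electron radius, m
k = 1.380649e-23       # Boltzmann constant, J/K
T = 6000.0             # furnace-like temperature, K

lam = 500e-9           # wavelength, m
w0 = 2 * pi * c / lam  # angular frequency: ω0 = 2πc/λ

gamma = (2.0 / 3.0) * r0 * w0**2 / c   # damping constant, 1/s
power = gamma * k * T                  # dW/dt = γW with W = kT, in watts

print(f"γ ≈ {gamma:.2e} 1/s, dW/dt ≈ {power:.2e} W")
```

So a single classical oscillator at optical frequencies would radiate away its energy on a 10-nanosecond timescale – a picture of the ‘natural linewidth’ we met when discussing the Q of an oscillator.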

Just in case you’d have difficulty following – I admit I did 🙂 – dW/dt is the average rate of radiation of light of (or near) frequency ω_{0}. I’ll let Feynman take over here:

Next we ask how much light must be shining on the oscillator. It must be enough that the energy absorbed from the light (and thereupon scattered) is just exactly this much. In other words, the emitted light is accounted for as *scattered* light from the light that is shining on the oscillator in the cavity. So we must now calculate how much light is scattered from the oscillator if there is a certain amount—unknown—of radiation incident on it. Let I(ω)dω be the amount of light energy there is at the frequency ω, within a certain range dω (because there is no light at *exactly* a certain frequency; it is spread all over the spectrum). So I(ω) is a certain *spectral distribution* which we are now going to find—it is the color of a furnace at temperature T that we see when we open the door and look in the hole. Now how much light is absorbed? We worked out the amount of radiation absorbed from a given incident light beam, and we calculated it in terms of a *cross section*. It is just as though we said that all of the light that falls on a certain cross section is absorbed. So the total amount that is re-radiated (scattered) is the incident intensity I(ω)dω multiplied by the cross section σ.

OK. That makes sense. I’ll not copy the rest of his story though, because this is a post in a blog, not a textbook. What we need to find is that I(ω). So I’ll refer you to Feynman for the details (these ‘details’ involve fairly complicated calculations, which are less important than the basic assumptions behind the model, which I presented above) and just write down the result:

I(ω) = ω^{2}kT/π^{2}*c*^{2}

This formula is *Rayleigh’s law*. [And, yes, it’s the same Rayleigh – *Lord* Rayleigh, I should say respectfully – as the one who invented that criterion I introduced in my previous post, but this law and that criterion have nothing to do with each other.] This ‘law’ gives the intensity, or the distribution, of light in a furnace. Feynman says it’s referred to as blackbody radiation because “the hole in the furnace that we look at is black when the temperature is zero.” […] OK. Whatever. What we call it doesn’t matter. *The point is that this function tells us that the intensity goes as the square of the frequency, which means that if we have a box at any temperature at all, and if we look at it, the X- and gamma rays will be burning our eyes out!* The graph below shows both the *theoretical* curve for two temperatures (T_{0} and 2T_{0}), as derived above (see the solid lines), and then the *actual* curves for those two temperatures (see the dotted lines).
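Here’s a tiny sketch of what that ω^{2} dependence means in practice. The only physics in it is the shape of Rayleigh’s distribution; the specific frequency is an arbitrary choice of mine:

```python
# Rayleigh's law grows as ω²: doubling the frequency quadruples the
# intensity, so the spectrum never turns over and the total energy diverges.
from math import pi

c = 3.0e8          # speed of light, m/s
k = 1.380649e-23   # Boltzmann constant, J/K
T = 6000.0         # temperature, K

def I_rayleigh(w):
    """Rayleigh's spectral distribution, I(ω) = ω²kT/(π²c²)."""
    return w**2 * k * T / (pi**2 * c**2)

w = 3.77e15                               # an optical frequency, rad/s
print(I_rayleigh(2 * w) / I_rayleigh(w))  # → 4.0
```

Since the intensity keeps climbing without bound, the integral over all frequencies is infinite – that’s the divergence the next paragraph is about.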

This is the so-called UV catastrophe: according to classical physics, an ideal black body at thermal equilibrium should emit radiation with infinite power. In reality, of course, it doesn’t: Rayleigh’s law is false. Utterly false. And so that’s where Planck came to the rescue, and he did so by assuming radiation is being emitted and/or absorbed in finite quanta: **multiples of hν (i.e. ħω), in fact**.

Indeed, Planck studied the *actual* curve and fitted it with another function. That function assumed the average energy of a harmonic oscillator was not just proportional to the temperature (T), but that it was also a function of the (natural) frequency of the oscillators. By fiddling around, he found a simple derivation for it which involved a very peculiar assumption. That assumption was that *the harmonic oscillator can take up energies only in lumps of ħω at a time*.

Hence, the assumption is that the harmonic oscillators can*not* take on just any (continuous) energy level. No. The allowable energy levels of the harmonic oscillators are equally spaced: E_{n} = nħω. Now, the actual derivation is at least as complex as the derivation of Rayleigh’s law, so I won’t do it here. Let me just give you the key assumptions:

- The gas consists of a large number of atomic oscillators, each with their own natural frequency ω_{0}.
- The permitted energy levels of these harmonic oscillators are equally spaced and ħω_{0} apart.
- The probability of occupying a level of energy E is P(E) = α*e*^{–E/kT}.

All the rest is tedious calculation, including the calculation of the parameters of the model, which include ħ (and, hence, h, because h = 2πħ) and are found by matching the theoretical curves to the actual curves as measured in experiments. I’ll just mention one result, and that’s the average energy of these oscillators:

〈E〉 = ħω/(*e*^{ħω/kT} – 1)

As you can see, the average energy does not only depend on the temperature T, but also on the (natural) frequency of the oscillators. So… Now you know where h comes from. As I relied so heavily on Feynman’s presentation here, I’ll include the link. As Feynman puts it: “This, then, was the first quantum-mechanical formula ever known, or ever discussed, and it was the beautiful culmination of decades of puzzlement. Maxwell knew that there was something wrong, and the problem was, what was *right*? Here is the quantitative answer of what is right instead of kT.”
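You can compute that average energy two ways and see that they agree: directly from Planck’s closed-form expression 〈E〉 = ħω/(*e*^{ħω/kT} – 1), and by doing the Boltzmann-weighted sum over the discrete levels E_{n} = nħω ourselves. This is only a numeric sketch of the idea, with a frequency and temperature picked by me:

```python
# Planck's average oscillator energy, computed (i) from the closed form
# <E> = ħω/(e^{ħω/kT} - 1) and (ii) as a Boltzmann-weighted sum over the
# discrete levels E_n = nħω. The two must agree.
from math import exp

hbar = 1.054571817e-34   # reduced Planck constant, J·s
k = 1.380649e-23         # Boltzmann constant, J/K

def planck_avg(w, T):
    x = hbar * w / (k * T)
    return hbar * w / (exp(x) - 1.0)

def planck_avg_sum(w, T, nmax=200):
    # P(E_n) ∝ e^{-E_n/kT}, with E_n = nħω
    weights = [exp(-n * hbar * w / (k * T)) for n in range(nmax)]
    Z = sum(weights)                                   # normalization
    return sum(n * hbar * w * p for n, p in enumerate(weights)) / Z

w, T = 3.77e15, 6000.0        # optical frequency, hot-furnace temperature
print(planck_avg(w, T))       # well below the classical kT ≈ 8.28e-20 J
print(planck_avg_sum(w, T))   # same number, from the discrete levels
```

At low frequencies (ħω ≪ kT) the formula reduces to the classical kT, but at high frequencies the average energy is exponentially suppressed – and that’s exactly what kills the UV catastrophe.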

So there you go. *Now* you know. 🙂 Oh… And in case you’d wonder: why the h? Well… Not sure. It’s said the h stands for *Hilfsgröße* (an ‘auxiliary quantity’), so it was just some constant that was supposed to help him out with the calculation. At that time, Planck did not suspect it would turn out to be one of the most fundamental physical constants. 🙂

**Post scriptum**: I went quite far in my presentation of the basics of the kinetic theory of gases, and you may wonder why: I didn’t use that theoretical PV^{γ} = C relation, did I? And why all the fuss about photon gas? Well… That was just to introduce the γ in that PV^{γ} = C relation, so I could note, here, in this *post scriptum*, that it comes with a similar problem. The γ exponent is referred to as the specific heat ratio of a gas, and it can be calculated theoretically as well–well… Sort of, because we skipped the actual derivation. However, its theoretical values also differ substantially from actually measured values, and the problem is the same: one should not assume a continuous value for 〈E〉. Agreement between theory and experiment can only be reached when the same assumptions as those of Planck are used: discrete energy levels, multiples of ħω: E_{n} = nħω. Also, the specific functional form which Planck used to resolve the blackbody radiation problem is to be used here as well. For more details, I’ll refer you to Feynman once more. I can’t say this is easy to digest, but then who said it would be easy? 🙂

The point to note is that the blackbody radiation problem wasn’t the only problem in the 19th century. As Feynman puts it: “One often hears it said that physicists at the latter part of the nineteenth century thought they knew all the significant physical laws and that all they had to do was to calculate more decimal places. Someone may have said that once, and others copied it. But a thorough reading of the literature of the time shows they were all worrying about something.” They were, and so Planck came up with something new. And then Einstein took it to the next level and then… Well… The rest is history. 🙂
