**Pre-script** (dated 26 June 2020): This post suffered from the removal of material by the dark force. Its layout also got tampered with and I don’t have the time or energy to put everything back in order. It remains relevant, I think. Among other things, it shows how Planck’s constant was actually discovered—historically and experimentally. If anything, the removal of material will help you to think things through for yourself. 🙂

**Original post**:

My previous post was tough. Tough for you–if you’ve read it. But tough for me too. 🙂

The blackbody radiation problem is complicated but, when everything is said and done, what the analysis says is that the ‘equipartition theorem’ in the kinetic theory of gases (or the ‘theorem concerning the average energy of the center-of-mass motion’, as Feynman terms it) is not correct. That equipartition theorem basically states that, in thermal equilibrium, energy is shared equally among all of its various forms. For example, the average kinetic energy per degree of freedom in the translational motion of a molecule should equal that of its rotational motions. The equipartition theorem is also quite precise: it states that the mean energy, for each atom or molecule and for each degree of freedom, is kT/2. Hence, that’s the (average) energy the 19th-century scientists also assigned to the atomic oscillators in a gas.

However, the discrepancy between the theoretical and the empirical results of their work shows that adding atomic oscillators–as radiators and absorbers of light–to the system (a box of gas that’s being heated) is *not* just a matter of adding additional ‘degrees of freedom’ to the system. It can’t be analyzed in ‘classical’ terms: the *actual* spectrum of blackbody radiation shows that these atomic oscillators do not absorb, on average, an amount of energy equal to kT/2. Hence, they are not just another ‘independent direction of motion’.

So what *are* they then? Well… Who knows? I don’t. But, as I didn’t quite go through the full story in my previous post, the least I can do is try to do that here. It should be worth the effort. In Feynman’s words: “This was the first quantum-mechanical formula ever known, or discussed, and it was the beautiful culmination of decades of puzzlement.” And then it does *not* involve complex numbers or wave functions, so that’s another reason why looking at the detail is kind of nice. 🙂

**Discrete energy levels and the nature of h**

To solve the blackbody radiation problem, Planck assumed that the permitted energy levels of the atomic harmonic oscillator were equally spaced, at ‘distances’ ħω_{0} apart from each other. That’s what’s illustrated below.

Now, I don’t want to make too many digressions from the main story, but this E_{n} = nħω_{0} formula obviously deserves some attention. First note that it immediately shows why the dimension of ħ is expressed in *joule-seconds* (J·s), or *electronvolt-seconds* (eV·s): we’re multiplying it by a frequency, i.e. something expressed per second (hence, its dimension is s^{–1}), in order to get a measure of energy: joules or, because of the atomic scale, electronvolts. [The eV is just a (much) smaller measure than the *joule*, but it amounts to the same: 1 eV ≈ 1.6×10^{−19} J.]

One thing to note is that the equal spacing consists of distances equal to ħω_{0}, *not* of ħ. Hence, while h, or ħ (ħ is the constant to be used when the frequency is expressed in radians per second, rather than oscillations per second, so ħ = h/2π), is now being referred to as the quantum of action (*das elementare Wirkungsquantum* in German), Planck referred to it as a *Hilfsgrösse* only (that’s why he chose the *h* as a symbol, it seems), i.e. an auxiliary constant only: the actual quantum of action is, of course, ΔE, i.e. the difference between the various energy levels, which is the *product* of ħ and ω_{0} (or of h and ν_{0} if we express frequency in oscillations per second, rather than as an angular frequency). Hence, Planck (and later Einstein) did *not* assume that an atomic oscillator emits or absorbs packets of energy as tiny as ħ or h, but packets of energy as big as ħω_{0} or, what amounts to the same (ħω = (h/2π)(2πν) = hν), hν_{0}. Just to give an example, the frequency of sodium light (ν) is 500×10^{12} Hz, and so its energy is E = hν. That’s not a lot–about 2 eV only–but it still packs 500×10^{12} ‘quanta of action’!
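To make that sodium example concrete, here is a quick Python sketch (the constant and the frequency are the values quoted above; the variable names are my own):

```python
# Energy of a photon of sodium light: E = h·ν.
h_eV = 4.135667696e-15   # Planck's constant in eV·s
nu_sodium = 500e12       # frequency of sodium light in Hz

E = h_eV * nu_sodium     # photon energy in eV
print(round(E, 2))       # ≈ 2.07 eV, i.e. about 2 eV, as stated above
```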

Another thing to note is that ω (or ν) is a continuous variable. Hence, the assumption of equally spaced energy *levels* does not imply that energy itself is a discrete variable: light can have any frequency and, hence, we can also imagine photons with any energy level. The only thing we’re saying is that the energy of a photon *of a specific color* (i.e. a specific frequency ν) will be a multiple of hν.

**Probability assumptions**

The second key assumption of Planck as he worked towards a solution of the blackbody radiation problem was that the probability (P) of occupying a level of energy E is P(E) = α*e*^{−E/kT}. OK… Why not? But what *is* this assumption really? You’ll think of some ‘bell curve’, of course. But… No. That wouldn’t make sense. Remember that the energy has to be positive. The general shape of this P(E) curve is shown below.

The highest probability density is near E = 0, and it goes down as E gets larger, with kT determining the slope of the curve (just take the derivative). In short, this assumption basically states that higher energy levels are not so likely, and that *very* high energy levels are *very* unlikely. Indeed, this formula implies that the relative chance, i.e. the probability of being in state E_{1} relative to the chance of being in state E_{0}, is P_{1}/P_{0} = *e*^{−(E1–E0)/kT} = *e*^{−ΔE/kT}. Now, P_{1} is n_{1}/N and P_{0} is n_{0}/N and, hence, we find that n_{1} must be equal to n_{0}*e*^{−ΔE/kT}. What this means is that the atomic oscillator is *less* likely to be in a higher energy state than in a lower one.
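A small Python sketch of that n_{1}/n_{0} = *e*^{−ΔE/kT} ratio (the ΔE and T values are illustrative assumptions of mine, not anything from the text):

```python
import math

k_eV = 8.617333262e-5  # Boltzmann constant in eV/K

def occupation_ratio(delta_E_eV, T):
    """Relative occupation of the higher state: n1/n0 = exp(-ΔE/kT)."""
    return math.exp(-delta_E_eV / (k_eV * T))

# At room temperature (T = 300 K), kT ≈ 0.026 eV, so a 2 eV gap is
# almost never bridged, while a gap of exactly kT gives a ratio of 1/e.
print(occupation_ratio(2.0, 300))           # vanishingly small
print(occupation_ratio(k_eV * 300, 300))    # ≈ 0.368, i.e. 1/e
```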

That makes sense, doesn’t it? I mean… I don’t want to criticize those 19th century scientists but… What were they thinking? Did they really imagine that infinite energy levels were as likely as… Well… More down-to-earth energy levels? I mean… A mechanical spring will break when you overload it. Hence, I’d think it’s pretty obvious those atomic oscillators cannot be loaded with just about anything, can they? Garbage in, garbage out: *of course*, that theoretical spectrum of blackbody radiation didn’t make sense!

Let me copy Feynman now, as the rest of the story is pretty straightforward:

Now, we have a lot of oscillators here, and each is a vibrator of frequency ω_{0}. Some of these vibrators will be in the bottom quantum state, some will be in the next one, and so forth. What we would like to know is the average energy of all these oscillators. To find out, let us calculate the total energy of all the oscillators and divide by the number of oscillators. That will be the average energy per oscillator in thermal equilibrium, and will also be the energy that is in equilibrium with the blackbody radiation and that should go in the equation for the intensity of the radiation as a function of the frequency, instead of kT. [See my previous post: that equation is I(ω) = (ω^{2}kT)/(π^{2}*c*^{2}).]

Thus we let N_{0} be the number of oscillators that are in the ground state (the lowest energy state); N_{1} the number of oscillators in the state E_{1}; N_{2} the number that are in state E_{2}; and so on. According to the hypothesis (which we have not proved) that in quantum mechanics the law that replaced the probability *e*^{−P.E./kT} or *e*^{−K.E./kT} in classical mechanics is that the probability goes down as *e*^{−ΔE/kT}, where ΔE is the excess energy, we shall assume that the number N_{1} that are in the first state will be the number N_{0} that are in the ground state, times *e*^{−ħω/kT}. Similarly, N_{2}, the number of oscillators in the second state, is N_{2} = N_{0}*e*^{−2ħω/kT}. To simplify the algebra, let us call *e*^{−ħω/kT} = x. Then we simply have N_{1} = N_{0}x, N_{2} = N_{0}x^{2}, …, N_{n} = N_{0}x^{n}.

The total energy of all the oscillators must first be worked out. If an oscillator is in the ground state, there is no energy. If it is in the first state, the energy is ħω, and there are N_{1} of them. So N_{1}ħω, or ħωN_{0}x, is how much energy we get from those. Those that are in the second state have 2ħω, and there are N_{2} of them, so N_{2}⋅2ħω = 2ħωN_{0}x^{2} is how much energy we get, and so on. Then we add it all together to get E_{tot} = N_{0}ħω(0+x+2x^{2}+3x^{3}+…).

And now, how many oscillators are there? Of course, N_{0} is the number that are in the ground state, N_{1} in the first state, and so on, and we add them together: N_{tot} = N_{0}(1+x+x^{2}+x^{3}+…). Thus the average energy is E_{tot}/N_{tot} = ħω(0+x+2x^{2}+3x^{3}+…)/(1+x+x^{2}+x^{3}+…) = ħω/(*e*^{ħω/kT}–1).

Feynman concludes as follows: “This, then, was the first quantum-mechanical formula ever known, or ever discussed, and it was the beautiful culmination of decades of puzzlement. Maxwell knew that there was something wrong, and the problem was, what was *right*? Here is the quantitative answer of what is right instead of kT. This expression should, of course, approach kT as ω → 0 or as T → ∞.”
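That limit is easy to check numerically. A minimal sketch, using the closed-form average energy ħω/(*e*^{ħω/kT}–1) that the post scriptum below works out, in units where kT = 1 (a simplification of mine, for readability):

```python
import math

def avg_energy(hbar_omega, kT):
    """Planck's average oscillator energy: ħω / (e^(ħω/kT) - 1)."""
    return hbar_omega / (math.exp(hbar_omega / kT) - 1.0)

kT = 1.0  # work in units where kT = 1
# As ω → 0, the average energy approaches kT (the classical result)...
print(avg_energy(1e-6, kT))   # ≈ 1.0, i.e. ≈ kT
# ...while for ħω >> kT it is exponentially suppressed:
print(avg_energy(20.0, kT))   # a tiny fraction of kT
```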

It does, *of course*. And so Planck’s analysis does result in a theoretical I(ω) curve that matches the observed I(ω) curve as a function of both temperature (T) and frequency (ω). But what is it, then? What’s the equation describing the dotted curves? We get it by replacing kT in the classical formula by the new average energy: I(ω) = ħω^{3}/[π^{2}*c*^{2}(*e*^{ħω/kT}–1)].

I’ll just quote Feynman once again to explain the shape of those dotted curves: “We see that for a large ω, even though we have ω^{3} in the numerator, there is an *e* raised to a tremendous power in the denominator, so the curve comes down again and does not “blow up”—we do not get ultraviolet light and x-rays where we do not expect them!”
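The behaviour Feynman describes is easy to reproduce by comparing the classical I(ω) = ω^{2}kT/(π^{2}*c*^{2}) with the same expression with kT replaced by Planck’s average energy. A sketch in units where ħ = c = 1 (again a simplification of mine):

```python
import math

def I_classical(omega, kT):
    """Rayleigh-Jeans intensity I(ω) = ω²·kT/(π²c²), with c = 1."""
    return omega**2 * kT / math.pi**2

def I_planck(omega, kT):
    """Same formula, but with kT replaced by ħω/(e^(ħω/kT) - 1), ħ = 1."""
    return (omega**2 / math.pi**2) * omega / (math.exp(omega / kT) - 1.0)

kT = 1.0
# At low frequency the two curves agree (the classical limit)...
print(I_classical(0.01, kT), I_planck(0.01, kT))
# ...but at high frequency the Planck curve "comes down again",
# while the classical one keeps growing:
print(I_classical(50.0, kT), I_planck(50.0, kT))
```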

**Is the analysis necessarily discrete?**

One question I can’t answer, because I’m just not strong enough in math, is whether there would be any other way to derive the actual blackbody spectrum. I mean… This analysis obviously makes sense and, hence, provides a theory that’s consistent and in accordance with experiment. However, the question whether or not it would be possible to develop another theory, *without having recourse to the assumption that energy levels in atomic oscillators are discrete and equally spaced, with the ‘distance’ between them equal to hν*_{0}, is not easy to answer. I surely can’t, as I am just a novice, but I can imagine smarter people than me have thought about this question. The answer must be negative, because I don’t know of any other theory: quantum mechanics obviously prevailed. Still… I’d be interested to see the alternatives that must have been considered.

**Post scriptum: **The “playing with the sums” is a bit confusing. The key to the formula above is the substitution of (0+x+2x^{2}+3x^{3}+…)/(1+x+x^{2}+x^{3}+…) by 1/[(1/x)–1] = 1/[*e*^{ħω/kT}–1]. Now, the denominator 1+x+x^{2}+x^{3}+… is the Maclaurin series for 1/(1–x). So we have:

(0+x+2x^{2}+3x^{3}+…)/(1+x+x^{2}+x^{3}+…) = (0+x+2x^{2}+3x^{3}+…)(1–x)

= x+2x^{2}+3x^{3}… –x^{2}–2x^{3}–3x^{4}… = x+x^{2}+x^{3}+x^{4}…

= –1+(1+x+x^{2}+x^{3}…) = –1 + 1/(1–x) = [–(1–x)+1]/(1–x) = x/(1–x).

Note the tricky bit: if x = *e*^{−ħω/kT}, then *e*^{ħω/kT} is x^{−1} = 1/x, and so we have (1/x)–1 in the denominator of that (mean) energy formula, *not* 1/(x–1). Now 1/[(1/x)–1] = 1/[(1–x)/x] = x/(1–x), indeed, and so the formula comes out alright.
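For those who want to double-check the algebra, the truncated sums can be compared numerically (the cut-off of 200 terms is an arbitrary choice of mine, more than enough for x well below 1):

```python
import math

def series_ratio(x, n_terms=200):
    """Truncated (0 + x + 2x² + 3x³ + …) / (1 + x + x² + x³ + …)."""
    numerator = sum(n * x**n for n in range(n_terms))
    denominator = sum(x**n for n in range(n_terms))
    return numerator / denominator

x = math.exp(-1.5)  # x = e^(-ħω/kT) for a sample value ħω/kT = 1.5
print(series_ratio(x))            # matches x/(1 - x)...
print(x / (1 - x))                # ...which equals 1/(e^(ħω/kT) - 1):
print(1 / (math.exp(1.5) - 1))
```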

Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology.