Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac statistics

I’ve discussed statistics, in the context of quantum mechanics, a couple of times already (see, for example, my post on amplitudes and statistics). However, I never took the time to properly explain those distribution functions which are referred to as the Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac distribution functions respectively. Let me try to do that now—without, hopefully, getting lost in too much math! It should be a nice piece, as it connects quantum mechanics with statistical mechanics, i.e. two topics I had nicely separated so far. 🙂

You know the Boltzmann Law now, which says that the probabilities of different conditions of energy are given by e^(−energy/kT) = 1/e^(energy/kT). Different ‘conditions of energy’ can be anything: density, molecular speeds, momenta, whatever. The point is: we have some probability density function f, and it’s a function of the energy E, so we write:

f(E) = C·e^(−E/kT) = C/e^(E/kT)

C is just a normalization constant (all probabilities have to add up to one, so the integral of this function over its domain must be one), and k and T are also usual suspects: T is the (absolute) temperature, and k is the Boltzmann constant, which relates the temperature to the kinetic energy of the particles involved. We also know the shape of this function. For example, when we applied it to the density of the atmosphere at various heights (which are related to the potential energy, as P.E. = m·g·h), assuming constant temperature, we got the following graph. The shape of this graph is that of an exponential decay function (we’ll encounter it again, so just take a mental note of it).

[Graph: density of the atmosphere as a function of height]
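If you want to reproduce that curve, a few lines of code will do. The sketch below just evaluates n(h)/n0 = e^(−m·g·h/kT) for nitrogen at 0°C—those particular numbers are my own illustrative choice, not something taken from the graph above.

```python
import math

# Boltzmann's Law applied to the isothermal atmosphere: n(h) = n0·exp(-m·g·h/kT).
# Illustrative assumption: nitrogen (N2) at a constant 0 °C.
k = 1.380649e-23        # Boltzmann constant (J/K)
g = 9.81                # gravitational acceleration (m/s^2)
m = 28 * 1.66054e-27    # mass of an N2 molecule (kg)
T = 273.15              # assumed constant temperature (K)

def relative_density(h):
    """Density at height h relative to the density at h = 0."""
    return math.exp(-m * g * h / (k * T))

for h in (0, 1_000, 5_000, 10_000):
    print(f"h = {h:6d} m  ->  n/n0 = {relative_density(h):.3f}")
```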

A more interesting application is the quantum-mechanical approach to the theory of gases, which I introduced in my previous post. To explain the behavior of gases under various conditions, we assumed that gas molecules are like oscillators but that they can only take on discrete levels of energy. [That’s what quantum theory is about!] We denoted the various energy levels, i.e. the energies of the various molecular states, by E0, E1, E2,…, Ei,…, and if Boltzmann’s Law applies, then the probability of finding a molecule in the particular state Ei is proportional to e^(−Ei/kT). We can then calculate relative probabilities: the probability of being in state Ei, relative to the probability of being in state E0, is:

Pi/P0 = e^(−Ei/kT)/e^(−E0/kT) = e^(−(Ei–E0)/kT) = 1/e^((Ei–E0)/kT)

Now, Pi obviously equals ni/N, so it is the ratio of the number of molecules in state Ei (ni) and the total number of molecules (N). Likewise, P0 = n0/N and, therefore, we can write:

ni/n0 = e^(−(Ei−E0)/kT) = 1/e^((Ei–E0)/kT)

This formulation is just another Boltzmann Law, but it’s nice in that it introduces the idea of a ground state, i.e. the state with the lowest energy level. We may or may not want to equate E0 with zero. It doesn’t matter really: we can always shift all energies by some arbitrary constant because we get to choose the reference point for the potential energy.

So that’s the so-called Maxwell-Boltzmann distribution. Now, in my post on amplitudes and statistics, I had jotted down the formulas for the other distributions, i.e. the distributions when we’re not talking classical particles but fermions and/or bosons. As you know, fermions are particles governed by the Pauli exclusion principle: indistinguishable particles cannot be together in the same state. For bosons, it’s the other way around: having one in some quantum state actually increases the chance of finding another one there, and we can actually have an infinite number of them in the same state.

We also know that fermions and bosons are the real world: fermions are the matter-particles, bosons are the force-carriers, and our ‘Boltzmann particles’ are nothing but a classical approximation of the real world. Hence, even if we can’t see them in the actual world, the Fermi-Dirac and Bose-Einstein distributions are the real-world distributions. 🙂 Let me jot down the equations once again:

Fermi-Dirac (for fermions): f(E) = 1/[A·e^((E − EF)/kT) + 1]

Bose-Einstein (for bosons): f(E) = 1/[A·e^(E/kT) − 1]

We’ve got some other normalization constant here (A), which we shouldn’t be too worried about—for the time being, that is. Now, to see how these distributions are different from the Maxwell-Boltzmann distribution (which we should re-write as f(E) = C·e^(−E/kT) = 1/[A·e^(E/kT)] so as to make all formulas directly comparable), we should just make a graph. Please go online to find a graph tool (I found a new one recently—really easy to use), and just do it. You’ll see they are all like that exponential decay function. However, in order to make a proper comparison, we would actually need to calculate the normalization coefficients and, for the Fermi-Dirac distribution, we would also need the Fermi energy EF (note that, for simplicity, we did equate E0 with zero). Now, we could give it a try, but it’s much easier to google and find an example online.
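That said, if you do want to see the shapes for yourself, here’s a minimal sketch that just tabulates the three functions side by side. I’ve simply set A = 1 and EF = 0—arbitrary choices of mine, purely to see the shapes, since getting the normalization right takes more work—and I express the energy in units of kT.

```python
import numpy as np

# Compare the three distribution functions, with arbitrary A = 1 and E_F = 0,
# energies expressed in units of kT.
A, E_F = 1.0, 0.0
E_over_kT = np.linspace(0.1, 5, 50)

maxwell_boltzmann = 1.0 / (A * np.exp(E_over_kT))
fermi_dirac       = 1.0 / (A * np.exp(E_over_kT - E_F) + 1.0)
bose_einstein     = 1.0 / (A * np.exp(E_over_kT) - 1.0)

for E, mb, fd, be in zip(E_over_kT[::10], maxwell_boltzmann[::10],
                         fermi_dirac[::10], bose_einstein[::10]):
    print(f"E/kT = {E:4.1f}   MB = {mb:.3f}   FD = {fd:.3f}   BE = {be:.3f}")
```

You’ll see the Bose-Einstein values sit above the Maxwell-Boltzmann ones at low energies, the Fermi-Dirac values below, and all three coming together as the energy increases.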

The HyperPhysics website of Georgia State University gives us one: the example assumes 6 particles sharing 9 units of energy (so the occupied energy levels can run from 0 to 9), and the table and graph below compare the Maxwell-Boltzmann and Bose-Einstein distributions for the model.

[Graph and table: Maxwell-Boltzmann versus Bose-Einstein average occupation numbers for the example]

Now that is an interesting example, isn’t it? In this example (but all depends on its assumptions, of course), the Maxwell-Boltzmann and Bose-Einstein distributions are almost identical. Having said that, we can clearly see that the lower energy states are, indeed, more probable with Bose-Einstein statistics than with the Maxwell-Boltzmann statistics. While the difference is not dramatic at all in this example, the difference does become very dramatic, in reality, with large numbers (i.e. high matter density) and, more importantly, at very low temperatures, at which bosons can condense into the lowest energy state. This phenomenon is referred to as Bose-Einstein condensation: it causes superfluidity and superconductivity, and it’s real indeed: it has been observed with supercooled He-4, which is not an everyday substance, but real nevertheless!

What about the Fermi-Dirac distribution for this example? The Fermi-Dirac distribution is given below: the lowest energy state is now less probable, the mid-range energies much more, and none of the six particles occupy any of the four highest energy levels. Again, while the difference is not dramatic in this example, it can become very dramatic, in reality, with large numbers (read: high matter density) and very low temperatures: at absolute zero, all of the possible energy states up to the Fermi energy level will be occupied, and all the levels above the Fermi energy will be vacant.

[Graph and table: Fermi-Dirac average occupation numbers for the example]

What can we make out of all of this? First, you may wonder why we actually have more than one particle in one state above: doesn’t that contradict the Pauli exclusion principle? No. We need to distinguish micro- and macro-states. In fact, the example assumes we’re talking electrons here, and so we can have two particles in each energy state—with opposite spin, however. At the same time, it’s true we cannot have three, or more, in any state. That results, in the example we’re looking at here, in five possible distributions only, as shown below.

[Table: the five distributions allowed under the Fermi-Dirac hypothesis]

The diagram is an interesting one: if the particles were to be classical particles, or bosons, then 26 combinations are possible, including the five Fermi-Dirac combinations, as shown above. Note the little numbers above the 26 possible combinations (e.g. 6, 20, 30,… 180): they are proportional to the likelihood of occurring under the Maxwell-Boltzmann assumption (so if we assume the particles are ‘classical’ particles). Let me introduce you to the math behind the example by using the diagram below, which shows three possible distributions/combinations (I know the terminology is quite confusing—sorry for that!).

[Table: three of the 26 possible distributions, with their Maxwell-Boltzmann weights]

If we could distinguish the particles, then we’d have 2002 micro-states, which is the total of all those little numbers on top of the combinations that are shown (6+60+180+…). However, the assumption is that we cannot distinguish the particles. Therefore, the first combination in the diagram above, with five particles in the zero energy state and one particle in state 9, accounts for only 6 of those 2002 micro-states and, hence, has a probability of just 6/2002 ≈ 0.003. In contrast, the second combination is 10 times more likely, and the third one is 30 times more likely! In any case, the point is, in the classical situation (and in the Bose-Einstein hypothesis as well), we have 26 possible macro-states, as opposed to 5 only for fermions, and so that leads to a very different density function. Capito?

No? Well, this blog is not a textbook on physics and, therefore, I should refer you to the mentioned site once again, which references a 1992 textbook on physics (Frank Blatt, Modern Physics, 1992) as the source of this example. However, I won’t do that: you’ll find the details in the Post Scriptum to this post. 🙂

Let’s first focus on the fundamental stuff, however. The most burning question is: if the real world consists of fermions and bosons, why is it that we only see the Maxwell-Boltzmann distribution in our actual (non-real?) world? 🙂 The answer is that both the Fermi-Dirac and Bose-Einstein distributions approach the Maxwell–Boltzmann distribution at higher temperatures and lower particle densities. In other words, we cannot see the Fermi-Dirac distributions (all matter is fermionic, except for weird stuff like superfluid helium-4 at 1 or 2 degrees Kelvin), but they are there!

Let’s approach it mathematically: the most general formula, encompassing both Fermi-Dirac and Bose-Einstein statistics, is:

Ni(Ei) ∝ 1/[e^((Ei − μ)/kT) ± 1]

If you’d google, you’d find a formula involving an additional coefficient, gi, which is the so-called degeneracy of the energy level Ei. I included it in the formula I used in the above-mentioned post of mine. However, I don’t want to make it any more complicated than it already is and, therefore, I omitted it this time. What you need to look at are the two terms in the denominator: e^((Ei − μ)/kT) and ±1.

From a math point of view, it is obvious that the values of e^((Ei − μ)/kT) + 1 (Fermi-Dirac) and e^((Ei − μ)/kT) − 1 (Bose-Einstein) will approach each other if e^((Ei − μ)/kT) is much larger than ±1, so if e^((Ei − μ)/kT) >> 1. That’s the case, obviously, if the (Ei − μ)/kT ratio is large, so if (Ei − μ) >> kT. In fact, (Ei − μ) should, obviously, be much larger than kT for the lowest energy levels too! Now, the conditions under which that is the case are associated with the classical situation (such as a cylinder filled with gas, for example). Why?

Well… […] Again, I have to say that this blog can’t substitute for a proper textbook. Hence, I am afraid I have to leave it to you to do the necessary research to see why. 🙂 The non-mathematical approach is to simply note that quantum effects, i.e. the ±1 term, only apply if the concentration of particles is high enough. Indeed, quantum effects appear if the concentration of particles is higher than the so-called quantum concentration. Only when the quantum concentration is reached will particles start interacting according to what they are, i.e. as bosons or as fermions. At higher temperature, that concentration will not be reached, except in massive objects such as a white dwarf (white dwarfs are stellar remnants with a mass like that of the Sun but a volume like that of the Earth). So, in general, we can say that at higher temperatures and at low concentration we will not have any quantum effects. That should settle the matter—as for now, at least.
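To put a rough number on that, the sketch below uses the standard expression for the quantum concentration, n_Q = (m·k·T/(2π·ħ²))^(3/2), and compares it to the actual particle density of helium, first as a gas at room temperature and then as a liquid near 4 K. The helium figures are round, illustrative values of my own, not numbers taken from the post.

```python
import math

k    = 1.380649e-23     # Boltzmann constant (J/K)
hbar = 1.054571817e-34  # reduced Planck constant (J·s)
m_He = 4 * 1.66054e-27  # mass of a helium-4 atom (kg)

def quantum_concentration(m, T):
    """Standard quantum concentration n_Q = (m·k·T / (2·pi·hbar^2))^(3/2)."""
    return (m * k * T / (2 * math.pi * hbar**2)) ** 1.5

# Helium gas at room temperature and atmospheric pressure: n << n_Q, so classical.
n_gas = 101_325 / (k * 300)                        # particles per m^3 from PV = NkT
print(n_gas / quantum_concentration(m_He, 300))    # ≈ 3e-6

# Liquid helium near 4 K (density ~125 kg/m^3): n is of the order of n_Q,
# so quantum (Bose-Einstein) effects kick in.
n_liquid = 125 / m_He
print(n_liquid / quantum_concentration(m_He, 4))   # ≈ 1.6
```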

You’ll have one last question: we derived Boltzmann’s Law from the kinetic theory of gases, but how do we derive that Ni(Ei) = 1/[A·e^((Ei − μ)/kT) ± 1] expression? Good question but, again, we’d need more than a few pages to explain that! The answer is: quantum mechanics, of course! Go check it out in Feynman’s third Volume of Lectures! 🙂

Post scriptum: combinations, permutations and multiplicity

The mentioned example from HyperPhysics is really interesting, if only because it shows you also need to master a bit of combinatorics to get into quantum mechanics. Let’s go through the basics. If we have n distinct objects, we can order them in n! ways, with n! (read: n factorial) equal to n·(n–1)·(n–2)·…·3·2·1. Note that 0! is equal to 1, per definition. We’ll need that definition.

For example, a red, blue and green ball can be ordered in 3·2·1 = 6 ways. Each way is referred to as a permutation.

Besides permutations, we also have the concept of a k-permutation, which we can denote in a number of ways but let’s choose P(n, k). [The P stands for permutation here, not for probability.] P(n, k) is the number of ways to pick k objects out of a set of n objects. Again, the objects are supposed to be distinguishable. The formula is P(n, k) = n·(n–1)·(n–2)·…·(n–k+1) = n!/(n–k)!. That’s easy to understand intuitively: on your first pick you have n choices; on your second, n–1; on your third, n–2, etcetera. When n = k, we obviously get n! again.

There is a third concept: the k-combination (as opposed to the k-permutation), which we’ll denote by C(n, k). That’s when the order within our subset doesn’t matter: an ace, a queen and a jack taken out of some card deck are a queen, a jack, and an ace: we don’t care about the order. If we have k objects, there are k! ways of ordering them and, hence, we just have to divide P(n, k) by k! to get C(n, k). So we write: C(n, k) = P(n, k)/k! = n!/[(n–k)!k!]. You recognize C(n, k): it’s the binomial coefficient.

Now, the HyperPhysics example illustrating the three mentioned distributions (Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac) is a bit more complicated: we need to distribute q units of energy among N particles. Every possible configuration is referred to as a micro-state, and the total number of possible micro-states is referred to as the multiplicity of the system, denoted by Ω(N, q). The formula for Ω(N, q) is another binomial coefficient: Ω(N, q) = (q+N–1)!/[q!(N–1)!]. In our example, Ω(6, 9) = (9+6–1)!/[9!(6–1)!] = 2002.
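All of these counting formulas are easy to check with a quick sketch—Python’s math module has the factorial, permutation and combination functions built in:

```python
from math import factorial, perm, comb

# The basic counting formulas from the text.
n, k = 3, 2
print(factorial(3))   # 6: ways to order a red, blue and green ball
print(perm(n, k))     # P(n, k) = n!/(n-k)!        -> 6
print(comb(n, k))     # C(n, k) = n!/[(n-k)!·k!]   -> 3

# Multiplicity of the system: q = 9 units of energy shared by N = 6 particles.
N, q = 6, 9
omega = factorial(q + N - 1) // (factorial(q) * factorial(N - 1))
print(omega)          # 2002 micro-states
```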

In our example, however, we do not have distinct particles and, therefore, we only have 26 macro-states (as opposed to 2002 micro-states), which are also referred to, confusingly, as distributions or combinations.

Now, the number of micro-states associated with the same macro-state is given by yet another formula: it is equal to N!/[n0!·n1!·n2!·…·n9!], with nj the number of particles in level j. [See why we need the 0! = 1 definition? It ensures unoccupied states do not affect the calculation.] So that’s how we get those numbers 6, 60 and 180 for those three macro-states.

But how do we calculate those average numbers of particles for each energy level? In other words, how do we calculate the probability densities under the Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein hypothesis respectively?

For the Maxwell-Boltzmann distribution, we proceed as follows: for each energy level j (or Ej, I should say), we calculate nj = ∑nij·Pi over all macro-states i. In this summation, we have nij, which is the number of particles in energy level j in macro-state i, while Pi is the probability of macro-state i as calculated by the ratio of (i) the number of micro-states associated with macro-state i and (ii) the total number of micro-states. For Pi, we gave the example of 6/2002 ≈ 0.3%. For 60 and 180, we get 60/2002 ≈ 3% and 180/2002 ≈ 9%. Calculating all the nj‘s for j ranging from 0 to 9 should yield the numbers and the graph below indeed.

[Graph: Maxwell-Boltzmann average occupation numbers for the example]

OK. That’s how it works for Maxwell-Boltzmann. Now, it is obvious that the Fermi-Dirac and the Bose-Einstein distribution should not be calculated in the same way because, if they were, they would not be different from the Maxwell-Boltzmann distribution! The trick is as follows.

For the Bose-Einstein distribution, we give all macro-states equal weight—so that’s a weight of one, as shown below. Hence, the probability Pi is, quite simply, 1/26 ≈ 3.85% for all 26 macro-states. So we use the same nj = ∑nij·Pi formula but with Pi = 1/26.

[Table: Bose-Einstein averages, with all 26 macro-states weighted equally]

Finally, I already explained how we get the Fermi-Dirac distribution: we can only have zero, one or two fermions at each energy level—never more than two! Hence, out of the 26 macro-states, only five are actually possible under the Fermi-Dirac hypothesis, as illustrated below once more. So it’s a very different distribution indeed!

[Table: the five distributions allowed under the Fermi-Dirac hypothesis]
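If you want to verify the counting, the sketch below redoes the whole exercise in a few lines: it enumerates the 26 macro-states, computes the Maxwell-Boltzmann weights (which should add up to 2002), and then averages the occupation numbers under the three hypotheses exactly as described above. For the Fermi-Dirac case I’m assuming the electrons-with-opposite-spin rule, i.e. at most two particles per level.

```python
from itertools import combinations_with_replacement
from math import factorial
from collections import Counter

N, Q = 6, 9                       # 6 particles sharing 9 units of energy
levels = range(Q + 1)             # possible energy levels 0, 1, ..., 9

# A macro-state is a multiset of 6 level numbers adding up to 9.
macrostates = [s for s in combinations_with_replacement(levels, N) if sum(s) == Q]
print(len(macrostates))                       # 26

def weight(state):
    """Micro-states per macro-state: N!/(n0!·n1!·...·n9!)."""
    w = factorial(N)
    for n in Counter(state).values():
        w //= factorial(n)
    return w

print(sum(weight(s) for s in macrostates))    # 2002

def average_occupation(weights):
    """Average number of particles in each level, weighting the macro-states."""
    Z = sum(weights)
    return [sum(Counter(s)[j] * w for s, w in zip(macrostates, weights)) / Z
            for j in levels]

mb = average_occupation([weight(s) for s in macrostates])         # Maxwell-Boltzmann
be = average_occupation([1] * len(macrostates))                   # Bose-Einstein: equal weights
fd = average_occupation([1 if max(Counter(s).values()) <= 2 else 0
                         for s in macrostates])                   # Fermi-Dirac: at most 2 per level

for name, dist in (("MB", mb), ("BE", be), ("FD", fd)):
    print(name, [round(x, 2) for x in dist])
```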

Now, you’ll probably still have questions. For example, why does the assumption, for the Bose-Einstein analysis, that macro-states have equal probability favor the lower energy states? The answer is that the model also integrates other constraints: first, when associating a particle with an energy level, we do not favor one energy level over another, so all energy levels have equal probability. However, at the same time, the whole system has some fixed energy level, and so we cannot put the particles in the higher energy levels only! At the same time, we know that, if we have N particles, and the probability of a particle having some energy level j is the same for all j, then they are likely not to be all at the same energy level: they’ll be distributed, effectively, as evidenced by the very low chance (0.3% only) of having 5 particles in the ground state and 1 particle at a higher level, as opposed to the 3% and 9% chance of the other two combinations shown in that diagram with three possible Maxwell-Boltzmann (MB) combinations.

So what happens when assigning an equal probability to all 26 possible combinations (with value 1/26) is that the combinations that were previously rather unlikely – because they did have a rather heavy concentration of particles in the ground state only – are now much more likely. So that’s why the Bose-Einstein distribution, in this example at least, is skewed towards the lowest energy level—as compared to the Maxwell-Boltzmann distribution, that is.

So that’s what’s behind it, and that should also answer the other question you surely have when looking at those five acceptable Fermi-Dirac configurations: why don’t we have the same five configurations starting from the top down, rather than from the bottom up? Now you know: such a configuration would have much higher energy overall, and so that’s not allowed under this particular model.

There’s also this other question: we said the particles were indistinguishable, but so then we suddenly say there can be two at any energy level, because their spin is opposite. It’s obvious this is rather ad hoc as well. However, if we’d allow only one particle at any energy level, we’d have no allowable combinations and, hence, we’d have no Fermi-Dirac distribution at all in this example.

In short, the example is rather intuitive, which is actually why I like it so much: it shows how bosonic and fermionic behavior appear rather gradually, as a consequence of variables that are defined at the system level, such as density, or temperature. So, yes, you’re right if you think the HyperPhysics example lacks rigor. That’s why I think it’s such a wonderful pedagogic device. 🙂

The Quantum-Mechanical Gas Law

In my previous posts, it was mentioned repeatedly that the kinetic theory of gases is not quite correct: the experimentally measured values of the so-called specific heat ratio (γ) vary with temperature and, more importantly, their values differ, in general, from what classical theory would predict. It works, more or less, for noble gases, which do behave as ideal gases and for which γ is what the kinetic theory of gases would want it to be: γ = 5/3—but we get in trouble immediately, even for simple diatomic gases like oxygen or hydrogen, as illustrated below: the theoretical value is 9/7 (so that’s 1.286, more or less), but the measured value is very different.

[Graph: measured specific heat ratio versus temperature for hydrogen and oxygen]

Let me quickly remind you how we get the theoretical number. According to classical theory, a diatomic molecule like oxygen can be represented as two atoms connected by a spring. Each of the atoms absorbs kinetic energy, and for each direction of motion (x, y and z), that energy is equal to kT/2, so the kinetic energy of both atoms – added together – is 2·3·kT/2 = 3kT. However, I should immediately add that not all of that energy is to be associated with the center-of-mass motion of the whole molecule, which determines the temperature of the gas: that energy is and remains equal to the 3kT/2, always. We also have rotational and vibratory motion. The molecule can rotate in two independent directions (and any combination of these directions, of course) and, hence, rotational motion is to absorb an amount of energy equal to 2·kT/2 = kT. Finally, the vibratory motion is to be analyzed as any other oscillation, so like a spring really. There is only one dimension involved and, hence, the kinetic energy here is just kT/2. However, we know that the total energy in an oscillator is the sum of the kinetic and potential energy, which adds another kT/2 term. Putting it all together, we find that the average energy for each diatomic particle is (or should be) equal to 7·kT/2 = (7/2)kT. Now, as mentioned above, the temperature of the gas (T) is proportional to the mean molecular energy of the center-of-mass motion only (in fact, that’s how temperature is defined), with the constant of proportionality equal to 3k/2. Hence, for monatomic ideal gases, we can write: U = N·(3k/2)T and, therefore, PV = NkT = (2/3)·U. Now, γ appears as follows in the ideal gas law: PV = (γ–1)U. Therefore, γ = 2/3 + 1 = 5/3, but so that’s for monatomic ideal gases only! The total energy of our N diatomic molecules is U = N·(7k/2)T and, therefore, PV = (2/7)·U. So γ must be γ = 2/7 + 1 = 9/7 ≈ 1.286 for diatomic gases, like oxygen and hydrogen.
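The counting above boils down to a one-line rule: with f quadratic terms (‘degrees of freedom’) per molecule, U = N·(f/2)·kT and PV = NkT = (2/f)·U, so γ = 1 + 2/f. A trivial sketch:

```python
# gamma from the number of degrees of freedom f: PV = NkT = (2/f)·U = (gamma - 1)·U.
def gamma(f):
    return 1 + 2 / f

print(gamma(3))   # monatomic: translation only                     -> 5/3 ≈ 1.667
print(gamma(5))   # diatomic, translation + rotation (no vibration) -> 7/5 = 1.4
print(gamma(7))   # diatomic, + vibration (kinetic and potential)   -> 9/7 ≈ 1.286
```

The intermediate 7/5 value is my own addition here: it corresponds to the case where the vibratory motion does not absorb any energy, which is relevant for the ‘freezing out’ discussion that follows.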

Phew! So that’s the theory. However, as we can see from the diagram, γ approaches that value only when we heat the gas to a few thousand degrees! So what’s wrong? One assumption is that certain kinds of motions “freeze out” as the temperature falls—although it’s kinda weird to think of something ‘freezing out’ at a thousand degrees Kelvin! In any case, at the end of the 19th century, that was the assumption that was advanced, very reluctantly, by scientists such as James Jeans. However, the mystery was about to be solved then, as Max Planck, even more reluctantly, presented his quantum theory of energy at the turn of the century itself.

But the quantum theory was confirmed and so we should now see how we can apply it to the behavior of gas. In my humble view, it’s a really interesting analysis, because we’re applying quantum theory here to a phenomenon that’s usually being analyzed as a classical problem only.

Boltzmann’s Law

We derived Boltzmann’s Law in our post on the First Principles of Statistical Mechanics. To be precise, we gave Boltzmann’s Law for the density of a gas (which we denoted by n = N/V)  in a force field, like a gravitational field, or in an electromagnetic field (assuming our gas particles are electrically charged, of course). We noted, however, Boltzmann’s Law was also applicable to much more complicated situations, like the one below, which shows a potential energy function for two molecules that is quite characteristic of the way molecules actually behave: when they come very close together, they repel each other but, at larger distances, there’s a force of attraction. We don’t really know the forces behind but we don’t need to: as long as these forces are conservative, they can combine in whatever way they want to combine, and Boltzmann’s Law will be applicable. [It should be obvious why. If you hesitate, just think of the definition of work and how it affects potential energy and all that. Work is force times distance, but when doing work, we’re also changing potential energy indeed! So if we’ve got a potential energy function, we can get all the rest.]

[Figure: potential energy of two molecules as a function of the distance between them]

Boltzmann’s Law itself is illustrated by the graph below, which also gives the formula for it: n = n0·e^(−P.E./kT).

[Graph: n = n0·e^(−P.E./kT) as a function of the potential energy]

It’s a graph starting at n = n0 for P.E. = 0, and it then decreases exponentially. [Funny expression, isn’t it? So as to respect mathematical terminology, I should say that it decays exponentially.] In any case, if anything, Boltzmann’s Law shows the natural exponential function is quite ‘natural’ indeed, because Boltzmann’s Law pops up in Nature everywhere! Indeed, Boltzmann’s Law is not limited to functions of potential energy only. For example, Feynman derives another Boltzmann Law for the distribution of molecular speeds or, so as to ensure the formula is also valid in relativity, the distribution of molecular momenta. In case you forgot, momentum (p) is the product of mass (m) and velocity (u), and the relevant Boltzmann Law is:

f(p)·dp = C·e^(−K.E./kT)·dp

The argument is not terribly complicated but somewhat lengthy, and so I’ll refer you to the link for more details. As for the f(p) function (and the dp factor on both sides of the equation), that’s because we’re not talking exact values of p but some range equal to dp and some probability of finding particles that have a momentum within that range. The principle is illustrated below for molecular speeds (denoted by u = p/m), so we have a velocity distribution below. The illustration for p would look the same: just substitute p for u.

[Graph: distribution of molecular speeds]
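As a small numeric illustration of that f(p)·dp = C·e^(−K.E./kT)·dp idea, the sketch below tabulates the Boltzmann factor for one velocity component and fixes the constant C numerically, so that the probabilities add up to one. The choice of nitrogen at 300 K is mine, purely for illustration.

```python
import numpy as np

k, T = 1.380649e-23, 300.0
m = 28 * 1.66054e-27                   # mass of an N2 molecule (kg)

u = np.linspace(-2000, 2000, 2001)     # one velocity component (m/s)
du = u[1] - u[0]
boltzmann_factor = np.exp(-(0.5 * m * u**2) / (k * T))   # exp(-K.E./kT)

C = 1.0 / (boltzmann_factor.sum() * du)                  # normalization constant
f = C * boltzmann_factor
print(f.sum() * du)                                      # ≈ 1: probabilities add up to one
print(f"most probable velocity component: {u[np.argmax(f)]:.0f} m/s")   # 0 m/s
```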

Boltzmann’s Law can be stated, much more generally, as follows:

The probability of different conditions of energy (E), potential or kinetic, is proportional to e^(−E/kT)

As Feynman notes, “This is a rather beautiful proposition, and a very easy thing to remember too!” It is, and we’ll need it for the next bit.

The quantum-mechanical theory of gases

According to quantum theory, energy comes in discrete packets, quanta, and any system, like an oscillator, will only have a discrete set of energy levels, i.e. states of different energy. An energy state is, obviously, a condition of energy and, hence, Boltzmann’s Law applies. More specifically, if we denote the various energy levels, i.e. the energies of the various molecular states, by E0, E1, E2,…, Ei,…, and if Boltzmann’s Law applies, then the probability of finding a molecule in the particular state Ei will be proportional to e^(−Ei/kT).

Now, we know we’ve got some constant there, but we can get rid of that by calculating relative probabilities. For example, the probability of being in state E1, relative to the probability of being in state E0, is:

P1/P0 = e^(−E1/kT)/e^(−E0/kT) = e^(−(E1–E0)/kT)

But the probability P1 should, obviously, also be equal to the ratio n1/N, i.e. the ratio of the number of molecules in state E1 and the total number of molecules. Likewise, P0 = n0/N. Hence, P1/P0 = n1/n0 and, therefore, we can write:

n1 = n0·e^(−(E1–E0)/kT)

What can we do with that? Remember we want to explain the behavior of non-monatomic gas—like diatomic gas, for example. Now we need some other assumption, obviously. As it turns out, the assumption that we can represent a system as some kind of oscillation still makes sense! In fact, the assumption that our diatomic molecule is like a spring is equally crucial to our quantum-theoretical analysis of gases as it is to our classical kinetic theory of gases. To be precise, in both theories, we look at it as a harmonic oscillator.

Don’t panic. A harmonic oscillator is, quite simply, a system that, when displaced from its equilibrium position, experiences some kind of restoring force. Now, for it to be harmonic, the force needs to be linear. For example, when talking springs, the restoring force F will be proportional to the displacement x. It basically means we can use a linear differential equation to analyze the system, like m·(d²x/dt²) = –kx. […] I hope you recognize this equation, because you should! It’s Newton’s Law: F = m·a with F = –k·x. If you remember the equation, you’ll also remember that harmonic oscillations were sinusoidal oscillations with a constant amplitude and a constant frequency. That frequency did not depend on the amplitude: because of the sinusoidal function involved, it was easier to write that frequency as an angular frequency, which we denoted by ω0 and which, in the case of our spring, was equal to ω0 = (k/m)^(1/2). So it’s a property of the system. Indeed, ω0 is the square root of the ratio of (1) k, which characterizes the spring (it’s its stiffness), and (2) m, i.e. the mass on the spring. Solving the differential equation yielded x = A·cos(ω0t + Δ) as a general solution, with A the (maximum) amplitude, and Δ some phase shift determined by our t = 0 point. Let me quickly jot down two more formulas: the potential energy in the spring is kx²/2, while its kinetic energy is mv²/2, as usual (so the kinetic energy depends on the mass and its velocity, while the potential energy only depends on the displacement and the spring’s stiffness). Of course, kinetic and potential energy add up to the total energy of the system, which is constant and proportional to the square of the (maximum) amplitude: K.E. + P.E. = E ∝ A². To be precise, E = kA²/2.
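A quick numeric check of those classical formulas—x(t) = A·cos(ω0t + Δ), with K.E. + P.E. constant and equal to kA²/2—using arbitrary values of my own for the stiffness, mass, amplitude and phase:

```python
import numpy as np

# Arbitrary illustrative values for the spring constant, mass, amplitude and phase.
k_spring, m, A, D = 2.0, 0.5, 0.1, 0.3
w0 = np.sqrt(k_spring / m)                 # angular frequency (k/m)^(1/2)

t = np.linspace(0, 10, 1000)
x = A * np.cos(w0 * t + D)                 # displacement
v = -A * w0 * np.sin(w0 * t + D)           # velocity

total_energy = 0.5 * k_spring * x**2 + 0.5 * m * v**2
print(np.allclose(total_energy, 0.5 * k_spring * A**2))   # True: E = k·A²/2 at all times
```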

That’s simple enough. Let’s get back to our molecular oscillator. While the total energy of an oscillator in classical theory can take on any value, Planck challenged that assumption: according to quantum theory, it can only take up energies equal to ħω at a time. [Note that we use the so-called reduced Planck constant here (i.e. h-bar), because we’re dealing with angular frequencies.] Hence, according to quantum theory, we have an oscillator with equally spaced energy levels, and the difference between them is ħω. Now, ħω is terribly tiny—but it’s there. Let me visualize what I just wrote:

[Diagram: equally spaced oscillator energy levels, ħω apart]

So our expression for P1/P0 becomes P1/P0 = e^(−ħω/kT)/e^(−0/kT) = e^(−ħω/kT). More generally, we have Pi/P0 = e^(−i·ħω/kT). So what? Well… We’ve got a function here which gives the chance of finding a molecule in state Ei relative to that of finding it in state E0, and it’s a function of temperature. Now, the graph below illustrates the general shape of that function. It’s a bit peculiar, but you can see that the relative probability goes up as the temperature goes up, and down as it goes down. The graph makes it clear that, at extremely low temperatures, most particles will be in state E0 and, of course, the internal energy of our body of gas will be close to nil.

[Graph: relative probability of the higher energy states as a function of temperature]
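You can generate those relative probabilities yourself. The sketch below works with the dimensionless ratio ħω/kT, so no particular molecule is assumed, and it also adds up i·Pi to get the average number of energy quanta per oscillator—which is what the ‘freezing out’ discussion below is really about.

```python
import numpy as np

def mean_energy_over_hw(hw_over_kT, n_levels=200):
    """Average energy per oscillator (in units of hbar·w), using P_i/P_0 = exp(-i·hbar·w/kT)."""
    i = np.arange(n_levels)
    weights = np.exp(-i * hw_over_kT)      # relative probabilities P_i/P_0
    p = weights / weights.sum()            # normalized probabilities
    return (i * p).sum()

for hw_over_kT in (10.0, 2.0, 0.5, 0.1):
    print(f"hbar·w/kT = {hw_over_kT:4.1f}  ->  <E>/(hbar·w) = "
          f"{mean_energy_over_hw(hw_over_kT):6.3f}   (classical limit: {1/hw_over_kT:4.1f})")
```

At low temperature (large ħω/kT) the average energy is essentially zero—the oscillators are ‘frozen’—while at high temperature it approaches the classical value kT.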

Now, we can look at the oscillators in the bottom state (i.e. particles in the molecular energy state E0) as being effectively ‘frozen’: they don’t contribute to the specific heat. However, as we increase the temperature, our molecules gradually begin to have an appreciable probability to be in the second state, and then in the next state, and so on, and so the internal energy of the gas increases effectively. Now, when the probability is appreciable for many states, the quantized states become nearly indistinguishable and, hence, the situation is like classical physics: it is nearly indistinguishable from a continuum of energies.

Now, while you can imagine such analysis should explain why the specific heat ratio for oxygen and hydrogen varies as it does in the very first graph of this post, you can also imagine the details of that analysis fill quite a few pages! In fact, even Feynman doesn’t include it in his Lectures. What he does include is the analysis of the blackbody radiation problem, which is remarkably similar. So… Well… For more details on that, I’ll refer you to Feynman indeed. 🙂

I hope you appreciated this little ‘lecture’, as it sort of wraps up my ‘series’ of posts on statistical mechanics, thermodynamics and, central to both, the classical theory of gases. Have fun with it all!

Entropy, energy and enthalpy

Phew! I am quite happy I got through Feynman’s chapters on thermodynamics. Now is a good time to review the math behind it. We thoroughly understand the gas equation now:

PV = NkT = (γ–1)U

The gamma (γ) in this equation is the specific heat ratio: it’s 5/3 for ideal gases (so that’s about 1.667) and, theoretically, 4/3 ≈ 1.333 or 9/7 ≈ 1.286 for diatomic gases, depending on the degrees of freedom we associate with diatomic molecules. More complicated molecules have even more degrees of freedom and, hence, can absorb even more energy, so γ gets closer to one—according to the kinetic gas theory, that is. While we know that the kinetic gas theory is not quite accurate – an approach involving molecular energy states is a better match for reality – that doesn’t matter here. As for the term (specific heat ratio), I’ll explain that later. [I promise. 🙂 You’ll see it’s quite logical.]

The point to note is that this body of gas (or whatever substance) stores an amount of energy U that is directly proportional to the temperature (T), and Nk/(γ–1) is the constant of proportionality. We can also phrase it the other way around: the temperature is directly proportional to the energy, with (γ–1)/Nk the constant of proportionality. It means temperature and energy are in a linear relationship. [Yes, direct proportionality implies linearity.] The graph below shows the T = [(γ–1)/Nk]·U relationship for three different values of γ, ranging from 5/3 (i.e. the maximum value, which characterizes monatomic noble gases such as helium, neon or krypton) to a value close to 1, which is characteristic of more complicated molecular arrangements indeed, such as heptane (γ = 1.06) or methyl butane (γ = 1.08). The illustration shows that, unlike monatomic gas, more complicated molecular arrangements allow the gas to absorb a lot of (heat) energy with a relatively moderate rise in temperature only.

[Graph: temperature versus energy for three different values of γ]

We’ll soon encounter another variable, enthalpy (H), which is also linearly related to energy: H = γU. From a math point of view, these linear relationships don’t mean all that much: they just show these variables – temperature, energy and enthalpy – are all directly related and, hence, can be defined in terms of each other.

We can invent other variables, like the Gibbs energy, or the Helmholtz energy. In contrast, entropy, while often being mentioned as just some other state function, is something different altogether. In fact, the term ‘state function’ causes a lot of confusion: pressure and volume are state variables too. The term is used to distinguish these variables from so-called process functions, notably heat and work. Process functions describe how we go from one equilibrium state to another, as opposed to the state variables, which describe the equilibrium situation itself. Don’t worry too much about the distinction—for now, that is.

Let’s look at non-linear stuff. The PV = NkT = (γ–1)U equation says that pressure (P) and volume (V) are inversely proportional to one another, and so that’s a non-linear relationship. [Yes, inverse proportionality is non-linear.] To help you visualize things, I inserted a simple volume-pressure diagram below, which shows how pressure and volume are related for three different values of U (or, what amounts to the same, three different values of T).

[Graph: pressure versus volume for three different values of U]

The curves are simple hyperbolas which have the x- and y-axis as horizontal and vertical asymptote respectively. If you’ve studied social sciences (like me!) – so if you know a tiny little bit of the ‘dismal science’, i.e. economics (like me!) – you’ll note they look like indifference curves. The x- and y-axis then represent the quantity of some good X and some good Y respectively, and the curves closer to the origin are associated with lower utility. How much X and Y we will buy then, depends on (a) their price and (b) our budget, which we represented by a linear budget line tangent to the curve we can reach with our budget, and then we are a little bit happy, very happy or extremely happy, depending on our budget. Hence, our budget determines our happiness. From a math point of view, however, we can also look at it the other way around: our happiness determines our budget. [Now that‘s a nice one, isn’t it? Think about it! 🙂 And, in the process, think about hyperbolas too: the y = 1/x function holds the key to understanding both infinity and nothingness. :-)]

U is a state function but, as mentioned above, we’ve got quite a few state variables in physics. Entropy, of course, denoted by S—and enthalpy too, denoted by H. Let me remind you of the basics of the entropy concept:

  1. The internal energy U changes because (a) we add or remove some heat from the system (ΔQ), (b) because some work is being done (by the gas on its surroundings or the other way around), or (c) because of both. Using the differential notation, we write: dU = dQ – dW, always. The (differential) work that’s being done is PdV. Hence, we have dU = dQ – PdV.
  2. When transferring heat to a system at a certain temperature, there’s a quantity we refer to as the entropy. Remember that illustration of Feynman’s in my post on entropy: we go from one point to another on the temperature-volume diagram, taking infinitesimally small steps along the curve, and, at each step, an infinitesimal amount of work dW is done, and an infinitesimal amount of entropy dS = dQ/T is being delivered.
  3. The total change in entropy, ΔS, is a line integral: ΔS = ∫dQ/T = ∫dS.

That’s somewhat tougher to understand than economics, and so that’s why it took me more time to come to terms with it. 🙂 Just go through Feynman’s Lecture on it, or through that post I referenced above. If you don’t want to do that, then just note that, while entropy is a very mysterious concept, it’s deceptively simple from a math point of view: ΔS = ΔQ/T, so the (infinitesimal) change in entropy is, quite simply, the ratio of (1) the (infinitesimal or incremental) amount of heat that is being added or removed as the system goes from one state to another through a reversible process and (2) the temperature at which the heat is being transferred. However, I am not writing this post to discuss entropy once again. I am writing it to give you an idea of the math behind the system.
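To make the ΔS = ∫dQ/T line integral a bit more tangible, here’s a small numeric sketch for a reversible isothermal expansion of an ideal gas: at constant temperature dU = 0, so dQ = P·dV with P = NkT/V, and the sum of all the dQ/T steps should come out as Nk·ln(V2/V1). The step size, the particle number and the factor-of-two expansion are arbitrary choices of mine.

```python
import numpy as np

k, T, N = 1.380649e-23, 300.0, 1.0e22
V = np.linspace(1.0e-3, 2.0e-3, 100_001)   # volume doubles (m^3)
P = N * k * T / V                          # isothermal: PV = NkT

dQ = P[:-1] * np.diff(V)                   # dQ = P·dV (dU = 0 at constant T)
delta_S = np.sum(dQ / T)                   # ΔS = ∫ dQ/T along the path

print(delta_S, N * k * np.log(2))          # both ≈ N·k·ln(2)
```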

So dS = dQ/T. Hence, we can re-write dU = dQ – dW as:

dU = TdS – PdV ⇔ dU + d(PV) = TdS – PdV + d(PV)

⇔ d(U + PV) = dH = TdS – PdV + PdV + VdP = TdS + VdP

The U + PV quantity on the left-hand side of the equation is the so-called enthalpy of the system, which I mentioned above. It’s denoted by H indeed, and it’s just another state variable, like energy: same-same but different, as they say in Asia. We encountered it in our previous post also, where we said that chemists prefer to analyze the behavior of substances using temperature and pressure as ‘independent variables’, rather than temperature and volume. Independent variables? What does that mean, exactly?

According to the PV = NkT equation, we only have two independent variables: if we assign some value to two variables, we’ve got a value for the third one. Indeed, remember that other equation we got when we took the total differential of U. We wrote U as U(V, T) and, taking the total differential, we got:

dU = (∂U/∂T)dT + (∂U/∂V)dV

We did not need to add a (∂U/∂P)dP term, because the pressure is determined by the volume and the temperature. We could also have written U = U(P, T) and, therefore, that dU = (∂U/∂T)dT + (∂U/∂P)dP. However, when working with temperature and pressure as the ‘independent’ variables, it’s easier to work with H rather than U. The point to note is that it’s all quite flexible really: we have two independent variables in the system only. The third one (and all of the other variables really, like energy or enthalpy or whatever) depend on the other two. In other words, from a math point of view, we only have two degrees of freedom in the system here: only two variables are actually free to vary. 🙂

Let’s look at that dH = TdS + VdP equation. That’s a differential equation in which not temperature and pressure, but entropy (S) and pressure (P) are ‘independent’ variables, so we write:

dH(S, P) = TdS + VdP

Now, it is not very likely that we will have some problem to solve with data on entropy and pressure. At our level of understanding, any problem that’s likely to come our way will probably come with data on more common variables, such as the heat, the pressure, the temperature, and/or the volume. So we could continue with the expression above but we don’t do that. It makes more sense to re-write the expression substituting TdS for dQ once again, so we get:

dH = dQ + VdP

That resembles our dU = dQ – PdV expression: it just substitutes V for –P. And, yes, you guessed it: it’s because the two expressions resemble each other that we like to work with H now. 🙂 Indeed, we’re talking the same system and the same infinitesimal changes and, therefore, we can use all the formulas we derived already by just substituting H for U, V for –P, and dP for dV. Huh? Yes. It’s a rather tricky substitution. If we switch V for –P (or vice versa) in a partial derivative involving T, we also need to include the minus sign. However, we do not need to include the minus sign when substituting dV and dP, and we also don’t need to change the sign of the partial derivatives of U and H when going from one expression to another! It’s a subtle and somewhat weird point, but a very important one! I’ll explain it in a moment. Just continue to read as for now. Let’s do the substitution using our rules:

dU = (∂Q/∂T)_V·dT + [T·(∂P/∂T)_V − P]·dV becomes:

dH = (∂Q/∂T)_P·dT + (∂H/∂P)_T·dP = CP·dT + [–T·(∂V/∂T)_P + V]·dP

Note that, just as we referred to (∂Q/∂T)_V as the specific heat capacity of a substance at constant volume, which we denoted by CV, we now refer to (∂Q/∂T)_P as the specific heat capacity at constant pressure, which we’ll denote, logically, as CP. Dropping the subscripts of the partial derivatives, we re-write the expression above as:

dH = CPdT + [–T·(∂V/∂T) + V]dP

So we’ve got what we wanted: we switched from an expression involving derivatives assuming constant volume to an expression involving derivatives assuming constant pressure. [In case you wondered what we wanted, this is it: we wanted an equation that helps us to solve another type of problem—another formula for a problem involving a different set of data.]

As mentioned above, it’s good to use subscripts with the partial derivatives to emphasize what changes and what is constant when calculating those partial derivatives but, strictly speaking, it’s not necessary, and you will usually not find the subscripts when googling other texts. For example, in the Wikipedia article on enthalpy, you’ll find the expression written as:

dH = CPdT + V(1–αT)dP with α = (1/V)(∂V/∂T)

Just write it all out and you’ll find it’s the same thing, exactly. It just introduces another coefficient, α, i.e. the coefficient of (cubic) thermal expansion. If you find this formula is easier to remember, then please use this one. It doesn’t matter.

Now, let’s explain that funny business with the minus signs in the substitution. I’ll do so by going back to that infinitesimal analysis of the reversible cycle in my previous post, in which we had that formula involving ΔQ for the work done by the gas during an infinitesimally small reversible cycle: ΔW = ΔVΔP = ΔQ·(ΔT/T). Now, we can either write that as:

  1. ΔQ = T·(ΔP/ΔT)·ΔV, i.e. dQ = T·(∂P/∂T)_V·dV – which is what we did for our analysis of (∂U/∂V)_T – or, alternatively, as
  2. ΔQ = T·(ΔV/ΔT)·ΔP, i.e. dQ = T·(∂V/∂T)_P·dP, which is what we’ve got to do here, for our analysis of (∂H/∂P)_T.

Hence, dH = dQ + VdP becomes dH = T·(∂V/∂T)_P·dP + V·dP, and dividing all by dP gives us what we want to get: dH/dP = (∂H/∂P)_T = T·(∂V/∂T)_P + V.

[…] Well… NO! We don’t have the minus sign in front of T·(∂V/∂T)P, so we must have done something wrong or, else, that formula above is wrong.

The formula is right (it’s in Wikipedia, so it must be right :-)), so we are wrong. Indeed! The thing is: substituting dT, dV and dP for ΔT, ΔV and ΔP is somewhat tricky. The geometric analysis (illustrated below) makes sense but we need to watch the signs.

[Diagram: infinitesimally small reversible cycle in the pressure-volume plane]

We’ve got a volume increase, a temperature drop and, hence, also a pressure drop over the cycle: the volume goes from V to V+ΔV (and then back to V, of course), while the pressure and the temperature go from P to P–ΔP and T to T–ΔT respectively (and then back to P and T, of course). Hence, we should write: ΔV = dV, –ΔT = dT, and –ΔP = dP. Therefore, as we replace the ratio of the infinitesimal change of volume and temperature, ΔV/ΔT, by a proper derivative (i.e. (∂V/∂T)_P), we should add a minus sign: ΔV/ΔT = –(∂V/∂T)_P. Now that gives us what we want: dH/dP = (∂H/∂P)_T = –T·(∂V/∂T)_P + V, and, therefore, we can, indeed, write what we wrote above:

dU = (∂Q/∂T)_V·dT + [T·(∂P/∂T)_V − P]·dV becomes:

dH = (∂Q/∂T)_P·dT + [–T·(∂V/∂T)_P + V]·dP = CP·dT + [–T·(∂V/∂T)_P + V]·dP

Now, in case you still wonder: what’s the use of all these different expressions stating the same thing? The answer is simple: it depends on the problem and what information we have. Indeed, note that all derivatives we use in our expression for dH assume constant pressure, so if we’ve got that kind of data, we’ll use the chemists’ representation of the system. If we’ve got data describing performance at constant volume, we’ll need the physicists’ formulas, which are given in terms of derivatives assuming constant volume. It all looks complicated but, in the end, it’s the same thing: the PV = NkT equation gives us two ‘independent’ variables and one ‘dependent’ variable. Which one is which will determine our approach.

Now, we left one thing unexplained. Why do we refer to γ as the specific heat ratio? The answer is: it is the ratio of the specific heat capacities indeed, so we can write:

γ = CP/CV

However, it is important to note that that’s valid for ideal gases only. In that case, we know that the (∂U/∂V)_T derivative in our dU = (∂U/∂T)_V·dT + (∂U/∂V)_T·dV expression is zero: we can change the volume, but if the temperature remains the same, the internal energy remains the same. Hence, dU = (∂U/∂T)_V·dT = CV·dT, and dU/dT = CV. Likewise, the (∂H/∂P)_T derivative in our dH = (∂H/∂T)_P·dT + (∂H/∂P)_T·dP expression is zero—for ideal gases, that is. Hence, dH = (∂H/∂T)_P·dT = CP·dT, and dH/dT = CP. Hence,

CP/CV = (dH/dT)/(dU/dT) = dH/dU

Does that make sense? If dH/dU = γ, then H must be some linear function of U. More specifically, H must be some function H = γU + c, with c some constant (it’s the so-called constant of integration). Now, γ is supposed to be constant too, of course. That’s all perfectly fine: indeed, combining the definition of H (H = U + PV), and using the PV = (γ–1)U relation, we have H = U + (γ–1)U = γU (hence, c = 0). So, yes, dH/dU = γ, and γ = CP/CV.
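If you have sympy at hand, you can let it do the checking. The sketch below takes V = NkT/P and verifies (i) that the dP coefficient V − T·(∂V/∂T)_P vanishes, i.e. (∂H/∂P)_T = 0 for an ideal gas, and (ii) that H = U + PV = γU when PV = (γ−1)U:

```python
import sympy as sp

N, k, T, P, gamma = sp.symbols('N k T P gamma', positive=True)

V = N * k * T / P                              # ideal gas: V = NkT/P
# The dP coefficient in dH = CP·dT + [V - T·(∂V/∂T)_P]·dP vanishes:
print(sp.simplify(V - T * sp.diff(V, T)))      # 0, i.e. (∂H/∂P)_T = 0

# H = U + PV with U = PV/(gamma - 1) gives H = gamma·U, so dH/dU = gamma = CP/CV:
U = P * V / (gamma - 1)
H = U + P * V
print(sp.simplify(H / U))                      # gamma
```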

Note the qualifier, however: we’re assuming γ is constant (which does not imply the gas has to be ideal, so the interpretation is less restrictive than you might think it is). If γ is not a constant, it’s a different ballgame. […] So… Is γ actually constant? The illustration below shows γ is not constant for common diatomic gases like hydrogen or (somewhat less common) oxygen. It’s the same for other gases: when mentioning γ, we need to state the temperature at which we measured it too. 😦  However, the illustration also shows the assumption of γ being constant holds fairly well if temperature varies only slightly (like plus or minus 100° C), so that’s OK. 🙂

[Graph: measured specific heat ratio versus temperature for hydrogen and oxygen]

I told you so: the kinetic gas theory is not quite accurate. An approach involving molecular energy states works much better (and is actually correct, as it’s consistent with quantum theory). But so we are where we are and I’ll save the quantum-theoretical approach for later. 🙂

So… What’s left? Well… If you’d google the Wikipedia article on enthalpy in order to check if I am not writing nonsense, you’ll find it gives γ as the ratio of H and U itself: γ = H/U. That’s not wrong, obviously (γ = H/U = γU/U = γ), but that formula doesn’t really explain why γ is referred to as the specific heat ratio, which is what I wanted to do here.

OK. We’ve covered a lot of ground, but let’s reflect some more. We did not say a lot about entropy, and/or the relation between energy and entropy. Too bad… The relationship between entropy and energy is obviously not so simple as between enthalpy and energy. Indeed, because of that easy H = γU relationship, enthalpy emerges as just some auxiliary variable: some temporary variable we need to calculate something. Entropy is, obviously, something different. Unlike enthalpy, entropy involves very complicated thinking, involving (ir)reversibility and all that. So it’s quite deep, I’d say – but I’ll write more about that later. I think this post has gone as far as it should. 🙂

Is gas a reversible engine?

We’ve worked on very complicated matters in the previous posts. In this post, I am going to tie up a few loose ends, not only about the question in the title but also other things. Let me first review a few concepts and constructs.

Temperature

We’ve talked a lot about temperature, but what is it really? You have an answer ready of course: it is the mean kinetic energy of the molecules of a gas or a substance. You’re right. To be precise, it is the mean kinetic energy of the center-of-mass (CM) motions of the gas molecules.

The added precision in the definition above already points out temperature is not just mean kinetic energy or, to put it differently, that the concept of mean kinetic energy itself is not so simple when we are not talking ideal gases. So let’s be precise indeed. First, let me jot down the formula for the mean kinetic energy of the CM motions of the gas particles:

(K.E.)CM = <(1/2)·m·v²>

Now let’s recall the most fundamental law in the kinetic theory of gases, which states that the mean value of the kinetic energy for each independent direction of motion will be equal to kT/2. [I know you know the kinetic theory of gases itself is not accurate – we should be talking about molecular energy states – but let’s go along with it.] Now, because we have only three independent directions of motions (the x, y and z directions) for ideal gas molecules (or atoms, I should say), the mean kinetic energy of the gas particles is kT/2 + kT/2 + kT/2 = 3kT/2.

What’s going on here is that we are actually defining temperature here: we basically say that the kinetic energy is linearly proportional to something that we define as the temperature. For practical reasons, that constant of proportionality is written as 3k/2, with k the Boltzmann constant. So we write our definition of temperature as:

(K.E.)CM = 3kT/2 ⇔ T = (3k/2)^(−1)·<(1/2)·m·v²> = [2/(3k)]·(K.E.)CM
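As a quick illustration of that definition, the root-mean-square speed follows directly from (1/2)·m·&lt;v²&gt; = (3/2)·kT. Helium at 300 K is just an arbitrary example of mine:

```python
import math

k = 1.380649e-23            # Boltzmann constant (J/K)
m_He = 4 * 1.66054e-27      # mass of a helium-4 atom (kg)

def v_rms(T, m):
    """Root-mean-square speed from (1/2)·m·<v²> = (3/2)·k·T."""
    return math.sqrt(3 * k * T / m)

print(f"{v_rms(300, m_He):.0f} m/s")   # ≈ 1370 m/s for helium at room temperature
```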

What happens with temperature when considering more complex gases, such as diatomic gases? Nothing. The temperature will still be proportional to the kinetic energy of the center-of-mass motions, but we should just note it’s the (K.E.)CM of the whole diatomic molecule, not of the individual atoms. The thing with more complicated arrangements is that, when adding or removing heat, we’ve got something else going on too: part of the energy will go into the rotational and vibratory motions inside the molecule, which is why we’ll need to add a lot more heat in order to achieve the same change in temperature or, vice versa, we’ll be able to extract a lot more heat out of the gas – as compared to an ideal gas, that is – for the same drop in temperature. [When talking molecular energy states, rather than independent directions of motions, we’re saying the same thing: energy does not only go in center-of-mass motion but somewhere else too.]

You know the ideal gas law is based on the reasoning above and the PV = NkT equation, which is always valid. For ideal gases, we write:

PV = NkT = Nk·(3k/2)^(−1)·<(1/2)·m·v²> = (2/3)·N·<(1/2)·m·v²> = (2/3)·U

For diatomic gases, we have to use another coefficient. According to our theory above, which distinguishes 6 independent directions of motion, the mean kinetic energy is twice 3kT/2 now, so that’s 3kT, and, hence, we write T = (3k)^(−1)·<K.E.>, so that:

PV = NkT = Nk·(3k)^(−1)·<K.E.> = (1/3)·U

The two equations above will usually be written as PV = (γ–1)U, so γ, which is referred to as the specific heat ratio, would be equal to 5/3 ≈ 1.67 for ideal gases and 4/3 ≈ 1.33 for diatomic gases. [If you read my previous posts, you’ll note I used 9/7 ≈ 1.286, but that’s because Feynman suddenly decides to add the potential energy of the oscillator as another ‘independent direction of motion’.]

Now, if we’re not adding or removing heat to/from the gas, we can do a differential analysis yielding a differential equation (what did you expect?), which we can then integrate to find that P = C/V^γ relationship. You’ve surely seen it before. The C is some constant related to the energy and/or the state of the gas. It is actually interesting to plot the pressure-volume relationship using that P = C/V^γ relationship for various values of γ. The blue graph below assumes γ = 5/3 ≈ 1.667, which is the theoretical value for ideal gases (γ for helium or krypton comes pretty close to that), while the red graph gives the same relationship for γ = 4/3 ≈ 1.33, which is the theoretical value for diatomic gases (gases like bromine and iodine have a γ that’s close to that).

[Graph: adiabats P = C/V^γ for γ = 5/3 and γ = 4/3]

Let me repeat that this P = C/V^γ relationship is only valid for adiabatic expansion or compression: we do not add or remove heat and, hence, this P = C/V^γ function gives us the adiabatic segments only in a Carnot cycle (i.e. the adiabatic lines in a pressure-volume diagram). Now, it is interesting to observe that the slope of the adiabatic line for the ideal gas is more negative than the slope of the adiabatic line for the diatomic gas: the blue curve is the steeper one. That’s logical: for the same volume change, we should get a bigger drop in pressure for the ideal gas, as compared to the diatomic gas, because… Well… You see the logic, don’t you?
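To see the difference in numbers rather than slopes, the sketch below starts both adiabats from the same point (P = 1, V = 1 in arbitrary units of my own choosing) and doubles the volume:

```python
# Adiabats P = C/V^gamma, starting from the same point (P = 1, V = 1).
for gamma in (5/3, 4/3):
    C = 1.0 * 1.0**gamma                 # fix the constant from the starting point
    P_final = C / 2.0**gamma             # pressure after the volume doubles
    print(f"gamma = {gamma:.3f}:  P drops from 1.000 to {P_final:.3f}")
```

The γ = 5/3 curve indeed drops further for the same volume change, which is why it is the steeper of the two.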

Let’s freewheel a bit and see what it implies for our Carnot cycle.

Carnot engines with ideal and non-ideal gas

We know that, if we could build an ideal frictionless gas engine (using a cylinder with a piston or whatever other device we can think of), its efficiency will be determined by the amount of work it can do over a so-called Carnot cycle, which consists of four steps: (1) isothermal expansion (gas absorbs heat and the volume expands at constant temperature), (2) adiabatic expansion (the volume expands while the temperature drops), (3) isothermal compression (the volume decreases at constant temperature, so heat is taken out), and (4) adiabatic compression (the volume decreases as we bring the gas back to its original temperature).

[Diagram: the Carnot cycle in the pressure-volume plane]

It is important to note that work is being done, by the gas on its surroundings, or by the surroundings on the gas, during each step of the cycle: work is being done by the gas as it expands, always, and work is done on the gas as it is being compressed, always.

You also know that there is only one Carnot efficiency, which is defined as the ratio of (a) the net amount of work we get out of our machine in one such cycle, which we’ll denote by W, and (b) the amount of heat we have to put in to get it (Q1). We’ve also shown that W is equal to the difference between the heat we put in during the first step (isothermal expansion) and the heat that’s taken out in the third step (isothermal compression): W = Q1 − Q2, which basically means that the difference between the heat that goes in and the heat that comes out is fully converted into useful work, with nothing lost to friction—which is why it’s an efficient engine! We also know that the formula for the efficiency is given by:

W/Q1 = (T1 − T2)/T1.

Where’s Q2 in this formula? It’s there, implicitly, as the efficiency of the engine depends on T2. In fact, that’s the crux of the matter: for efficient engines, we also have the same Q1/T1 = Q2/T2 ratio, which we define as the entropy S = Q1/T1 = Q2/T2. We’ll come back to this.
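Putting some arbitrary numbers on those relations—two reservoir temperatures and a heat input of my own choosing—makes them easy to remember:

```python
# Carnot relations: efficiency W/Q1 = (T1 - T2)/T1, and Q1/T1 = Q2/T2.
T1, T2 = 400.0, 300.0      # hot and cold reservoir temperatures (K)
Q1 = 1000.0                # heat taken in during isothermal expansion (J)

efficiency = (T1 - T2) / T1
W = efficiency * Q1        # net work per cycle
Q2 = Q1 - W                # heat rejected during isothermal compression

print(efficiency)          # 0.25
print(W, Q2)               # 250.0 J of work, 750.0 J rejected
print(Q1 / T1, Q2 / T2)    # both 2.5 J/K — the 'entropy' S mentioned above
```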

Now how does it work for non-ideal gases? Can we build an equally efficient engine with actual gases? This was, in fact, Carnot’s original question, and we haven’t really answered it in our previous posts, because we weren’t quite ready for it. Let’s consider the various elements to the answer:

  1. Because we defined temperature the way we defined it, it is obvious that the gas law PV = NkT still holds for diatomic gases, or whatever gas (such as steam vapor, for example, the stuff which was used in Carnot’s time). Hence, the isothermal lines in our pressure-volume diagrams don’t change. For a given temperature T, we’ll have the same green and red isothermal line in the diagram above.
  2. However, the adiabatic lines (i.e. the blue and purple lines in the diagram above) for the non-ideal gas are much flatter than the ones for an ideal gas. Now, just take that diagram and draw two flatter curves through points a and c indeed—but not as flat as the isothermal segments, of course! What you’ll notice is that the area of useful work becomes much smaller.

What does that imply in terms of efficiency? Well… Also consider the areas under the graph which, as you know, represent the amount of work done during each step (and you really need to draw the graph here, otherwise you won’t be able to follow my argument):

  1. The phase of isothermal expansion will be associated with a smaller volume change, because our adiabatic line for the diatomic gas intersects the T = T1 isothermal line at a smaller value for V. Hence, less work is being done during that stage.
  2. However, more work will be done during adiabatic expansion, and the associated volume change is also larger.
  3. The isothermal compression phase is also associated with a smaller volume change, because our adiabatic line for the diatomic gas intersects the T = T2 isothermal line at a larger value for V.
  4. Finally, adiabatic compression requires more work to be done to get from T2 to T1 again, and the associated volume change is also larger.

The net result is clear from the graph: the net amount of work that’s being done over the complete cycle is less for our non-ideal gas than for our engine working with ideal gas. But, again, the question here is what this implies in terms of efficiency. What about the W/Q1 ratio?

The problem is that we cannot see how much heat is being put in (Q1) and how much heat is being taken out (Q2) from the graph. The only thing we know is that we have an engine working here between the same temperatures T1 and T2. Hence, if we use subscript A for the ideal gas engine and subscript B for the one working with ordinary (i.e. non-ideal) gas, and if both engines are to have the same efficiency W/Q1 = WA/Q1A = WB/Q1B, then it’s obvious that,

if WA > WB, then Q1A > Q1B.

Is that consistent with what we wrote above for each of the four steps? It is. Heat energy is taken in during the first step only, as the gas expands isothermally. Now, because the temperature stays the same, there is no change in internal energy, and that includes no change in the internal vibrational and rotational energy. All of the heat energy is converted into work. Now, because the volume change is less, the work will be less and, hence, the heat that’s taken in must also be less. The same goes for the heat that’s being taken out during the third step, i.e. the isothermal compression stage: we’ve got a smaller volume change here and, hence, the surroundings of the gas do less work, and a lesser amount of heat energy is taken out.

So what’s the grand conclusion? It’s that we can build an ideal (i.e. reversible) engine working between the same temperatures T1 and T2, and with exactly the same efficiency W/Q1 = (T1 − T2)/T1, using non-ideal gas. Of course, there must be some difference! You’re right: there is. While the ordinary gas machine will be as efficient as the ideal gas machine, it will not do the same amount of work. The key to understanding this is to remember that efficiency is a ratio, not some absolute number. Let’s go through it. Because their efficiency is the same, we know that the W/Q1 ratios for both engines (A and B) are the same and, hence, we can write:

WA/WB = Q1A/Q1B

What about the entropy? The entropy S = Q1A/T1 = Q2A/T2 is not the same for both machines. For example, if the engine with ideal gas (A) does twice the work of the engine with ordinary gas (B), then Q1A will also be twice the amount Q1B. Indeed, SA = Q1A/T1 and SB = Q1B/T1. Hence, SA/SB = Q1A/Q1B. For example, if Q1A = 2·Q1B, then engine A’s entropy will also be twice that of engine B. [Now that we’re here, I should also note that you’ll have the same ratio for Q2A and Q2B. Indeed, we know that, for an efficient machine, we have Q1/T1 = Q2/T2. Hence, Q1A/Q2A = T1/T2 and Q1B/Q2B = T1/T2. So Q1A/Q2A = Q1B/Q2B and, therefore, Q1A/Q1B = Q2A/Q2B.]
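To see all of this with actual numbers, here is a little sketch of my own: it runs both engines between the same temperatures and through the same points a and c, taking γ = 5/3 for the ideal (monatomic) gas, assuming γ = 7/5 for the diatomic gas, and setting Nk to one. The point values are arbitrary.

```python
import math

# A numerical sketch of the comparison above (my own example, Nk set to 1):
# both engines run between the same temperatures T1 and T2 and share the points
# a and c, but their adiabats differ because gamma differs (5/3 vs 7/5).

def carnot_cycle(gamma, T1=400.0, T2=300.0, Va=1.0, Vc=8.0, Nk=1.0):
    # Adiabat through c meets the T1 isotherm at Vb; adiabat through a meets T2 at Vd.
    Vb = Vc * (T2 / T1) ** (1.0 / (gamma - 1.0))
    Vd = Va * (T1 / T2) ** (1.0 / (gamma - 1.0))
    Q1 = Nk * T1 * math.log(Vb / Va)     # heat in during isothermal expansion
    Q2 = Nk * T2 * math.log(Vc / Vd)     # heat out during isothermal compression
    W = Q1 - Q2
    return Q1, Q2, W, W / Q1, Q1 / T1    # last two: efficiency and entropy S = Q1/T1

for label, gamma in (("monatomic (A)", 5/3), ("diatomic  (B)", 7/5)):
    Q1, Q2, W, eff, S = carnot_cycle(gamma)
    print(f"{label}: Q1 = {Q1:6.1f}, W = {W:6.1f}, efficiency = {eff:.3f}, S = {S:.3f}")
# Same efficiency (1 - T2/T1 = 0.25) for both, but the monatomic engine does more
# work, takes in more heat, and has the larger entropy S = Q1/T1, as argued above.
```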

Why would the entropy be any different? We’ve got the same number of particles, the same volume and the same working temperatures, and so the only difference is that the particles in engine B are diatomic: the molecules consist of two atoms, rather than one only. An intuitive answer to the question as to why the entropy is different can be given by comparing it to another example, which I mentioned in a previous post, for which the entropy is also different for some non-obvious reason. Indeed, we can think of the two atoms as the equivalent of the white and black particles in the box (see my previous post on entropy): if we allow the white and black particles to mix in the same volume, rather than separate them in two compartments, then the entropy goes up (we calculated the increase as equal to k·ln2). Likewise, the entropy is much lower if all particles have to come in pairs, which is the case for a diatomic gas. Indeed, if they have to come in pairs, we significantly reduce the number of ways all particles can be arranged, or the ‘disorder’, so to say. As the entropy is a measure of that number (one can loosely define entropy as the logarithm of the number of ways), the entropy must go down as well. Can we illustrate that using the ΔS = Nkln(V2/V1) formula we introduced in our previous post, or our more general S(V, T) = Nk[lnV + (1/(γ–1))·lnT] + a formula? Maybe. Let’s give it a try.

We know that our diatomic molecules have an average kinetic energy equal to 3kT/2. Well… Sorry. I should be precise: that’s the kinetic energy of their center-of-mass motion only! Now, let us suppose all our diatomic molecules split up. We know the average kinetic energy of the constituent parts will also equal 3kT/2. Indeed, if a gas molecule consists of two atoms (let’s just call them atom A and B respectively), and if their combined mass is M = mA + mB, we know that:

<mAvA2/2> = <mBvB2/2> = <MvCM2/2> = 3kT/2

Hence, if they split, we’ll have twice the number of particles (2N) in the same volume with the same average kinetic energy: 3kT/2. Hence, we double the energy, but the average kinetic energy of the particles is the same, so the temperature should be the same. Hmm… You already feel something is wrong here… What about the energy that we associated with the internal motions within the molecule, i.e. the internal rotational and vibratory motions of the atoms, when they were still part of the same molecule? That was also equal to 3kT/2, wasn’t it? It was. Yes. In case you forgot why, let me remind you: the total energy is the sum of the (average) kinetic energy of the two atoms, so that’s <mAvA2/2> + <mBvB2/2> = 3kT/2 + 3kT/2 = 3kT. Now, that sum is also equal to the sum of the center-of-mass motion (which is 3kT/2) and the average kinetic energy of the rotational and vibratory motions. Hence, the average kinetic energy of the rotational and vibratory motions is 3kT – 3kT/2 = 3kT/2. It’s all part of the same theorem: the average kinetic energy for each independent direction of motion is kT/2, and the number of degrees of freedom for a molecule consisting of r atoms is 3r (so six for our diatomic molecule), because each atom can move in three directions. Three of those show up as the center-of-mass motion, rotation involves another two independent motions (in three dimensions, we’ve got two axes of rotation only), and vibration the remaining one. So the kinetic energy going into rotation is kT/2 + kT/2 = kT and for vibration it’s kT/2. Adding all yields 3kT/2 + kT + kT/2 = 3kT.
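The bookkeeping is easy enough to verify (a trivial tally of my own, with kT as the unit of energy):

```python
# A trivial bookkeeping check of the equipartition count above (kT taken as the unit):
kT = 1.0
translation = 3 * kT / 2      # center-of-mass motion: 3 degrees of freedom
rotation    = 2 * kT / 2      # two axes of rotation
vibration   = 1 * kT / 2      # one vibrational (kinetic) degree of freedom
total = translation + rotation + vibration
print(total)                  # 3.0, i.e. 3kT per diatomic molecule
print(2 * (3 * kT / 2))       # the same 3kT, counted as the two atoms' kinetic energies
```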

The arithmetic is quite tricky. Indeed, you may think that, if we split the molecule, the rotational and vibratory energy has to go somewhere, and that it is only natural to assume that, when we split the diatomic molecule, the individual atoms have to absorb it. Hence, you may think that the temperature of the gas will be higher. How much higher? We had an average energy of 3kT per molecule in the diatomic situation, but so now we have twice as many particles, and hence, the average energy per particle now is… Re-read what I wrote above: it’s just 3kT/2 again. The energy that’s associated with the center-of-mass motions and the rotational and vibratory motions is not something extra: it’s part of the average kinetic energy of the atoms themselves. So no rise in temperature!

Having said that, our PV = NkT = (2/3)U equation obviously doesn’t make any sense anymore, as we’ve got twice as many particles now. While the temperature has not gone up, both the internal energy and the pressure have doubled, as we’ve got twice as many particles hitting the walls of our cylinder now. To restore the pressure to its ex ante value, we need to increase the volume: with 2N particles at the same temperature, P = 2NkT/V tells us we need to double it. Remember, also, that pressure is force per unit surface area, not per volume unit: P = F/A, and so the surface area we get for a given volume depends on the shape of that volume: are we thinking of a box or of some sphere? One thing we know though: if we calculate the volume using some radius r, which may also be the length of the edge of a cube, then we know the volume is going to be proportional to r3, while the surface area is going to be proportional to r2. Hence, the surface area is proportional to the volume raised to the power 2/3: A ∝ r2 = (r3)2/3 ∝ V2/3. So that’s another 2/3 ratio which pops up here, as an exponent this time. It’s not a coincidence, obviously.

Hmm… Interesting exercise. I’ll let you work it out. I am sure you’ll find some sensible value for the new volume, so you should be able to use that ΔS = Nkln(V2/V1) formula. However, you also need to think about the comparability of the two situations. We wanted to compare two equal volumes with an equal number of particles (diatomic molecules versus atoms), and so you’ll need to move back in that direction to get a final answer to your question. Please do mail me the answer: I hope it makes sense. 🙂

Inefficient engines

When trying to understand efficient engines, it’s interesting to also imagine how inefficient engines work, so as to see what they imply for our Carnot diagram. Suppose we’ve tried to build a Carnot engine in our kitchen, and we end up with one that is fairly frictionless, and fairly well isolated, so there is little heat loss during the heat transfer steps. We also have good contact surfaces so we think the heat transfer processes will also be fairly frictionless, so to speak. So we did our calculations and built the engine using the best kitchen design and engineering practices. Now it’s time for the test. Will it work?

What might happen is the following: while we’ve designed the engine to get some net amount of work out of it (in each and every cycle) that is given by the isothermal and adiabatic lines below, we may find that we’re not able to keep the temperature constant. So we try to follow the green isothermal line alright, but we can’t. We may also find that, when our heat counter tells us we’ve put Q1 in already, our piston hasn’t moved out quite as far as we thought it would. So… Damn, we’re never going to get to c. What’s the reason? Some heat loss, because our isolation wasn’t perfect, and friction.

Inefficient engine

So we’re likely to have followed an actual path that’s closer to the red arrow, which brings us near point d. So we’ve missed point c. We have no choice, however: the temperature has dropped to T2 and, hence, we need to start with the next step. Which one? The second? The third? It’s not quite clear, because our actual path on the pressure-volume diagram doesn’t follow any of our ideal isothermal or adiabatic lines. What to do? Let’s just take some heat out and start compressing to see what happens. If we’ve followed a path like the red arrow, we’re likely to be on something like the black arrow now. Indeed, if we’ve got a problem with friction or heat loss, we’ll continue to have that problem, and so the temperature will drop much faster than we think it should, and so we will not have the expected volume decrease. In fact, we’re not able to maintain the temperature even at T2. What horror! We can’t repeat our process and, hence, it is surely not reversible! All our work for nothing! We have to start all over and re-examine our design.

So our kitchen machine goes nowhere. But then how do actual engines work? The answer is: they put much more heat in, and they also take much more heat out. More importantly, they’re also working much below the theoretical efficiency of an ideal engine, just like our kitchen machine. So that’s why we’ve got the valves and all that in a steam engine. Also note that a car engine works entirely differently: it converts chemical energy into heat energy by burning fuel inside of the cylinder. Do we get any useful work out? Of course! My Lamborghini is fantastic. 🙂 Is it efficient? Nope. We’re converting huge amounts of heat energy into a very limited amount of useful work, i.e. the type of energy we need to drive the wheels of my car, or a dynamo. Actual engines are a shadow only of ideal engines. So what’s the Carnot cycle really? What does it mean in practice? Does the mathematical model have any relevance at all?

The Carnot cycle revisited

Let’s look at those differential equations once again. [Don’t be scared by the concept of a differential equation. I’ll come back to it. Just keep reading.] Let’s start with the ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV equation, which mathematical purists would probably prefer to write as:

dU = (∂U/∂T)dT + (∂U/∂V)dV

I find Feynman’s use of the Δ symbol more appropriate, because, when dividing by dV or dT, we get dU/dV and dU/dT, which makes us think we’re dealing with ordinary derivatives here, and we are not: it’s partial derivatives that matter here. [I’ll illustrate the usefulness of distinguishing the Δ and d symbol in a moment.] Feynman is even more explicit about that as he uses subscripts for the partial derivatives, so he writes the equation above as:

ΔU = (∂U/∂T)V·ΔT + (∂U/∂V)T·ΔV

However, partial derivatives always assume the other variables are kept constant and, hence, the subscript is not needed. It makes the notation rather cumbersome and, hence, I think it makes the analysis even more unreadable than it already is. In any case, it is obvious that we’re looking at a situation in which all changes: the volume, the temperature and the pressure. However, in the PV = NkT equation (which, I repeat, is valid for all gases, ideal or not, and in all situations, be it adiabatic or isothermal expansion or compression), we have only two independent variables for a given number of particles. We can choose: volume and temperature, or pressure and temperature, or volume and pressure. The third variable depends on the two other variables and, hence, is referred to as dependent. Now, one should not attach too much importance to the terms (dependent or independent does not mean more or less fundamental) but, when everything is said and done, we need to make a choice when approaching the problem. In physics, we usually look at the volume and the temperature as the ‘independent’ variables but the partial derivative notation makes it clear it doesn’t matter. With three variables, we’ll have three partial derivatives: ∂P/∂T, ∂V/∂T and ∂P/∂V, and their reciprocals ∂T/∂P, ∂T/∂V and ∂V/∂P too, of course!

Having said that, when calculating the value of derived variables like energy, or entropy, or enthalpy (which is a state variable used in chemistry), we’ll use two out of the three mentioned variables only, because the third one is redundant, so to speak. So we’ll have some formula for the internal energy of a gas that depends on temperature and volume only, so we write:

U = U(V, T)

Now, in physics, one will often only have a so-called differential equation for a variable, i.e. something that is written in terms of differentials and derivatives, so we’ll do that here too. But let me give some other example first. You may or may not remember that we had this differential equation telling us how the density (n = N/V) of the atmosphere changes with the height (h), as a function of the molecular mass (m), the temperature (T) and the density (n) itself: dn/dh = –(mg/kT)·n, with g the gravitational acceleration and k the Boltzmann constant. Now, it is not always easy to go from a differential equation to a proper formula, but this one can be solved rather easily. Indeed, a function which has a derivative that is proportional to itself (that’s what this differential equation says really) is an exponential, and the solution was n = n0e–mgh/kT, with n0 some other constant (the density at h = 0, which can be chosen anywhere). This explicit formula for n says that the density goes down exponentially with height, which is what we would expect.
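If you want to check that solution, here is a minimal numerical sketch: it integrates dn/dh = –(mg/kT)·n with a simple Euler scheme and compares the result with the exponential formula. The molecular mass and the temperature are just illustrative values I picked.

```python
import math

# A minimal numerical check (illustrative numbers of my own) that the differential
# equation dn/dh = -(mg/kT)*n is indeed solved by n = n0*exp(-mgh/kT).

m = 4.8e-26       # mass of a nitrogen-like molecule, kg (illustrative value)
g = 9.81          # gravitational acceleration, m/s^2
k = 1.38e-23      # Boltzmann constant, J/K
T = 300.0         # temperature, K
n0 = 1.0          # density at h = 0 (arbitrary units)

coeff = m * g / (k * T)

# Simple Euler integration of dn/dh = -coeff * n
n, h, dh = n0, 0.0, 1.0
while h < 8000.0:
    n += -coeff * n * dh
    h += dh

print(f"numerical  n(8 km) = {n:.4f}")
print(f"analytical n(8 km) = {n0 * math.exp(-coeff * 8000.0):.4f}")
# The two agree closely, confirming the exponential solution.
```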

Let’s get back to our gas though. We also have differentials here, which are infinitesimally small changes in variables. As mentioned above, we prefer to write them with a Δ in front (rather than using the d symbol)—i.e. we write ΔT, ΔU, ΔV, or ΔQ. When we have two variables only, say x and y, we can use the d symbol itself and, hence, write Δx and Δy as dx and dy. However, it’s still useful to distinguish, in order to write something like this:

Δy = (dy/dx)Δx

This says we can approximate the change in y at some point x when we know the derivative there. For a function in two variables, we can write the same, which is what we did at the very start of this analysis:

ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV

Note that the first term assumes constant volume (because of the ∂U/∂T derivative), while the second assumes constant temperature (because of the ∂U/∂V derivative).

Now, we also have a second equation for ΔU, expressed in differentials only (so no partial derivatives here):

ΔU = ΔQ – PΔV

This equation basically states that the internal energy of a gas can change because (a) some heat is added or removed or (b) some work is being done by or on the gas as its volume gets bigger or smaller. Note the minus sign in front of PΔV: it’s there to ensure the signs come out alright. For example, when compressing the gas (so ΔV is negative), the –PΔV term will be positive: the work done on the gas adds to its internal energy. Conversely, when letting the gas expand (so ΔV is positive), –PΔV will be negative, as it should be: the gas gives up some internal energy as it does work on its surroundings.

What’s the relation between these two equations? Both are valid, but you should surely not think that, just because we have a ΔV in the second term of each equation, we can write –P = ∂U/∂V. No.

Having said that, let’s look at the first term of the ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV equation and analyze it using the ΔU = ΔQ – PΔV equation. We know (∂U/∂T)ΔT assumes we keep the volume constant, so ΔV = 0 and, hence, ΔU = ΔQ: all the heat goes into changing the internal energy; none goes into doing some work. Therefore, we can write:

(∂U/∂T)ΔT = (∂Q/∂T)ΔT = CVΔT

You already know that we’ve got a name for that CV function (remember: a derivative is a function too!): it’s the (specific) heat capacity of the gas (or whatever substance) at constant volume. For ideal gases, CV is some constant but, remember, we’re not limiting ourselves to analyzing ideal gases only here!

So we’re done with the first term in that ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV. Now it’s time for the second one: (∂U/∂V)ΔV. Now both ΔQ and –PΔV are relevant: the internal energy changes because (a) some heat is being added and (b) because the volume changes and, hence, some work is being done. You know what we need to find. It’s that weird formula:

∂U/∂V = T(∂P/∂T) – P

But how do we get there? We can visualize what’s going on as a tiny Carnot cycle. So we think of gas as an ideal engine itself: we put some heat in (ΔQ) which gets an isothermal expansion at temperature T going, during a tiny little instant, doing a little bit of work. But then we stop adding heat and, hence, we’ll have some tiny little adiabatic expansion, during which the gas keeps going and also does a tiny amount of work as it pushes against the surrounding gas molecules. However, this step involves an infinitesimally small temperature drop—just a little bit, to T–ΔT. And then the surrounding gas will start pushing back and, hence, we’ve got some isothermal compression going, at temperature T–ΔT, which is then followed, once again, by adiabatic compression as the temperature goes back to T. The last two steps involve the surroundings of the tiny little volume of gas we’re looking at, doing work on the gas, instead of the other way around.

Carnot 2 equivalence

You’ll say this sounds very fishy. It does, but it is Feynman’s analysis, so who am I to doubt it? You’ll ask: where does the heat go, and where does the work go? Indeed, if ΔQ is Q1, what about Q2? Also, we can sort of imagine that the gas can sort of store the energy of the work that’s being done during step 1 and 2, to then give (most of it) back during step 3 and 4, but what about the net work that’s being done in this cycle, which is (see the diagram) equal to W = Q1 – Q2 = ΔPΔV? Where does that go? In some kind of flywheel or something? Obviously not! Hmm… Not sure. In any case, Q1 is infinitesimally small and, hence, nearing zero. Q2 is even smaller, so perhaps we should equate it to zero and just forget about it. As for the net work done by the cycle, perhaps this may just go into moving the gas molecules in the equally tiny volume of gas we’re looking at. Hence, perhaps there’s nothing left to be transferred to the surrounding gas. In short, perhaps we should look at ΔQ as the energy that’s needed to do just one cycle.

Well… No. If gas is an ideal engine, we’re talking elastic collisions and, hence, it’s not like a transient, like something that peters out. The energy has to go somewhere—and it will. The tiny little volume we’re looking at will come back to its original state, as it should, because we’re looking at (∂U/∂V)ΔV, which implies we’re doing an analysis at constant temperature, but the energy we put in has got to go somewhere: even if Q2 is zero, and all of ΔQ goes into work, it’s still energy that has to go somewhere!

It does go somewhere, of course! It goes into the internal energy of the gas we’re looking at. It adds to the kinetic energy of the surrounding gas molecules. The thing is: when doing such infinitesimal analysis, it becomes difficult to imagine the physics behind it. All is blurred. Indeed, if we’re talking a very small volume of gas, we’re talking a limited number of particles also and, hence, these particles doing work on other gas particles, or these particles getting warmer or colder as they collide with the surrounding body of gas, it all becomes more or less the same. To put it simply: they’re more likely to follow the direction of the red and black arrows in our diagram above. So, yes, the theoretical analysis is what it is: a mathematical idealization, and so we shouldn’t think that’s what’s actually going on in a gas—even if Feynman tries to think of it in that way. So, yes, I agree, but to a very limited extent only, with those critics who say that Feynman’s Lectures on thermodynamics aren’t the best in the Volume: it may be simpler to just derive the equation we need from some Hamiltonian or whatever other mathematical relationship involving state variables like entropy or what have you. However, I do appreciate Feynman’s attempt to connect the math with the physics, which is what he’s doing here. If anything, it’s sure got me thinking!

In any case, we need to get on with the analysis, so let’s wrap it up. We know the net amount of work that’s being done is equal to W = Q1(T1 – T2)/ T1 = ΔQ(ΔT/T). So that’s equal to ΔPΔV and, hence, we can write:

net work done by the gas = ΔPΔV = ΔQ(ΔT/T)

This implies ΔQ = T(ΔP/ΔT)ΔV. Now, looking at the diagram, we can appreciate ΔP/ΔT is equal to ∂P/∂T (ΔP is the change in pressure at constant volume). Hence, ΔQ = T(∂P/∂T)ΔV. Now we have to add the work, so that’s −PΔV. We get:

ΔU = ΔQ − PΔV = T(∂P/∂T)ΔV − PΔV ⇔ ΔU/ΔV = ∂U/∂V = T(∂P/∂T) − P

So… We are where we wanted to be. 🙂 It’s a rather surprising analysis, though. Is the Q2 = 0 assumption essential? It is, as part of the analysis of the second term in the ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV expression, that is. Make no mistake: the W = Q1(T1−T2)/T1 = ΔQ(ΔT/T) formula is valid, always, and the Q2 is taken into account in it implicitly, because of the ΔT (which is defined using T2). However, if Q2 were not zero, it would add to the internal energy without doing any work and, as such, it would be part of the first term in the ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV expression: we’d have heat that is not changing the volume (and, hence, that is not doing any work) but that’s just… Well… Heat that’s just adding heat to the gas. 🙂

To wrap everything up, let me jot down the whole thing now:

ΔU = (∂Q/∂T)·ΔT + [T(∂P/∂T) − P]·ΔV

Now, strangely enough, while we started off saying the second term in our ΔU expression assumed constant temperature (because of the ∂U/∂V derivative), we now re-write that second term using the ∂P/∂T derivative, which assumes constant volume! Now, our first term assumes constant volume too, and so we end up with an expression which assumes constant volume throughout! At the same time, we do have that ΔV factor of course, which implies we do not really assume volume is constant. On the contrary: the question we started off with was about how the internal energy changes with temperature and volume. Hence, the assumptions of constant temperature and volume only concern the partial derivatives that we are using to calculate that change!

Now, as for the model itself, let me repeat: when doing such analysis, it is very difficult to imagine the physics behind. All is blurred. When talking infinitesimally small volumes of gas, one cannot really distinguish between particles doing work on other gas particles, or these particles getting warmer or colder as they collide with them. It’s all the same. So, in reality, the actual paths are more like the red and black arrows in our diagram above. Even for larger volumes of gas, we’ve got a problem: one volume of gas is not thermally isolated from another and, hence, ideal gas is not some Carnot engine. A Carnot engine is this theoretical construct, which assumes we can nicely separate isothermal from adiabatic expansion/compression. In reality, we can’t. Even to get the isothermal expansion started, we need a temperature difference in order to get the energy flow going, which is why the assumption of frictionless heat transfer is so important. But what’s frictionless, and what’s an infinitesimal temperature difference? In the end, it’s a difference, right? So we already have some entropy increase: some heat (let’s say ΔQ) leaves the reservoir, which has temperature T, and enters the cylinder, which has to have a temperature that’s just-a-wee bit lower, let’s say T – ΔT. Hence, the entropy of the reservoir is reduced by ΔQ/T, and the entropy of the cylinder is increased by ΔQ/(T – ΔT). Hence, ΔS = ΔQ/(T–ΔT) –  ΔQ/T = ΔQΔT/[T(T–ΔT)].
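Here is a tiny numerical illustration of that net entropy increase (the numbers are mine): the increase shrinks as the temperature difference shrinks, but it only vanishes in the limit of no temperature difference at all, i.e. of no heat flow.

```python
# A small numerical illustration (my own numbers) of the net entropy increase
# dS = dQ/(T - dT) - dQ/T when heat crosses a finite temperature gap.

def entropy_increase(dQ, T, dT):
    return dQ / (T - dT) - dQ / T      # = dQ*dT / (T*(T - dT))

for dT in (10.0, 1.0, 0.1):
    print(f"dT = {dT:5.1f} K  ->  dS = {entropy_increase(100.0, 400.0, dT):.6f} J/K")
# The increase shrinks with the temperature difference, but it never quite vanishes:
# a truly reversible transfer would need dT = 0, which also means no heat flow at all.
```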

You’ll say: sure, but then the temperature in the cylinder must go up to T and… No. Why? We don’t have any information on the volume of the cylinder here. We should also involve the time derivatives, so we should start asking questions like: how much power goes into the cylinder, so what’s the energy exchange per unit time here? The analysis will become endlessly more complicated of course – it may have played a role in Sadi Carnot suffering from “mania” and “general delirium” when he got older 🙂 – but you should arrive at the same conclusion: when everything is said and done, the model is what it is, and that’s a mathematical model of some ideal engine – i.e. an idea of a device we don’t find in Nature, and which we’ll never be able to actually build – that shows how we could, potentially, get some energy out of a gas when using some device built to do just that. As mentioned above, thinking in terms of actual engines – like steam engines or, worse, combustion engines – does not help. Not at all really: just try to understand the Carnot cycle as it’s being presented, and that’s usually a mathematical presentation, which is why textbooks always remind the reader to not take the cylinder and piston thing too literally.

Let me note one more thing. Apart from the heat or energy loss question, there’s another unanswered question: from what source do we take the energy to move our cylinder from one heat reservoir to the other? We may imagine it all happens in space so there’s no gravity and all that (so we do not really have to spend some force just holding it) but even then: we have to move it from one place to another, and so that involves some acceleration and deceleration and, hence, some force times a distance. In short, the conclusion is all the same: the reversible Carnot cycle does not really exist and entropy increases, always.

With this, you should be able to solve some practical problems, which should help you to get the logic of it all. Let’s start with one.

Feynman’s rubber band engine

Feynman’s rubber band engine shows the model is quite general indeed, so it’s not limited to some Carnot engine only. A rubber band engine? Yes. When we heat a rubber band, it does not expand: it contracts, as shown below.

rubber band engine

Why? It’s not very intuitive: heating a metal bar makes it longer, not shorter. It’s got to do with the fact that rubber consists of an enormous tangle of long chains of molecules: think of molecular spaghetti. But don’t worry about the details: just accept we could build an engine using the fact, as shown above. It’s not a very efficient machine (Feynman thinks he’d need heating lamps delivering 400 watts of power to lift a fly with it), but let’s apply our thermodynamic relations:

  1. When we heat the rubber band, it will pull itself in, thereby doing some work. We can write that amount of work as FΔL. So that’s like –PΔV in our ΔU = ΔQ – PΔV equation, but note that F has a direction that’s opposite to the direction of the pressure, so we don’t have the minus sign.
  2. So here we can write: ΔU = ΔQ + FΔL.

So what? Well… We can re-write all of our gas equations by substituting –F for P and L for V, and they’ll apply! For example, when analyzing that infinitesimal Carnot cycle above, we found that ΔQ = T(∂P/∂T)ΔV, with ΔQ the heat that’s needed to change the volume by ΔV at constant temperature. So now we can use the above-mentioned substitution (P becomes –F and V becomes L) to calculate the heat that’s needed to change the length of the rubber band by ΔL at constant temperature: it is equal to ΔQ = –T(∂F/∂T)ΔL. The result may not be what we like (if we want the length to change significantly, we’re likely to need a lot of heat and, hence, we’re likely to end up melting the rubber), but it is what it is. 🙂
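To make that substitution concrete, here is a sketch assuming a toy ‘ideal rubber’ force law F = c·T·(L – L0), i.e. a tension proportional to the absolute temperature. That force law is my own illustrative assumption (it’s not in the post or in Feynman’s Lecture, and real rubber is messier), but it shows how the ΔQ = –T·(∂F/∂T)·ΔL formula is used.

```python
# A sketch applying the substitution above to a toy force law. I assume an 'ideal
# rubber' model F = c*T*(L - L0) (purely entropic tension); that model is my own
# illustrative choice, not something taken from the post or from Feynman.

def tension(T, L, c=0.5, L0=1.0):
    return c * T * (L - L0)

def dF_dT(T, L, c=0.5, L0=1.0, eps=1e-6):
    # numerical partial derivative (dF/dT) at constant L
    return (tension(T + eps, L, c, L0) - tension(T - eps, L, c, L0)) / (2 * eps)

T, L, dL = 300.0, 1.2, 0.01
dQ = -T * dF_dT(T, L) * dL       # heat needed to stretch by dL at constant T
print(f"dQ = {dQ:.3f}")          # negative: isothermal stretching releases heat,
                                 # so the band must absorb heat when it contracts.
```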

As Feynman notes: the power of these thermodynamic equations is that we can apply them to very different situations than gases. Another example is a reversible electric cell, like a rechargeable storage battery. Having said that, the assumption that these devices are all efficient is a rather theoretical one and, hence, it constrains the usefulness of our equations significantly. Still, engineers have to start somewhere, and the efficient Carnot cycle is the obvious point of departure. It is also a theoretical reference point to calculate actual efficiencies of actual engines, of course.

Post scriptum: Thermodynamic temperature

Let me quickly say something about an alternative definition of temperature: it’s what Feynman refers to as the thermodynamic definition. It’s an equivalent to the kinetic definition really, but let me quickly show why. As we think about efficient engines, it would be good to have some reference temperature T2, so we can drop the subscripts and have our engines run between T and that reference temperature, which we’ll simply call ‘one degree’ (1°). The amount of heat that an ideal engine will deliver at that reference temperature is denoted by QS, so we can drop the subscript for Q1 and denote it, quite simply, as Q.

We’ve defined entropy as S = Q/T, so Q = ST and QS = S·1°. So what? Nothing much. Just note we can use the S = Q/T and QS = S×1° equations to define temperature in terms of entropy. This definition is referred to as the thermodynamic definition, and it is fully equivalent with our kinetic definition. It’s just a different approach. Feynman makes kind of a big deal out of this but, frankly, there’s nothing more to it.

Just note that the definition also works for our ideal engine with non-ideal gas: the amounts of heat involved for the engine with non-ideal gas, i.e. Q and QS, will be proportionally less than the Q and QS amounts for the reversible engine with ideal gas. [Remember that Q1A/Q1B = Q2A/Q2B equation, in case you’d have doubts.] Hence, we do not get some other thermodynamical temperature! All makes sense again, as it should! 🙂

The Ideal versus the Actual Gas Law

In previous posts, we referred, repeatedly, to the so-called ideal gas law, for which we have various expressions. The expression we derived from analyzing the kinetics involved when individual gas particles (atoms or molecules) move and collide was P·V = N·k·T, in which the variables are P (pressure), V (volume), N (the number of particles in the given volume), T (temperature) and k (the Boltzmann constant). We also wrote it as P·V = (2/3)·U, in which U represents the total energy, i.e. the sum of the energies of all gas particles. We also said the P·V = (2/3)·U formula was only valid for monatomic gases, in which case U is the kinetic energy of the center-of-mass motion of the atoms.

In order to provide some more generality, the equation is often written as P·V = (γ–1)·U. Hence, for monatomic gases, we have γ = 5/3. For a diatomic gas, we’ll also have vibrational and rotational kinetic energy. As we pointed out in a previous post, each independent direction of motion, i.e. each degree of freedom in the system, will absorb an amount of energy equal to k·T/2. For monatomic gases, we have three independent directions of motion (x, y, z) and, hence, the total energy is U = N·3·k·T/2, so that P·V = N·k·T = (2/3)·U indeed.

Finally, when we’re considering adiabatic expansion/compression only – so when we do not add or remove any heat to/from to the gas – we can also write the ideal gas law as PVγ = C, with C some constant. [It is important to note that this PVγ = C relation can be derived from the more general P·V = (γ–1)·U expression, but that the two expressions are not equivalent. Please have a look at the P.S. to this post on this, which shows how we get that PVγ = constant expression, and talks a bit about its meaning.]

So what’s the gas law for diatomic gas, like O2, i.e. oxygen? The key to the analysis of diatomic gases is, basically, a model which represents the oxygen molecule as two atoms connected by a spring, but with a force law that’s not as simplistic as Hooke’s law: we’re not looking at some linear force, but a force that’s referred to as a van der Waals force. The image below gives a vague idea of what that might imply. Remember: when moving an object in a force field, we change its potential energy, and the work done, as we move with or against the force, is equal to the change in potential energy. The graph below shows the force is anything but linear.

Potential energy graph

The illustration above is a graph of potential energy for two molecules, but we can also apply it for the ‘spring’ model for two atoms within a single molecule. For the detail, I’ll refer you to Feynman’s Lecture on this. It’s not that the full story is too complicated: it’s just too lengthy to reproduce it in this post. Just note the key point of the whole story: one arrives at a theoretical value for γ that is equal to γ = 9/7 ≈ 1.286. Wonderful! Yes. Except for the fact that value does not correspond to what is measured in reality: the experimentally confirmed value for γ for oxygen (O2) is about 1.40.

What about other gases? When measuring the value for other diatomic gases, like iodine (I2) or bromine (Br2), we get a value closer to the theoretical value (1.30 and 1.32 respectively) but, still, there’s a variation to be explained here. The value for hydrogen H2 is about 1.4, so that’s like oxygen again. For other gases, we again get different values. Why? What’s the problem?

It cannot be explained using classical theory. In addition, doing the measurements for oxygen and hydrogen at various temperatures also reveals that γ is a function of temperature, as shown below. Now that’s another experimental fact that does not line up with our kinetic theory of gases!

Heat ratio graph

Reality is right, always. Hence, our theory must be wrong. Our analysis of the independent directions of motion inside of a molecule doesn’t work—even for the simple case of a diatomic molecule. Great minds such as James Clerk Maxwell couldn’t solve the puzzle in the 19th century and, hence, had to admit classical theory was in trouble. Indeed, popular belief has it that the black-body radiation problem was the only thing classical theory couldn’t explain in the late 19th century but that’s not true: there were many more problems keeping physicists awake. But so we’ve got a problem here. As Feynman writes: “We might try some force law other than a spring but it turns out that anything else will only make γ higher. If we include more forms of energy, γ approaches unity more closely, contradicting the facts. All the classical theoretical things that one can think of will only make it worse. The fact is that there are electrons in each atom, and we know from their spectra that there are internal motions; each of the electrons should have at least kT/2 of kinetic energy, and something for the potential energy, so when these are added in, γ gets still smaller. It is ridiculous. It is wrong.”

So what’s the answer? The answer is to be found in quantum mechanics. Indeed, one can develop a model distinguishing various molecular states with various energy levels E0, E1, E2,…, Ei,…, and then associate a probability distribution which gives us the probability of finding a molecule in a particular state. Some more assumptions, all quite similar to the assumptions used by Planck when he solved the black-body radiation problem, then give us what we want: to put it simply, it is like some of the motions ‘freeze out’ at lower temperatures. As a result, γ goes up as we go down in temperature.
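You can make that ‘freezing out’ story a bit more quantitative with one line of algebra: from PV = (γ–1)·U and an energy of kT/2 per quadratic term, we get γ = 1 + 2/f, with f the number of active terms (note that the vibration counts for two, one kinetic and one potential, which is how we get Feynman’s 9/7). A quick sketch:

```python
# gamma = 1 + 2/f, with f the number of active quadratic energy terms per molecule
# (the vibration contributes two terms: one kinetic and one potential).

for label, f in (("monatomic (3 translational)", 3),
                 ("diatomic, vibration frozen out (3 + 2 rotational)", 5),
                 ("diatomic, all classical motions active (3 + 2 + 2)", 7)):
    print(f"{label}: gamma = 1 + 2/{f} = {1 + 2/f:.3f}")
# 1.667, 1.400 and 1.286 respectively: the measured 1.40 for O2 and H2 at ordinary
# temperatures corresponds to the vibration being frozen out.
```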

Hence, quantum mechanics saves the day, again. However, that’s not what I want to write about here. What I want to do here is to give you an equation for the internal energy of a gas which is based on what we can actually measure, so that’s pressure, volume and temperature. I’ll refer to it as the Actual Gas Law, because it takes into account that γ is not some fixed value (so it’s not some natural constant, like Planck’s or Boltzmann’s constant), and it also takes into account that we’re not always dealing with gases—ideal or actual—but also with liquids and solids.

Now, we have many inter-connected variables here, and so the analysis is quite complicated. In fact, it’s a great opportunity to learn more about partial derivatives and how we can use them. So the lesson is as much about math as it about physics. In fact, it’s probably more about math. 🙂 Let’s see what we can make out of it.

Energy, work, force, pressure and volume

First, I should remind you that work is something that is done by a force on some object in the direction of the displacement of that object. Hence, work is force times distance. Now, because the force may actually vary as our object is being displaced and while the work is being done, we represent work as a line integral:

W = ∫F·ds

We write F and s in bold-face and, hence, we’ve got a vector dot product here, which ensures we only consider the component of the force in the direction of the displacement: F·Δs = |F|·|Δs|·cosθ, with θ the angle between the force and the displacement.

As for the relationship between energy and work, you know that one: as we do work on an object, we change its energy, and that’s what we are looking at here: the (internal) energy of our substance. Indeed, when we have a volume of gas exerting pressure, it’s the same thing: some force is involved (pressure is the force per unit area, so we write: P = F/A) and, using the model of the box with the frictionless piston (illustrated below), we write:

dW = F(–dx) = – PAdx = – PdV

gas-pressure

The dW = – PdV formula is the one we use when looking at infinitesimal changes. When going through the full thing, we should integrate, as the volume (and the pressure) changes over the trajectory, so we write:

W = ∫PdV

Now, it is very important to note that the formulas above (dW = – PdV and W = ∫PdV) are always valid (just mind the point of view: dW = – PdV is the work done on the gas, which is what enters the energy balance, while W = ∫PdV is the work done by the gas as it expands). Always? Yes. We don’t care whether or not the compression (or expansion) is adiabatic or isothermal. [To put it differently, we don’t care whether or not heat is added to (or removed from) the gas as it expands (or decreases in volume).] We also don’t keep track of the temperature here. It doesn’t matter. Work is work.

Now, as you know, an integral is some area under a graph so I can rephrase our result as follows: the work that is being done by a gas, as it expands (or the work that we need to put in in order to compress it), is the area under the pressure-volume graph, always.
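To make the ‘area under the graph’ idea concrete, here is a small numerical sketch (all numbers are mine): it integrates ∫PdV for the same expansion from V1 to V2, once along an isotherm and once along an adiabat through the same starting point.

```python
import math

# A numerical sketch of W = integral of P dV (arbitrary numbers): the same expansion
# from V1 to V2, computed once along an isotherm and once along an adiabat.

NkT = 100.0                  # for the isotherm: P = NkT/V
gamma = 5/3
V1, V2 = 1.0, 2.0
C = (NkT / V1) * V1**gamma   # adiabat P = C/V**gamma chosen to start at the same point

def work(P, V1, V2, steps=100000):
    dV = (V2 - V1) / steps
    return sum(P(V1 + (i + 0.5) * dV) * dV for i in range(steps))

W_iso = work(lambda V: NkT / V, V1, V2)
W_adi = work(lambda V: C / V**gamma, V1, V2)
print(f"isothermal: {W_iso:.2f}  (exact NkT*ln(V2/V1) = {NkT*math.log(V2/V1):.2f})")
print(f"adiabatic : {W_adi:.2f}  (less, because the pressure drops faster)")
```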

Of course, as we go through a so-called reversible cycle, getting work out of it, and then putting some work back in, we’ll have some overlapping areas cancelling each other. That’s how we derived the amount of useful (i.e. net) work that can be done by an ideal gas engine (illustrated below) as it goes through a Carnot cycle, taking in some amount of heat Q1 from one reservoir (which is usually referred to as the boiler) and delivering some other amount of heat (Q2) to another reservoir (usually referred to as the condenser). As I don’t want to repeat myself too much, I’ll refer you to one of my previous posts for more details. Hereunder, I just present the diagram once again. If you want to understand anything of what follows, you need to understand it—thoroughly.

Carnot cycle graph

It’s important to note that work is being done in each of the four steps of the cycle, and that the work done by the gas is positive when it expands, and negative when its volume is being reduced. So, let me repeat: the W = ∫PdV formula is valid for both adiabatic as well as isothermal expansion/compression. We just need to be careful about the sign and see in which direction it goes. Having said that, it’s obvious adiabatic and isothermal expansion/compression are two very different things and, hence, their impact on the (internal) energy of the gas is quite different:

  1. Adiabatic compression/expansion assumes that no (external) heat energy (Q) is added or removed and, hence, all the work done goes into changing the internal energy (U). Hence, we can write: W = PΔV = –ΔU and, therefore, ΔU = –PΔV. Of course, adiabatic compression/expansion must involve a change in temperature, as the kinetic energy of the gas molecules is being transferred from/to the piston. Hence, the temperature (which is nothing but the average kinetic energy of the molecules) changes.
  2. In contrast, isothermal compression/expansion (i.e. a volume change without any change in temperature) must involve an exchange of heat energy with the surroundings so to allow the temperature to remain constant. So ΔQ ≠ 0 in this case.

The grand but simple formula capturing all is, obviously:

ΔU = ΔQ – PΔV

It says what we’ve said already: the internal energy of a substance (a gas) changes because some work is being done as its volume changes and/or because some heat is added or removed.

Now we have to get serious about partial derivatives, which relate one variable (the so-called ‘dependent’ variable) to another (the ‘independent’ variable). Of course, in reality, all depends on all and, hence, the distinction is quite artificial. Physicists tend to treat temperature and volume as the ‘independent’ variables, while chemists seem to prefer to think in terms of pressure and temperature. In math, it doesn’t matter all that much: we simply take the reciprocal and there you go: dy/dx = 1/(dx/dy). We go from one to another. Well… OK… We’ve got a lot of variables here, so… Yes. You’re right. It’s not going to be that simple, obviously! 🙂

Differential analysis

If we have some function f in two variables, x and y, then we can write: Δf = f(x + Δx, y + Δy) –  f(x, y). We can then write the following clever thing:

Δf = [f(x + Δx, y + Δy) – f(x, y + Δy)] + [f(x, y + Δy) – f(x, y)] ≈ (∂f/∂x)·Δx + (∂f/∂y)·Δy

What’s being said here is that we can approximate Δf using the partial derivatives ∂f/∂x and ∂f/∂y. Note that the formula above actually implies that we’re evaluating the (partial) ∂f/∂x derivative at point (x, y+Δy), rather than the point (x, y) itself. It’s a minor detail, but I think it’s good to signal it: this ‘clever thing’ is just pedagogical. [Feynman is the greatest teacher of all times! :-)] The mathematically correct approach is to simply give the formal definition of partial derivatives, and then just get on with it:

∂f/∂x = limΔx→0 [f(x + Δx, y) – f(x, y)]/Δx and ∂f/∂y = limΔy→0 [f(x, y + Δy) – f(x, y)]/Δy

Now, let us apply that Δf formula to what we’re interested in, and that’s the change in the (internal) energy U. So we write:

ΔU = ΔT·(∂U/∂T)V + ΔV·(∂U/∂V)T

Now, we can’t do anything with this, in practice, because we cannot directly measure the two partial derivatives. So, while this is an actual gas law (which is what we want), it’s not a practical one, because we can’t use it. 🙂 Let’s see what we can do about that. We need to find some formula for those partial derivatives. Let’s have a look at the (∂U/∂T)V factor first. That factor is defined and referred to as the specific heat capacity at constant volume, and it’s usually denoted by CV. Hence, we write:

CV = specific heat capacity at constant volume = (∂U/∂T)V

Heat capacity? But we’re talking internal energy here? It’s the same. Remember that ΔU = ΔQ – PΔV formula: if we keep the volume constant, then ΔV = 0 and, hence, ΔU = ΔQ. Hence, all of the change in internal energy (and I really mean all of the change) is the heat energy we’re adding or removing from the gas. Hence, we can also write CV in its more usual definitional form:

CV = (∂Q/∂T)V

As for its interpretation, you should look at it as a ratio: CV is the amount of heat one must put into (or remove from) a substance in order to change its temperature by one degree with the volume held constant. Note that the term ‘specific heat capacity’ is usually referred to as the ‘specific heat’, as that’s shorter and simpler. However, you can see it’s some kind of ‘capacity’ indeed. More specifically, it’s a capacity of a substance to absorb heat. Now that’s stuff we can actually measure and, hence, we’re done with the first term in that ΔU = ΔT·(∂U/∂T)V + ΔV·(∂U/∂V)T expression, which we can now write as:

ΔT·(∂U/∂T)V = ΔT·(∂Q/∂T)V = ΔT·CV

OK. So we’re done with the first term. Just to make sure we’re on the right track, let’s have a quick look at the units here: the unit in which we should measure CV is, obviously, joule per degree (Kelvin), i.e. J/K. And then we multiply with ΔT, which is measured in degrees Kelvin, and we get some amount in Joule. Fine. We’re done, indeed. 🙂

Let’s look at the second term now, i.e. the ΔV·(∂U/∂V)T term. Now, you may think that we could define CT = (∂U/∂V)T as the specific heat capacity at constant temperature because… Well… Hmm… It is the amount of heat one must put into (or remove from) a substance in order to change its volume by one unit with the temperature held constant, isn’t it? So we write CT = (∂U/∂V)T = (∂Q/∂V)T and we’re done here too, aren’t we?

NO! HUGE MISTAKE!

It’s not that simple. Two very different things are happening here. Indeed, the change in (internal) energy ΔU, as the volume changes by ΔV while keeping the temperature constant (we’re looking at that (∂U/∂V)T factor here, and I’ll remind you of that subscript T a couple of times), consists of two parts:

  1. First, the volume is not being kept constant and, hence, the internal energy (U) changes because work is being done.
  2. Second, the internal energy (U) also changes because heat is being put in, so the temperature can be kept constant indeed.

So we cannot simplify. We’re stuck with the full thing: ΔU = ΔQ – PΔV, in which – PΔV is the (infinitesimal amount of) work that’s being done on the substance, and ΔQ is the (infinitesimal amount of) heat that’s being put in. What can we do? How can we relate this to actual measurables?

Now, the logic is quite abstruse, so please be patient and bear with me. The key to the analysis is that diagram of the reversible Carnot cycle, with the shaded area representing the net work that’s being done, except that we’re now talking infinitesimally small changes in volume, temperature and pressure. So we redraw the diagram and get something like this:

Carnot 2 diagram

Now, you can easily see the equivalence between the shaded area and the ΔPΔV rectangle below:

Equivalence diagram

So the work done by the gas is the shaded area, whose surface is equal to ΔPΔV. […] But… Hey, wait a minute! You should object: we are not talking ideal engines here and, hence, we are not going through a full Carnot cycle, are we? We’re calculating the change in internal energy when the temperature changes with ΔT, the volume changes with ΔV, and the pressure changes with ΔP. Full stop. So we’re not going back to where we came from and, hence, we should not be analyzing this thing using the Carnot cycle, should we? Well… Yes and no. More yes than no. Remember we’re looking at the second term only here: ΔV·(∂U/∂V)T. So we are changing the volume (and, hence, the internal energy) but the subscript in the (∂U/∂V)T term makes it clear we’re doing so at constant temperature. In practice, that means we’re looking at a theoretical situation here that assumes a complete and fully reversible cycle indeed. Hence, the conceptual idea is, indeed, that we put some heat in, that the gas does some work as it expands, and that we then are actually putting some work back in to bring the gas back to its original temperature T. So, in short, yes, the reversible cycle idea applies.

[…] I know, it’s very confusing. I am actually struggling with the analysis myself, so don’t be too hard on yourself. Think about it, but don’t lose sleep over it. 🙂 I added a note on it in the P.S. to this post so you can check that out too. However, I need to get back to the analysis itself here. From our discussion of the Carnot cycle and ideal engines, we know that the work done is equal to the difference between the heat that’s being put in and the heat that’s being delivered: W = Q1 – Q2. Now, because we’re talking reversible processes here, we also know that Q1/T1 = Q2/T2. Hence, Q2 = (T2/T1)Q1 and, therefore, the work done is also equal to W = Q1 – (T2/T1)Q1 = Q1(1 – T2/T1) = Q1[(T1 – T2)/T1] = Q1(ΔT/T1). Let’s now drop the subscripts by equating Q1 with ΔQ, so we have:

W = ΔQ(ΔT/T)

You should note that ΔQ is not the difference between Q1 and Q2. It is not. ΔQ is the heat we put in as it expands isothermally from volume V to volume V + ΔV. I am explicit about it because the Δ symbol usually denotes some difference between two values. In case you wonder how we can do away with Q2, think about it. […] The answer is that we did not really get away with it: the information is captured in the ΔT factor, as T–ΔT is the final temperature reached by the gas as it expands adiabatically on the second leg of the cycle, and the change in temperature obviously depends on Q2! Again, it’s all quite confusing because we’re looking at infinitesimal changes only, but the analysis is valid. [Again, go through the P.S. of this post if you want more remarks on this, although I am not sure they’re going to help you much. The logic is really very deep.]

[…] OK… I know you’re getting tired, but we’re almost done. Hang in there. So what do we have now? The work done by the gas as it goes through this infinitesimally small cycle is the shaded area in the diagram above, and it is equal to:

W = ΔPΔV = ΔQ(ΔT/T)

From this, it follows that ΔQ = T·ΔV·ΔP/ΔT. Now, you should look at the diagram once again to check what ΔP actually stands for: it’s the change in pressure when the temperature changes at constant volume. Hence, using our partial derivative notation, we write:

ΔP/ΔT = (∂P/∂T)V

We can now write ΔQ = T·ΔV·(∂P/∂T)V and, therefore, we can re-write ΔU = ΔQ – PΔV as:

ΔU = T·ΔV·(∂P/∂T)V – PΔV

Now, dividing both sides by ΔV, and writing all using the partial derivative notation, we get:

ΔU/ΔV = (∂U/∂V)T = T·(∂P/∂T)V – P

So now we know how to calculate the (∂U/∂V)T factor, from measurable stuff, in that ΔU = ΔT·(∂U/∂T)V + ΔV·(∂U/∂V)T expression, and so we’re done. Let’s write it all out:

ΔU = ΔT·(∂U/∂T)V + ΔV·(∂U/∂V)T = ΔT·CV + ΔV·[T·(∂P/∂T)V – P]

Phew! That was tough, wasn’t it? It was. Very tough. As far as I am concerned, this is probably the toughest of all I’ve written so far.
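If you want to convince yourself that the recipe actually works, here is a numerical sanity check. I plug in the van der Waals equation of state P = NkT/(V – N·b) – a·N²/V² purely as an example of a non-ideal gas (the post doesn’t use it, and the parameter values are made up); for that model, T·(∂P/∂T)V – P should reduce to a·N²/V² exactly, and it does.

```python
# A numerical sanity check of (dU/dV)_T = T*(dP/dT)_V - P, using the van der Waals
# equation of state as an illustrative (assumed) example of a non-ideal gas.

def P(V, T, N=1.0, k=1.0, a=0.5, b=0.01):
    return N * k * T / (V - N * b) - a * N**2 / V**2

def dP_dT(V, T, eps=1e-6, **kw):
    # numerical partial derivative (dP/dT) at constant V
    return (P(V, T + eps, **kw) - P(V, T - eps, **kw)) / (2 * eps)

V, T = 2.0, 300.0
rhs = T * dP_dT(V, T) - P(V, T)
print(f"T*(dP/dT)_V - P = {rhs:.6f}")
print(f"a*N^2/V^2       = {0.5 * 1.0**2 / 2.0**2:.6f}")   # the two agree
```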

Dependent and independent variables 

Let’s pause to take stock of what we’ve done here. The expressions above should make it clear we’re actually treating temperature and volume as the independent variables, and pressure and energy as the dependent variables, or as functions of (other) variables, I should say. Let’s jot down the key equations once more:

  1. ΔU = ΔQ – PΔV
  2. ΔU = ΔT·(∂U/∂T)V + ΔV·(∂U/∂V)T
  3. (∂U/∂T)V = (∂Q/∂T)V = CV
  4. (∂U/∂V)T = T·(∂P/∂T)V – P

It looks like Chinese, doesn’t it? 🙂 What can we do with this? Plenty. Especially the first equation is really handy for analyzing and solving various practical problems. The second equation is much more difficult and, hence, less practical. But let’s try to apply this equation for actual gases to an ideal gas—just to see if we’re getting our ideal gas law once again. 🙂 We know that, for an ideal gas, the internal energy depends on temperature, not on V. Indeed, if we change the volume but we keep the temperature constant, the internal energy should be the same, as it only depends on the motion of the molecules and their number. Hence, (∂U/∂V)T must equal zero and, hence, T·(∂P/∂T)V – P = 0. Replacing the partial derivative with an ordinary one (not forgetting that the volume is kept constant), we get:

T·(dP/dT) – P = 0 (constant volume)

⇔ (1/P)·(dP/dT) = 1/T (constant volume)

Integrating both sides yields: lnP = lnT + constant. This, in turn, implies that P = T × constant. [Just re-write the first constant as the (natural) logarithm of some other constant, i.e. the second constant, obviously.] Now that’s consistent with our ideal gas law P = NkT/V, because N, k and V are all constant. So, yes, the ideal gas law is a special case of our more general thermodynamical expression. Fortunately! 🙂

That’s not very exciting, you’ll say—and you’re right. You may be interested – although I doubt it 🙂 – in the chemists’ world view: they usually have performance data (read: values for derivatives) measured under constant pressure. The equations above then transform into:

  1. ΔH = Δ(U + P·V) = ΔQ + VΔP
  2. ΔH = ΔT·(∂H/∂T)P + ΔP·(∂H/∂P)T
  3. (∂H/∂P)T = –T·(∂V/∂T)P + V

H? Yes. H is another so-called state variable, so it’s like entropy or internal energy but different. As they say in Asia: “Same-same but different.” 🙂 It’s defined as H = U + PV and its name is enthalpy. Why do we need it? Because some clever man noted that, if you take the total differential of P·V, i.e. Δ(P·V) = P·ΔV + V·ΔP, and our ΔU = ΔQ – P·ΔV expression, and you add both sides of both expressions, you get Δ(U + P·V) = ΔQ + VΔP. So the roles of V and –P get swapped – so as to please the chemists – and all our equations hold provided we substitute H for U, P for V and, importantly, –V for P. [Note the sign switch is to be applied to derivatives as well: if we substitute P for –V, then ∂P/∂T becomes ∂(–V)/∂T = –(∂V/∂T)!]

So that’s the chemists’ model of the world, and they’ll usually measure the specific heat capacity at constant pressure, rather than at constant volume. Indeed, one can show the following:

(∂H/∂T)P = (∂Q/∂T)P = CP = the specific heat capacity at constant pressure

In short, while we referred to γ as the specific heat ratio in our previous posts, assuming we’re talking ideal gases only, we can now appreciate the fact there is actually no such thing as the specific heat: there are various variables and, hence, various definitions. Indeed, it’s not only pressure or volume: the specific heat capacity of some substance will usually also be expressed as a function of its mass (i.e. per kg), the number of particles involved (i.e. per mole), or its volume (i.e. per m3). In that case, we talk about the molar or volumetric heat capacity respectively. The name for the same thing expressed in joule per degree Kelvin and per kg (J/kg·K) is the same: specific heat capacity. So we’ve got three different concepts here, and two ways of measuring them: at constant pressure or at constant volume. No wonder one gets quite confused when googling tables listing the actual values! 🙂

Now, there’s one question left: why is γ being referred to as the specific heat ratio? The answer is simple: it actually is the ratio of the specific heat capacities CP and CV. Hence, γ is equal to:

γ = CP/CV

I could show you how that works. However, I would just be copying the Wikipedia article on it, so I won’t do that: you’re sufficiently knowledgeable now to check it out yourself, and verify it’s actually true. Good luck with it! In the process, please also do check why CP is always larger than CV, so you can explain why γ is always larger than one. 🙂
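
If you want at least a quick sanity check without wading through Wikipedia: for an ideal gas, Mayer’s relation says CP = CV + R (per mole), so CP is indeed always larger than CV, and γ comes out as 5/3 for a monatomic gas and 7/5 for a diatomic one. A minimal sketch:

```python
# γ = C_P/C_V for an ideal gas, using Mayer's relation C_P = C_V + R (per mole).
R = 8.314                     # J/(mol·K), gas constant

for name, C_V in [('monatomic', 1.5 * R), ('diatomic', 2.5 * R)]:
    C_P = C_V + R             # always larger than C_V, hence γ > 1
    print(name, C_P / C_V)    # 1.666… and 1.4
```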

Post scriptum: As usual, Feynman’s Lectures were the inspiration here—once more. Now, Feynman has a habit of ‘integrating’ expressions and, frankly, I never found a satisfactory answer to a pretty elementary question: integration in regard to what variable? His exposé on both the ideal and the actual gas law has enlightened me. The answer is simple: it doesn’t matter. 🙂 Let me show that by analyzing the following argument of Feynman:

expose

So… What is that ‘integration’ that ‘yields’ that γlnV + lnP = lnC expression? Are we solving some differential equation here? Well… Yes. But let’s be practical and take the derivative of the expression in regard to V, P and T respectively. Let’s first see where we come from. The fundamental equation is PV = (γ–1)U. That means we’ve got two ‘independent’ variables, and one that ‘depends’ on the others: if we fix P and V, we have U, or if we fix U, then P and V are in an inversely proportional relationship. That’s easy enough. We’ve got three ‘variables’ here: U, P and V—or, in differential form, dU, dP and dV. However, Feynman eliminates one by noting that dU = –PdV. He rightly notes we can only do that because we’re talking adiabatic expansion/compression here: all the work done while expanding/compressing the gas goes into changing the internal energy: no heat is added or removed. Hence, there is no dQ term here.

So we are left with two ‘variables’ only now: P and V, or dP and dV when talking differentials. So we can choose: P depends on V, or V depends on P. If we think of V as the independent variable, we can write:

d[γ·lnV + lnP]/dV = γ·(1/V)·(dV/dV) + (1/P)·(dP/dV), while d[lnC]/dV = 0

So we have γ·(1/V)·(dV/dV) + (1/P)·(dP/dV) = 0, and we can then multiply both sides by dV to get:

(γ·dV/V) + (dP/P) = 0,

which is the core equation in this argument, so that’s the one we started off with. Picking P as the ‘independent’ variable and, hence, differentiating with respect to P yields the same:

d[γ·lnV + lnP]/dP = γ·(1/V)·(dV/dP) + (1/P)·(dP/dP), while d[lnC]/dP = 0

Multiplying both sides by dP yields the same thing: (γ·dV/V) + (dP/P) = 0. So it doesn’t matter, indeed. But let’s be smart and assume both P and V, or dP and dV, depend on some implicit variable—a parameter really. The obvious candidate is temperature (T). So we’ll now differentiate in regard to T. We get:

d[γ·lnV + lnP]/dT = γ·(1/V)·(dV/dT) + (1/P)·(dP/dT), while d[lnC]/dT = 0

We can, once again, multiply both sides by dT and – surprise, surprise! – we get the same result:

(γ·dV/V) + (dP/P) = 0
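
Here’s that last step once more, with sympy doing the differentiating – a small sketch in which both P and V are treated as (unspecified) functions of some parameter t, which you can think of as the temperature:

```python
# Differentiate γ·lnV + lnP with respect to an arbitrary parameter t,
# with P and V both functions of t.
import sympy as sp

t, gamma = sp.symbols('t gamma')
V = sp.Function('V')(t)
P = sp.Function('P')(t)

print(sp.diff(gamma * sp.log(V) + sp.log(P), t))
# gamma*Derivative(V(t), t)/V(t) + Derivative(P(t), t)/P(t)
# i.e. γ·dV/V + dP/P, whatever the parameter t actually is
```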

The point is that the γlnV + lnP = lnC expression is damn valid, and C – or lnC, or whatever – is indeed ‘the constant of integration’, in regard to whatever variable: it doesn’t matter. So we can, indeed, take the exponential of both sides (which is much more straightforward than ‘integrating both sides’), and we get:

eγ·lnV + lnP = elnC = C

It then doesn’t take too much intelligence to see that eγ·lnV + lnP = eln(Vγ) + lnP = eln(Vγ)·elnP = Vγ·P = P·Vγ. So we’ve got the grand result that we wanted: P·Vγ = C, with C some constant determined by the situation we’re in (think of the size of the box, or the density of the gas).
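
And here’s that exponential algebra, cross-checked with sympy (the positivity assumptions are what allow the logarithms to be combined):

```python
# exp(γ·lnV + lnP) = P·V^γ, checked symbolically.
import sympy as sp

V, P, gamma = sp.symbols('V P gamma', positive=True)

combined = sp.logcombine(gamma * sp.log(V) + sp.log(P))   # log(P*V**gamma)
print(sp.exp(combined))                                   # P*V**gamma
```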

So, yes, we’ve got a ‘law’ here. We should just remind ourselves, always, that it’s only valid when we’re talking adiabatic compression or expansion: we do not add or remove any heat or, as Feynman puts it, much more succinctly, “no heat is being lost”. And, of course, we’re also talking ideal gases only—which excludes a number of real substances. 🙂

It’s a weird formula: the pressure times the volume to the 5/3 power is a constant for a monatomic gas. But it works: as long as the individual atoms are not bound to each other, the law holds. As mentioned above, when various molecular states, with their associated energy levels, are at play, it becomes an entirely different ballgame. 🙂

I should add one final note as to the functional form of PVγ = C. We can re-write it as P = C/Vγ. The shape of that graph is similar to the P = NkT/V relationship we started off with. Putting the two equations side by side makes it clear that our constant C and the temperature T are related to one another, but they are not directly proportional to each other. In fact, as the graphs below clearly show, the P = NkT/V equation gives us the isothermal lines on the pressure-volume graph (i.e. they show how P and V are related at constant temperature), while the P = C/Vγ equation gives us the adiabatic lines. Just google an online function graph tool, and you can now draw your own diagrams of the Carnot cycle! Just change the numerator (i.e. the constants T and C in the respective equations). 🙂
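
If you’d rather not google for an online graph tool, a few lines of Python do the job too. This is just a sketch with arbitrary units (Nk is folded into the temperature label), showing that the adiabatic lines fall off more steeply than the isothermal ones:

```python
# Isotherms P = T/V (arbitrary units) and adiabats P = C/V^γ on one P-V diagram.
import numpy as np
import matplotlib.pyplot as plt

gamma = 5 / 3                      # monatomic ideal gas
V = np.linspace(0.5, 5, 200)

for T in (1, 2, 3):
    plt.plot(V, T / V, 'b--', label=f'isotherm, T = {T}')
for C in (1, 2, 3):
    plt.plot(V, C / V**gamma, 'r-', label=f'adiabat, C = {C}')

plt.xlabel('V')
plt.ylabel('P')
plt.legend()
plt.show()
```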

graph

Now, I promised I would say something more about that infinitesimal Carnot cycle: why is it there? Why don’t we limit the analysis to just the first two steps? In fact, the shortest and best explanation I can give is something like this: think of the whole cycle as the first step in a reversible process, really. We put some heat in (ΔQ) and the gas does some work, but that heat has to go through the whole body of gas, and the energy has to go somewhere too. In short, the heat and the work are not being absorbed by the surroundings: it all stays in the ‘system’ that we’re analyzing, so to speak, and that’s why we’re going through the full cycle, not the first two steps only. Now, this ‘answer’ may or may not satisfy you, but I can’t do better. You may want to check Feynman’s explanation itself, but he’s very short on this and, hence, I think it won’t help you much either. 😦