Entropy

The two previous posts were quite substantial. Still, they were only the groundwork for what we really want to talk about: entropy, and the second law of thermodynamics, which you probably know as follows: all of the energy in the universe is constant, but its entropy is always increasing. But what is entropy really? And what’s the nature of this so-called law?

Let’s first answer the second question: Wikipedia notes that this law is more like an empirical finding that has been accepted as an axiom. That probably sums it up best, and that description does not downplay its significance. In fact, Newton’s laws of motion, or Einstein’s relativity principle, have the same status: axioms in physics – as opposed to those in math – are grounded in reality. At the same time, and just like in math, one can often choose alternative sets of axioms. In other words, we can derive the law of ever-increasing entropy from other principles, notably the Carnot postulate, which basically says that, if the whole world were at the same temperature, it would be impossible to reversibly extract and convert heat energy into work. I talked about that in my previous post, and so I won’t go into more detail here. The bottom line is that we need two separate heat reservoirs at different temperatures, denoted by T1 and T2, to convert heat into useful work.

Let’s go to the first question: what is entropy, really?

Defining entropy

Feynman, the Great Teacher, defines entropy as part of his discussion of Carnot’s ideal reversible heat engine, so let’s have a look at it once more. Carnot’s ideal engine can do some work by taking an amount of heat equal to Q1 out of one heat reservoir and putting an amount of heat equal to Q2 into the other one (or, because it’s reversible, it can also go the other way around, i.e. it can absorb Q2 and put Q1 back in, provided we do the same amount of work W on the engine).

The work done by such a machine, or the work that has to be done on the machine when reversing the cycle, is equal to W = Q1 – Q2 (the equation shows the machine is as efficient as it can be, indeed: all of the difference in heat energy is converted into useful work, and vice versa—nothing gets ‘lost’ in frictional energy or whatever else!). Now, because it’s a reversible thermodynamic process, one can show that the following relationship must hold:

Q1/T1 = Q2/T2

This law is valid, always, for any reversible engine and/or for any reversible thermodynamic process, for any Q1, Q2, T1 and T2. [Ergo, it is not valid for non-reversible processes and/or non-reversible engines, i.e. real machines.] Hence, we can look at Q/T as some quantity that remains unchanged: an equal ‘amount’ of Q/T is absorbed and given back, and so there is no gain or loss of Q/T (again, if we’re talking reversible processes, of course). [I need to be precise here: there is no net gain or loss in the Q/T of the substance of the gas. The first reservoir obviously loses Q1/T1, and the second reservoir gains Q2/T2. The environment as a whole only remains unchanged if we were to reverse the cycle.]

In fact, this Q/T ratio is the entropy, which we’ll denote by S, so we write:

S = Q1/T1 = Q2/T2
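To make this tangible, here’s a quick numerical illustration (my numbers, purely for the sake of example): if a reversible engine absorbs Q1 = 400 J at T1 = 400 K, the relation fixes the heat it must deliver at T2 = 300 K as Q2 = Q1·(T2/T1) = 300 J. The entropy exchanged is then S = 400/400 = 300/300 = 1 J/K, and the work delivered is W = Q1 – Q2 = 100 J.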

What the above says is basically the following: whenever the engine is reversible, this relationship between the heats must follow: if the engine absorbs Q1 at T1 and delivers Q2 at T2, then Q1 is to T1 as Q2 is to T2 and, therefore, we can define the entropy S as S = Q/T. That implies, obviously:

Q = S·T

From these relations (S = Q/T and Q = S·T), it is obvious that the unit of entropy has to be joule per degree (Kelvin), i.e. J/K. As such, it has the same dimension as the Boltzmann constant, k ≈ 1.38×10⁻²³ J/K, which we encountered in the ideal gas formula PV = NkT, and which relates the mean kinetic energy of atoms or molecules in an ideal gas to the temperature. However, while k is, quite simply, a constant of proportionality, S is obviously not a constant: its value depends on the system or, to continue with the mathematical model we’re using, the heat engine we’re looking at.

Still, this definition and these relationships do not really answer the question: what is entropy, really? Let’s explore the relationships further so as to arrive at a better understanding.

I’ll continue to follow Feynman’s exposé here, so let me use his illustrations and arguments. The first argument revolves around the following set-up, involving three reversible engines (1, 2 and 3), and three temperatures (T1 > T2 > T3):

[Illustration: three reversible engines]

Engine 1 runs between T1 and T3 and delivers W13 by taking in Q1 at T1 and delivering Q3 at T3. Similarly, engines 2 and 3 deliver or absorb W32 and W12 respectively by running between T3 and T2, and between T1 and T2, respectively. Now, if we let engines 1 and 2 work in tandem, so engine 1 produces W13 and delivers Q3, which is then taken in by engine 2, using an amount of work W32, the net result is the same as what engine 3 is doing: it runs between T1 and T2 and delivers W12, so we can write:

W12 = W13 – W32

This result illustrates that there is only one Carnot efficiency, which Carnot’s Theorem expresses as follows:

  1. All reversible engines operating between the same heat reservoirs are equally efficient.
  2. No actual engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs.

Now, it’s obvious that it would be nice to have some kind of gauge – or a standard, let’s say – to describe the properties of ideal reversible engines in order to compare them. We can define a very simple gauge by assuming T3 in the diagram above is one degree. One degree what? Whatever: we’re working in Kelvin for the moment, but any absolute temperature scale will do. [An absolute temperature scale uses an absolute zero. The Kelvin scale does that, but the Rankine scale does so too: it just uses different units than the Kelvin scale (the Rankine units correspond to Fahrenheit units, while the Kelvin units correspond to Celsius degrees).] So what we do is to let our ideal engines run between some temperature T – at which it absorbs or delivers a certain heat Q – and 1° (one degree), at which it delivers or absorbs an amount of heat which we’ll denote by QS. [Of course, I note this assumes that ideal engines are able to run between one degree Kelvin (i.e. minus 272.15 degrees Celsius) and whatever other temperature. Real (man-made) engines are obviously unlikely to have such tolerance. :-)] Then we can apply the Q = S·T equation and write:

QS = S·1°

That’s how we solve the gauge problem when measuring the efficiency of ideal engines, for which the formula is W/Q1 = (T1 – T2)/T1. In my previous post, I illustrated that equation with some graphs for various values of T2 (e.g. T2 = 4, 1, or 0.3). [In case you wonder why these values are so small, it doesn’t matter: we can scale the units, or assume 1 unit corresponds to 100 degrees, for example.] These graphs all look the same but cross the x-axis (i.e. the T1-axis) at different points (at T1 = 4, 1, and 0.3 respectively, obviously). But let us now use our gauge and, hence, standardize the measurement by setting T2 to 1. Hence, the blue graph below is now the efficiency graph for our engine: it shows how the efficiency (W/Q1) depends on its working temperature T1 only. In fact, if we drop the subscripts, and define Q as the heat that’s taken in (or delivered when we reverse the machine), we can simply write:

 W/Q = (T – 1)/T = 1 – 1/T

[Graph: the efficiency W/Q = 1 – 1/T as a function of the working temperature T]

Note the formula allows for negative values of the efficiency W/Q: if T were lower than one degree, we’d have to put work in and, hence, our ideal engine would indeed have negative efficiency. Hence, the formula is consistent over the whole temperature domain T > 0. Also note that, coincidentally, the three-engine set-up and the W/Q formula also illustrate the scalability of our theoretical reversible heat engines: we can think of one machine substituting for two or three others, or any combination really: we can have several machines of equal efficiency working in parallel, thereby doubling, tripling, quadrupling, etcetera, the output as well as the heat that’s being taken in. Indeed, W/Q = 2W/2Q = 3W/3Q = 4W/4Q and so on.
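In case you’d like to reproduce the blue efficiency graph above yourself, here’s a minimal sketch in Python (assuming numpy and matplotlib are installed—the range and styling are mine, of course):

```python
import numpy as np
import matplotlib.pyplot as plt

T = np.linspace(0.1, 20, 400)   # working temperature, in units of the 1° reference
eff = 1 - 1/T                   # efficiency W/Q = (T - 1)/T of the gauged ideal engine

plt.plot(T, eff)
plt.axhline(1, linestyle='--')  # horizontal asymptote: the efficiency never reaches 1
plt.axhline(0, linewidth=0.5)   # below T = 1°, the 'efficiency' goes negative
plt.xlabel('T (degrees)')
plt.ylabel('W/Q')
plt.title('Efficiency of an ideal engine running between T and 1°')
plt.show()
```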

Also, looking at that three-engine model once again, we can set T3 to one degree and re-state the result in terms of our standard temperature:

If one engine, absorbing heat Q1 at T1, delivers the heat QS at one degree, and if another engine, absorbing heat Q2 at T2, will also deliver the same heat QS at one degree, then it follows that an engine which absorbs heat Q1 at the temperature T1 will deliver heat Q2 if it runs between T1 and T2.

That’s just re-stating what we showed, but it’s an important result. All these machines are equivalent, so to say, and, as Feynman notes, all we really have to do is find how much heat (Q) we need to put in at the temperature T in order to deliver a certain amount of heat QS at the unit temperature (i.e. one degree). If we can do that, then we have everything. So let’s go for it.

Measuring entropy

We already mentioned that we can look at the entropy S = Q/T as some quantity that remains unchanged as long as we’re talking reversible thermodynamic processes. Indeed, as much Q/T is absorbed as is given back in a reversible cycle or, in other words: there is no net change in entropy in a reversible cycle. But what does it mean really?

Well… Feynman defines the entropy of a system, or a substance really (think of that body of gas in the cylinder of our ideal gas engine), as a function of its condition, so it is a quantity which is similar to pressure (which is a function of density, volume and temperature: P = NkT/V), or internal energy (which is a function of pressure and volume (U = (3/2)·PV) or, substituting the pressure function, of density and temperature: U = (3/2)·NkT). That doesn’t bring much clarification, however. What does it mean? We need to go through the full argument and the illustrations here.

Suppose we have a body of gas, i.e. our substance, at some volume Va and some temperature Ta (i.e. condition a), and we bring it into some other condition (b), so it now has volume Vb and temperature Tb, as shown below. [Don’t worry about the ΔS = Sb – Sa and ΔS = Sa – Sb formulas for now. I’ll explain them in a minute.]

[Illustration: the change in condition from a (Va, Ta) to b (Vb, Tb)]

You may think that a and b are, once again, steps in the reversible cycle of a Carnot engine, but no! What we’re doing here is something different altogether: we’ve got the same body of gas at point b, but in a completely different condition: indeed, both the volume and the temperature (and, hence, the pressure) of the gas are different in b as compared to a. What we do assume, however, is that the gas went from condition a to condition b through a completely reversible process. Cycle, process? What’s the difference? What do we mean by that?

As Feynman notes, we can think of going from a to b through a series of steps, during which tiny reversible heat engines take out an infinitesimal amount of heat dQ in tiny little reservoirs at the temperature corresponding to that point on the path. [Of course, depending on the path, we may have to add heat (and, hence, do work rather than getting work out). However, in this case, we see a temperature rise but also an expansion of volume, the net result of which is that the substance actually does some (net) work from a to b, rather than us having to put (net) work in.] So the process consists, in principle, of a (potentially infinite) number of tiny little cycles. The thinking is illustrated below. 

[Illustration: tiny reversible engines and reservoirs along the path from a to b]

Don’t panic. It’s one of the most beautiful illustrations in all of Feynman’s Lectures, IMHO. Just analyze it. We’ve got the same horizontal and vertical axis here, showing volume and temperature respectively, and the same points a and b showing the condition of the gas before and after and, importantly, also the same path from condition a to condition b, as in the previous illustration. It takes a pedagogic genius like Feynman to think of this: he just draws all those tiny little reservoirs and tiny engines on a mathematical graph to illustrate what’s going on: at each step, an infinitesimal amount of work dW is done, and an infinitesimal amount of entropy dS = dQ/T is being delivered at the unit temperature.

As mentioned, depending on the path, some steps may involve doing some work on those tiny engines, rather than getting work out of them, but that doesn’t change the analysis. Now, we can write the total entropy that is taken out of the substance (or the little reservoirs, as Feynman puts it), as we go from condition a to b, as:

ΔS = Sb – Sa

Now, in light of all the above, it’s easy to see that this ΔS can be calculated using the following integral:

ΔS = Sb – Sa = ∫ dQ/T (the integral being taken from a to b, along the path)

So we have a function S here which depends on the ‘condition’ indeed—i.e. the volume and the temperature (and, hence, the pressure) of the substance. Now, you may or may not notice that it’s a function that is similar to our internal energy formula (i.e. the formula for U). At the same time, it’s not internal energy. It’s something different. We write:

S = S(V, T)

So now we can rewrite our integral formula for change in S as we go from a to b as:

S(Vb, Tb) – S(Va, Ta) = ∫ dQ/T (again, integrated from a to b along the path)

Now, a similar argument as the one we used when discussing Carnot’s postulate (all ideal reversible engines operating between two temperatures are essentially equivalent) can be used to demonstrate that the change in entropy does not depend on the path: only the start and end points (i.e. points a and b) matter. In fact, the whole discussion is very similar to the discussion of potential energy when conservative force fields are involved (e.g. gravity or electromagnetism): the difference between the values of our potential energy function at two different points is well-defined, and the paths we use to go from one point to another don’t matter. The only thing we had to agree on was some reference point, i.e. a zero point. For potential energy, that zero point is usually infinity. In other words, we defined zero potential energy as the potential energy of a charge or a mass at an infinite distance away from the charge or mass that’s causing the field.
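If you want to convince yourself of that path independence numerically, here’s a minimal sketch in Python (my own check, not Feynman’s: it assumes a monatomic ideal gas, for which dQ = dU + P·dV = (3/2)·N·k·dT + N·k·T·dV/V along a reversible path, so dS = dQ/T = (3/2)·N·k·dT/T + N·k·dV/V):

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule (avoids numpy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

N_k = 1.0                 # N·k in arbitrary units; only the ratios matter here
Va, Ta = 1.0, 300.0       # condition a
Vb, Tb = 2.0, 450.0       # condition b

def dS_isothermal(V1, V2, n=100_000):
    # isothermal leg: dQ = P·dV = N·k·T·dV/V, so dS = dQ/T = N·k·dV/V
    V = np.linspace(V1, V2, n)
    return trapezoid(N_k / V, V)

def dS_isochoric(T1, T2, n=100_000):
    # constant-volume leg: dQ = dU = (3/2)·N·k·dT, so dS = (3/2)·N·k·dT/T
    T = np.linspace(T1, T2, n)
    return trapezoid(1.5 * N_k / T, T)

path_1 = dS_isothermal(Va, Vb) + dS_isochoric(Ta, Tb)  # expand first, then heat up
path_2 = dS_isochoric(Ta, Tb) + dS_isothermal(Va, Vb)  # heat up first, then expand
analytic = N_k * (np.log(Vb / Va) + 1.5 * np.log(Tb / Ta))

print(path_1, path_2, analytic)  # all three agree: ≈ 1.3013·N·k
```

So, just as with potential energy, only entropy differences are defined so far: we still need to agree on a reference point.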

Here we need to do the same: we need to agree on a zero point for S, because the formula above only gives the difference of entropy between two conditions. Now, that’s where the third law of thermodynamics comes in, which simply states that the entropy of any substance at the absolute zero temperature (T = 0) is zero, so we write:

S = 0 at T = 0

That’s easy enough, isn’t it?

Now, you’ll wonder whether we can actually calculate something with that. We can. Let me simply reproduce Feynman’s calculation of the entropy function for an ideal gas. You’ll need to pull all that I wrote in this and my previous posts together, but you should be able to follow his line of reasoning:

[Feynman’s calculation of the entropy of an ideal gas, leading to the result S(V, T) = N·k·[ln(V) + ln(T)/(γ–1)] + a]

Huh? I know. At this point, you’re probably suffering from formula overkill. However, please try again. Just go over the text and the formulas above, and try to understand what they really mean. [In case you wonder about the formula with the ln[Vb/Va] factor (i.e. the reference to section 44.4), you can check it in my previous post.] So just try to read the S(V, T) formula: it says that a substance (a gas, liquid or solid) consisting of N atoms or molecules, at some temperature T and with some volume V, is associated with some exact value for its entropy S(V, T). The constant, a, should, of course, ensure that S(V, T) = 0 at T = 0.

The first thing you can note is that S is an increasing function of V at constant temperature T. Conversely, decreasing the volume results in a decrease of entropy. To be precise, using the formula for S, we can derive the following formula for the difference in entropy when keeping the temperature constant at some value T:

ΔS = Sb – Sa = S(Vb, T) – S(Va, T) = N·k·ln[Vb/Va]

What this formula says, for example, is that if we do nothing but double the volume of a gas (while keeping the temperature constant) when going from a to b (hence, Vb/Va = 2), the entropy will change by N·k·ln(2) ≈ 0.7·N·k. Conversely, if we halve the volume (again, assuming the temperature remains constant), then the change in entropy will be N·k·ln(0.5) ≈ –0.7·N·k.
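To put a number on it (my example, not Feynman’s): for one mole of gas, N is Avogadro’s number, so N·k equals the gas constant R ≈ 8.314 J/K. Doubling the volume at constant temperature then changes the entropy by R·ln(2) ≈ 5.76 J/K.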

The graph below shows how it works. It’s quite simple really: it’s just the ln(x) function, and I just inserted it here so you have an idea of how the entropy changes with volume. [In case you think it looks the same as that efficiency graph, i.e. the graph of the W/Q = (T – 1)/T = 1 – 1/T function, think again: the efficiency graph has a horizontal asymptote (y = 1), while the logarithmic function does not have any horizontal asymptote.]

[Graph: the natural logarithm ln(x)]

Now, you may think entropy changes only marginally as we keep increasing the volume, but you should think twice here: it’s just the nature of the logarithmic scale. When we double the volume, going from V = 1 to V = 2, for example, the change in entropy will be equal to N·k·ln(2) ≈ 0.7·N·k. That’s the same change as going from V = 2 to V = 4, and the same as going from V = 4 to V = 8. So, if we double the volume three times in a row, the total change in entropy will be that of going from V = 1 to V = 8, which is equal to N·k·ln(8) = N·k·ln(2³) = 3·N·k·ln(2). So, yes, looking at the intervals that are associated with the same N·k·ln(2) increase in entropy, i.e. [1, 2], [2, 4] and [4, 8] respectively, you may think that the increase in entropy is marginal only, as it’s the same increase but the length of each interval is double that of the previous one. However, when reducing the volume, the logic works the other way around, and so the logarithmic function ensures the change is anything but marginal. Indeed, if we halve the volume, going from V = 1 to V = 1/2, and then halve it again, to V = 1/4, and then again, to V = 1/8, we get the same change in entropy once more—but with a minus sign in front, of course: N·k·ln(2⁻³) = –3·N·k·ln(2)—and that same ln(2) change is now associated with intervals on the x-axis (between 1 and 0.5, 0.5 and 0.25, and 0.25 and 0.125 respectively) that get smaller and smaller as we further reduce the volume. In fact, the length of each interval is now half that of the previous interval. Hence, the change in entropy is anything but marginal now!

[In light of the fact that the (negative) change in entropy becomes larger and larger as we further reduce the volume, and in a way that’s anything but marginal, you may now wonder, for a very brief moment, whether or not the entropy might actually take on a negative value. The answer is obviously no. The change in entropy can take on a large negative value, but the S(V, T) = N·k·[ln(V) + ln(T)/(γ–1)] + a formula, with the constant a ensuring that the entropy is zero at T = 0, ensures things come out alright—as they should, of course!]

Now, as we continue to try to understand what entropy really means, it’s quite interesting to think of what this formula implies at the level of the atoms or molecules that make up the gas: the entropy change per molecule is k·ln(2) – or k·ln(1/2) when compressing the gas at the same temperature. Now, its kinetic energy remains the same because – don’t forget! – we’re changing the volume at constant temperature here. So what really causes the entropy change? Think about it: the only thing that changed, physically, is how much room the molecule has to run around in—as Feynman puts it aptly. Hence, while everything else stays the same (atoms or molecules with the same temperature and energy), we still have an entropy increase (or decrease) when the distribution of the molecules changes.

This remark brings us to the connection between order and entropy, which you vaguely know, for sure, but probably never quite understood because, if you did, you wouldn’t be reading this post. 🙂 So I’ll talk about it in a moment. I first need to wrap up this section, however, by showing why all of the above is, somehow, related to that ever-increasing entropy law. 🙂

However, before doing that, I want to quickly note something about that assumption of constant temperature here. How can it remain constant? When a body of gas expands, its temperature should drop, right? Well… Yes. But only if it is pushing against something, like in a cylinder with a piston indeed, or as air escapes from a tyre and pushes against the (lower-pressure) air outside of the tyre. What happens then is that the kinetic energy of the gas molecules is being transferred (to the piston, or to the gas molecules outside of the tyre) and, hence, temperature decreases indeed. In such a case, the assumption is that we add (or remove) heat from our body of gas as we expand (or reduce) its volume. Having said that, in a more abstract analysis, we could envisage a body of gas that has nothing to push against, except for the walls of its container, which have the same temperature. In such a more abstract analysis, we need not worry about how we keep the temperature constant: the point here is just to compare the ex post and ex ante entropy of the volume. That’s all.

The Law of Ever-Increasing Entropy 

With all of the above, we’re finally armed to ‘prove’ the second law of thermodynamics which we can also state as follows indeed: while the energy of the universe is constant, its entropy is always increasing. Why is this so? Out of respect, I’ll just quote Feynman once more, as I can’t see how I could possibly summarize it better:

[Feynman’s explanation of why the entropy of the universe is always increasing, while its energy remains constant]

So… That should sum it all up. You should re-read the above a couple of times, to make sure you grasp it. I’ll also let Feynman summarize all of the ‘laws’ of thermodynamics that we have just learned as, once more, I can’t see how I could possibly write more clearly or succinctly. His statement is much more precise than the statement we started out with, i.e. that the energy of the universe is always constant but its entropy is always increasing. As Feynman notes, that loose version of the two laws of thermodynamics doesn’t say that entropy stays the same in a reversible cycle, and it also doesn’t say what entropy actually is. So Feynman’s summary is much more precise and, hence, much better indeed:

[Feynman’s summary of the laws of thermodynamics]

Entropy and order

What I wrote or reproduced above may not have satisfied you. So we’ve got this funny number, S, describing some condition or state of a substance, but you may still feel you don’t really know what it means. Unfortunately, I cannot do all that much about that. Indeed, technically speaking, a quantity like entropy (S) is a state function, just like internal energy (U), or like enthalpy (usually denoted by H), a related concept which you may remember from chemistry and which is defined as H = U + PV. As such, you may just think of S as some number that pops up in thermodynamical equations. It’s perfectly fine to think of it like that. However, if you’re reading this post, then it’s likely you do so because some popular science book mentioned entropy and related it to order and/or disorder indeed. However, I need to disappoint you here: that relationship is not as straightforward as you may think it is. To get some idea, let’s go through another example, which I’ll also borrow from Feynman.

Let’s go back to that relationship between volume and entropy, keeping temperature constant:

ΔS = N·k·ln[Vb/Va]

We discussed, rather at length, how entropy increases as we allow a body of gas to expand. As the formula shows, it increases logarithmically with the ratio of the ex post and ex ante volumes. Now, let us think about two gases, which we can think of as ‘white’ and ‘black’ respectively. Or neon and argon. Whatever. Two different gases. Let’s suppose we’ve kept them in two separate compartments of a box, with some barrier in-between them.

Now, you know that, if we take out the barrier, the two gases will mix. That’s just a fact of life. As Feynman puts it: somehow, the whites will worm their way across in the space of blacks, and the blacks will worm their way, by accident, into the space of whites. [There’s a bit of a racist undertone in this, isn’t there? But then I am sure Feynman did not intend it that way.] Also, as he notes correctly: we’ve got a very simple example here of an irreversible process which is completely composed of reversible events. We know this mixing will not affect the kinetic (or internal) energy of the gas. Having said that, both the white and the black molecules now have ‘much more room to run around in’. So is there a change in entropy? You bet.

If we take away that barrier, it’s just like moving the piston out when we were discussing one volume of gas only. Indeed, we effectively double the volume for the whites, and we double the volume for the blacks, while keeping all at the same temperature. Hence, the entropy of both the white and the black gas increases. By how much? Look at the formula: the amount is given by the product of the number of molecules (N), the Boltzmann constant (k), and ln(2), i.e. the natural logarithm of the ratio of the ex post and ex ante volumes: ΔS = N·k·ln[Vb/Va].
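Again, just to put a number on it (my example): if each compartment holds one mole of gas, N·k equals the gas constant R ≈ 8.314 J/K for each of the two gases, so the total change in entropy upon mixing is 2·R·ln(2) ≈ 11.5 J/K—R·ln(2) for the whites plus R·ln(2) for the blacks.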

So, yes, entropy increases as the molecules are now distributed over a much larger space. Now, if we stretch our mind a bit, we could define this as a measure of order, or disorder, especially when considering the process going the other way: suppose the gases were mixed up to begin with and, somehow, we manage to neatly separate them into two separate volumes, each half of the original. You’d agree that amounts to an increase in order and, hence, you’d also agree that, if entropy is, somehow, some measure of disorder, entropy should decrease—which it obviously does using that ΔS = N·k·ln[Vb/Va] formula. Indeed, we calculated ΔS as –0.7·N·k for each gas.

However, the interpretation is quite peculiar and, hence, not as straightforward as popular science books suggest. Indeed, from that S(V, T) = N·k·[ln(V) + ln(T)/(γ–1)] + a formula, it’s obvious we can also decrease entropy by decreasing the number of molecules, or by decreasing the temperature. You’ll have to admit that in both cases (a decrease in N, or a decrease in T), you’d have to be somewhat creative to interpret such a decrease as a decrease in disorder.

So… What more can we say? Nothing much. However, in order to be complete, I should add a final note on this discussion of entropy measuring order (or, to be more precise, measuring disorder). It’s about another concept of entropy, the so-called Shannon entropy. It’s a concept from information theory, and our entropy and the Shannon entropy do have something in common: in both, we see that logarithm pop up. It’s quite interesting but, as you might expect, complicated. Hence, I should just refer you to the Wikipedia article on it, from which I took the illustration and text below.

[Illustration: the 2² = 4 ways two coins can be arranged]

We’ve got two coins with two faces here. They can, obviously, be arranged in 2² = 4 ways. Now, back in 1948, the so-called father of information theory, Claude Shannon, thought it was nonsensical to just use that number (4) to represent the complexity of the situation. Indeed, if we’d take three coins, or four, or five, respectively, then we’d have 2³ = 8, 2⁴ = 16, and 2⁵ = 32 ways, respectively, of combining them. Now, you’ll agree that, as a measure of the complexity of the situation, the exponents 1, 2, 3, 4, etcetera describe the situation much better than 2, 4, 8, 16, etcetera.

Hence, Shannon defined the so-called information entropy as, in this case, the base-2 logarithm of the number of possibilities. To be precise, the information entropy of the situation which we’re describing here (i.e. the ways a set of coins can be arranged) is equal to S = log₂(2^N) = N = 1, 2, 3, 4, etcetera for N = 1, 2, 3, 4, etcetera coins. In honor of Shannon, the unit is shannons. [I am not joking.] However, information theorists usually talk about bits, rather than shannons. [We’re not talking computer bits here, although the two are obviously related, as computer bits are binary too.]
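A two-line sketch in Python makes the definition obvious (my illustration, assuming a uniform distribution over the possible arrangements):

```python
import math

def shannon_entropy_bits(n_outcomes: int) -> float:
    # information entropy (in bits) of n equally likely outcomes
    return math.log2(n_outcomes)

for n_coins in (1, 2, 3, 4, 5):
    arrangements = 2 ** n_coins
    print(n_coins, arrangements, shannon_entropy_bits(arrangements))
    # prints N, 2^N and N bits: e.g. 3 coins -> 8 arrangements -> 3.0 bits
```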

Now, one of the many nice things of logarithmic functions is that it’s easy to switch bases. Hence, instead of expressing information entropy in bits, we can also express it in trits (for base-3 logarithms), nats (for base-e logarithms, so that’s the natural logarithmic function ln), or dits (for base-10 logarithms). So… Well… Feynman is right in noting that “the logarithm of the number of ways we can arrange the molecules is (the) entropy”, but that statement needs to be qualified: the concepts of information entropy and entropy tout court, as used in the context of thermodynamical analysis, are related but, as usual, they’re also different. 🙂 Bridging the two concepts involves probability distributions and other stuff. One extremely simple popular account illustrates the principle behind it as follows:

Suppose that you put a marble in a large box, and shook the box around, and you didn’t look inside afterwards. Then the marble could be anywhere in the box. Because the box is large, there are many possible places inside the box that the marble could be, so the marble in the box has a high entropy. Now suppose you put the marble in a tiny box and shook up the box. Now, even though you shook the box, you pretty much know where the marble is, because the box is small. In this case we say that the marble in the box has low entropy.

Frankly, examples like this make only very limited sense. They may, perhaps, help us imagine, to some extent, how the probability distributions of atoms or molecules might change as the atoms or molecules get more space to move around in. Having said that, I should add that examples like this are, at the same time, also so simplistic that they may confuse us more than they enlighten us. In any case, while all of this discussion is highly relevant to statistical mechanics and thermodynamics, I am afraid I have to leave it at these one or two remarks. Otherwise this post risks becoming a course! 🙂

Now, there is one more thing we should talk about here. If you’ve read a lot of popular science books, you probably know that the temperature of the Universe is decreasing because it is expanding. However, from what you’ve learnt so far, it is hard to see why that should be the case. Indeed, it is easy to see why the temperature should drop or rise when there’s adiabatic expansion or compression: momentum and, hence, kinetic energy, is being transferred from or to the piston as it moves out of or into the cylinder while the gas expands or is being compressed. But the expanding Universe has nothing to push against, does it? So why should its temperature drop? It’s only the volume that changes here, right? And so its entropy (S) should increase, in line with the ΔS = Sb – Sa = S(Vb, T) – S(Va, T) = N·k·ln[Vb/Va] formula, but not its temperature (T), which is nothing but the (average) kinetic energy of all of the particles it contains. Right? Maybe.

[By the way, in case you wonder why we believe the Universe is expanding, that’s because we see it expanding: an analysis of the redshifts and blueshifts of the light we get from other galaxies reveals the distance between galaxies is increasing. The expansion model is often referred to as the raisin bread model: one doesn’t need to be at the center of the Universe to see all others move away: each raisin in a rising loaf of raisin bread will see all other raisins moving away from it as the loaf expands.]

Why is the Universe cooling down?

This is a complicated question and, hence, the answer is also somewhat tricky. Let’s look at the entropy formula for an increasing volume of gas at constant temperature once more. Its entropy must change as follows:

ΔS = Sb – Sa = S(Vb, T) – S(Va, T) = N·k·ln[Vb/Va]

Now, the analysis usually assumes we have to add some heat to the gas as it expands in order to keep the temperature (T) and, hence, its internal energy (U) constant. Indeed, you may or may not remember that the internal energy is nothing but the product of the number of gas particles and their average kinetic energy, so we can write:

U = N·〈m·v2/2〉

In my previous post, I also showed that, for an ideal gas (i.e. no internal motion inside of the gas molecules), the following equality holds: PV = (2/3)U. For a non-ideal gas, we’ve got a similar formula, but with a different coefficient: PV = (γ−1)U. However, all these formulas were based on the assumption that ‘something’ is containing the gas, and that ‘something’ involves the external environment exerting a force on the gas, as illustrated below.

[Illustration: gas in a cylinder exerting pressure on a piston]

As Feynman writes: “Suppose there is nothing, a vacuum, on the outside of the piston. What of it? If the piston were left alone, and nobody held onto it, each time it got banged it would pick up a little momentum and it would gradually get pushed out of the box. So in order to keep it from being pushed out of the box, we have to hold it with a force F.” We know that the pressure is the force per unit area: P = F/A. So can we analyze the Universe using these formulas?

Maybe. The problem is that we’re analyzing limiting situations here, and that we need to re-examine our concepts when applying them to the Universe. 🙂

The first question, obviously, is about the density of the Universe. You know it’s close to a vacuum out there. Close. Yes. But how close? If you google a bit, you’ll find lots of hard-to-read articles on the density of the Universe. If there’s one thing you need to pick up from them, it is the notion of a critical density (denoted by ρc), which is like a watershed point between a Universe that keeps expanding forever and one that eventually contracts.

So what about it? According to Wikipedia, the critical density is estimated to be approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of (ordinary) matter in the Universe is believed to be 0.2 atoms per cubic metre. So that’s OK, isn’t it?

Well… Yes and no. We also have non-ordinary matter in the Universe, which is usually referred to as dark matter. The existence of dark matter, and its properties, are inferred from its gravitational effects on visible matter and radiation. In addition, we’ve got dark energy as well. I don’t know much about it, but it seems the dark energy and the dark matter bring the actual density (ρ) of the Universe much closer to the critical density. In fact, cosmologists seem to agree that ρ ≈ ρc and, according to a very recent scientific research mission involving an ESA space observatory doing very precise measurements of the Universe’s cosmic background radiation, the Universe should consist of 4.82 ± 0.05% ordinary matter, 25.8 ± 0.4% dark matter and 69 ± 1% dark energy. I’ll leave it to you to challenge that. 🙂

OK. Very low density. So that means very low pressure obviously. But what’s the temperature? I checked on the Physics Stack Exchange site, and the best answer is pretty nuanced: it depends on what you want to average. To be precise, the quoted answer is:

  1. If one averages by volume, then one is basically talking about the ‘temperature’ of the photons that reach us as cosmic background radiation—which is the temperature of the Universe that those popular science books refer to. In that case, we get an average temperature of 2.72 degrees Kelvin. So that’s pretty damn cold!
  2. If we average by observable mass, then our measurement is focused mainly on the temperature of all of the hydrogen gas (most matter in the Universe is hydrogen), which has a temperature of a few tens of Kelvin. Only one tenth of that mass is in stars, but their temperatures are far higher: in the range of 10⁴ to 10⁵ degrees. Averaging gives a range of 10³ to 10⁴ degrees Kelvin. So that’s pretty damn hot!
  3. Finally, including dark matter and dark energy, which are supposed to have even higher temperatures, we’d get an average by total mass in the range of 10⁷ Kelvin. That’s incredibly hot!

This is enlightening, especially the first point: we’re not measuring the average kinetic energy of matter particles here but some average energy of (heat) radiation per unit volume. This ‘cosmological’ definition of temperature is quite different from the ‘physical’ definition that we have been using and the observation that this ‘temperature’ must decrease is quite logical: if the energy of the Universe is a constant, but its volume becomes larger and larger as the Universe expands, then the energy per unit volume must obviously decrease.

So let’s go along with this definition of ‘temperature’ and look at an interesting study of how the Universe is supposed to have cooled down in the past. It basically measured the temperature of that cosmic background radiation, i.e. a remnant of the Big Bang, a few billion years ago, when it was a few degrees warmer than it is now. To be precise, it was measured as 5.08 ± 0.1 degrees Kelvin, and this decrease has nothing to do with our simple ideal gas laws but with the Big Bang theory, according to which the temperature of the cosmic background radiation should, indeed, drop smoothly as the Universe expands.

Going through the same logic but the other way around: if the Universe had the same energy at the time of the Big Bang, it was all focused in a very small volume. Now, very small volumes are associated with very small entropy according to that S(V, T) = N·k·[ln(V) + ln(T)/(γ–1)] + a formula, but the temperature was obviously not the same: all that energy had to go somewhere, and a lot of it was concentrated in the kinetic energy of its constituent particles (whatever they were) and, hence, a lot of it was in their temperature.

So it all makes sense now. It was good to check it out, as it reminds us that we should not try to analyze the Universe as a simple body of gas that’s not contained in anything, in order to then apply our equally simple ideal gas formulas. Our approach needs to be much more sophisticated. Cosmologists need to understand physics (and thoroughly so), but there’s a reason why it’s a separate discipline altogether. 🙂

First Principles of Thermodynamics

Thermodynamics is not an easy topic, but one can’t avoid it in physics. The main obstacle, probably, is that we very much like to think in terms of dependent and independent variables. While that approach is still valid in thermodynamics, it is more complicated, because it is often not quite clear what the dependent and independent variables are. We’ve got a lot of quantities in thermodynamics indeed: volume, pressure, internal energy, temperature and – soon to be defined – entropy, which are all some function of each other. Hence, the math involves partial derivatives and other subtleties. Let’s try to get through the basics.

Volume, pressure, temperature and the ideal gas law

We all know what a volume is. That’s an unambiguous quantity. Pressure and temperature are not so unambiguous. In fact, as far as I am concerned, the key to understanding thermodynamics is to be able to not only distinguish but also relate pressure and temperature.

The pressure of a gas or a liquid (P) is the force, per unit area, exerted by the atoms or molecules in that gas or liquid as they hit a surface, such as a piston, or the wall of the body that contains it. Hence, pressure is expressed in newton per square meter: 1 pascal (Pa) = 1 N/m2. It’s a small unit for daily use: the standard atmospheric pressure is 1 atm = 101,325 Pa = 1.01325×105 Pa = 1.01325 bar. We derived the formula for pressure in the previous post:

P = F/A = (2/3)·n·〈m·v2/2〉

This formula shows that the pressure depends on two variables:

  1. The density of the gas or the liquid (i.e. the number of particles per unit volume, so it’s two variables really: a number and a volume), and
  2. Their average kinetic energy.

Now, this average kinetic energy of the particles is nothing but the temperature (T), except that, because of historical reasons, we define temperature (expressed in degrees Kelvin) using a constant of proportionality—the Boltzmann constant k = kB. In addition, in order to get rid of that ugly 2/3 factor in our next formula, we’ll also throw in a 3/2 factor. Hence, we re-write the average kinetic energy 〈m·v2/2〉 as:

〈m·v2/2〉 = (3/2)·k·T
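For example, at room temperature (T ≈ 300 K), the average kinetic energy per particle is (3/2)·(1.38×10⁻²³ J/K)·(300 K) ≈ 6.2×10⁻²¹ J.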

Now we substitute that definition into the first equation (while also noting that, if n is the number of particles in a unit volume, we will have N = n·V atoms in a volume V) to get what we want: the so-called ideal gas law, which you should remember from your high-school days:

PV = NkT

The equation implies that, for a given number of particles (for some given substance, that is), and for some given temperature, pressure and volume are inversely proportional to one another: P = NkT/V. The curve representing that relationship between P and V has the same shape as the reciprocal function y = 1/x. To be precise, it has the same shape as a rectangular hyperbola with its center at the origin, i.e. the shape of a y = m/x curve, assuming non-negative values for x and y only. The illustration below shows that graph for m = 1, 3 and 0.3 respectively. We’ll need that graph later when looking at more complicated graphs depicting processes during which we will not keep the temperature constant—so that’s why I quickly throw it in here.

[Graph: P = m/V for m = 1, 3 and 0.3]
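As a quick sanity check of the law itself, here’s a minimal sketch in Python (the numbers are mine, chosen to correspond to roughly one mole at room temperature):

```python
k = 1.380649e-23                 # Boltzmann constant, in J/K

def pressure(N, T, V):
    # ideal gas law: P = N·k·T/V, in pascal
    return N * k * T / V

# ~one mole of particles at 298 K in 24.8 litres
print(pressure(6.022e23, 298.0, 0.0248))   # ~1.0e5 Pa, i.e. about 1 atm
```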

Of course, N·〈m·v2/2〉 – i.e. the number of atoms times the average kinetic energy of each – is the internal energy of the gas. Hence, multiplying P = (2/3)·n·〈m·v2/2〉 by V (and noting that N = n·V), we can also write the PV = NkT equation as:

PV = (2/3)·U

We should immediately note that we’re considering an ideal gas here, so we disregard any possibility of excitation or motion inside the atoms or molecules. It matters because, if we’re decreasing the volume and, hence, increasing the pressure, we’ll be doing work, and the energy needs to go somewhere. The equation above assumes it all goes into that 〈m·v2/2〉 factor and, hence, into the temperature. Hence, it is obvious that, if we were to allow for all kinds of rotational and vibratory motions inside the atoms or molecules as well, then the analysis would become more complicated. Having said that, in my previous post I showed that the complications are limited: we can account for all kinds of internal motion by inserting another coefficient—i.e. one other than 2/3. For example, Feynman calculates it as 2/7, rather than 2/3, for the diatomic oxygen molecule. That is why we usually see a much more general expression of the equation above. We write:

PV = (γ – 1)·U

The gamma (γ) in the equation above is the rather infamous specific heat ratio, and it’s equal to 5/3 for the ideal monatomic gas (5/3 – 1 = 2/3). I call γ infamous because its theoretical value does not match the experimental value for most gases. For example, while the theoretical value for O2 (i.e. the diatomic oxygen molecule) is 9/7 ≈ 1.286 (because 9/7 – 1 = 2/7), the experimentally measured value for O2 is 1.399. The difference can only be explained using quantum mechanics, which is obviously not the topic of this post, and so we won’t write much about γ. However, I need to say one or two things about it—which I’ll do by showing how we could possibly measure it. Let me reproduce the illustration from my previous post here.

[Illustration: gas pressure on a piston]

The pressure is the force per unit area (P = F/A and, hence, F = P·A), and compressing the gas amounts to applying a force over some (infinitesimal) distance dx. Hence, the (differential) work done is equal to dW = F·(−dx) = – P·A·dx = – P·dV, as A·dx = dV, obviously (the area A times the distance dx is the volume change). Now, all the work done goes into changing the internal energy U: there is no heat energy being added or removed here, and no other losses of energy. That’s why it’s referred to as a so-called adiabatic compression, from the Greek a (not), dia (through) and bainein (to go): no heat is going through. The cylinder is thermally insulated. Hence, we write:

dU = – P·dV

This is a very simple differential equation. Note the minus sign: the volume is going to decrease while we do work by compressing the piston, thereby increasing the internal energy. [If you are clever (which, of course, you are), you’ll immediately say that, with increasing internal energy, we should also have an increase in pressure and, hence, we shouldn’t treat P as some constant. You’re right, but so we’re doing a marginal analysis only here: we’ll deal with the full thing later. As mentioned above, the complete picture involves partial derivatives and other mathematical tricks.]

Taking the total differential of U = PV/(γ – 1), we also have another equation:

dU = (P·dV + V·dP)/(γ – 1)

Hence, we have – P·dV = (P·dV + V·dP)/(γ – 1) or, rearranging the terms:

γdV/V + dP/P = 0

Assuming that γ is constant (which is true in theory but not in practice—another reason why this γ is rather infamous), we can integrate this. It gives γlnV + lnP = lnC, with lnC the constant of integration. Now we take the exponential of both sides to get that other formulation of the gas law, which you also may or may not remember from your high-school days:

PVγ = C (a constant)

So here you have the answer to the question as to how we can measure γ: the pressure times the volume to the γth power must be some constant. To be precise, for monatomic gases the pressure times the volume to the 5/3 ≈ 1.67 power must be a constant. The formula works for gases like helium, krypton and argon. However, the issue is more complicated when looking at more complex molecules. You should also note the serious limitation in this analysis: we should not think of P as a constant in the dU = – P·dV equation! But I’ll come back to this. As for now, just take note of it and move on to the next topic.
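Here’s a small numeric sketch in Python of what PVγ = C implies (my numbers, chosen arbitrarily): compress a monatomic gas adiabatically to half its volume and see what happens to the pressure and the temperature.

```python
gamma = 5/3                     # specific heat ratio of a monatomic ideal gas

P1, V1 = 1.0e5, 1.0e-3          # initial state: 1 bar, 1 litre
V2 = V1 / 2                     # adiabatic compression to half the volume

C = P1 * V1**gamma              # the adiabatic invariant P·V^gamma
P2 = C / V2**gamma              # ~3.17e5 Pa: the pressure more than triples

# the temperature follows from PV = NkT: T2/T1 = (P2·V2)/(P1·V1)
print(P2, (P2 * V2) / (P1 * V1))   # temperature ratio ~1.59, i.e. T rises ~59%
```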

The Carnot heat engine

The definitions above should help us to understand and distinguish isothermal expansion and compression versus adiabatic expansion and compression which, in turn, should help us to understand what the Carnot cycle is all about. We’re looking at a so-called reversible engine here: there is no friction, and we also assume heat flows ‘frictionlessly’. The cycle is illustrated below: this so-called heat engine takes an amount of heat (Q1) from a high-temperature (T1) heat pad (often referred to as the furnace or the boiler or, more generally, the heat source) and uses it to make some body (i.e. a piston in a cylinder in Carnot’s example) do some work, while some other amount of heat (Q2) goes back into some cold sink (usually referred to as the condenser), which is nothing but a second pad at a much lower temperature (T2).

[Illustration: the four steps of the Carnot cycle]

The four steps involved are the following:

(1) Isothermal expansion: The gas absorbs heat and expands while keeping the same temperature (T1). As the number of gas atoms or molecules, and their temperature, stays the same, the heat does work, as the gas expands and pushes the piston upwards. So that’s isothermal expansion. The next step is different.

(2) Adiabatic expansion: The cylinder and piston are now removed from the heat pad, and the gas continues to expand, thereby doing even more work by pushing the piston further upwards. However, as the piston and cylinder are assumed to be thermally insulated, they neither gain nor lose heat. So it is the gas that loses internal energy: its temperature drops. So the gas cools. How much? It depends on the temperature of the condenser, i.e. T2, or – if there’s no condenser – the temperature of the surroundings. Whatever, the temperature cannot fall below T2.

(3) Isothermal compression: Now we (or the surroundings) will be doing work on the gas (as opposed to the gas doing work on its surroundings). The piston is being pushed back, and so the gas is slowly being compressed while, importantly, keeping it at the same temperature T2. Therefore, it delivers, through the heat pad, a heat amount Q2 to the second heat reservoir (i.e. the condenser).

(4) Adiabatic compression: We take the cylinder off the heat pad and continue to compress it, without letting any heat flow out this time around. Hence, the temperature must rise, back to T1. At that point, we can put it back on the first heat pad, and start the Carnot cycle all over again.

The graph below shows the relationship between P and V, and temperature (T), as we move through this cycle. For each cycle, we put in Q1 at temperature T1, and take out Q2 at temperature T2, and then the gas does some work, some net work, or useful work as it’s labeled below.

[Graph: the Carnot cycle in the pressure-volume plane, with isothermal segments (1) and (3) and adiabatic segments (2) and (4)]

Let’s go step by step once again:

  1. Isothermal expansion: Our engine takes in Q1 at temperature T1 from the heat source (isothermal expansion), as we move along line segment (1) from point a to point b on the graph above: the pressure drops, the volume increases, but the temperature stays the same.
  2. Adiabatic expansion: We take the cylinder off the heat pad and continue to let the gas expand. Hence, it continues to push the piston, and we move along line segment (2) from point b to c: the pressure further drops, and the volume further increases, but the temperature drops too—from T1 to T2 to be precise.
  3. Isothermal compression: Now we bring the cylinder in touch with the T2 reservoir (the condenser or cold sink) and we now compress the gas (so we do work on the gas, instead of letting the gas do work on its surroundings). As we compress the gas, we reduce the volume and increase the pressure, moving along line segment (3) from c to d, while the temperature of the gas stays at T2.
  4. Adiabatic compression: Finally, we take the cylinder off the cold sink, but we further compress the gas. As its volume further decreases, its pressure and, importantly, its temperature too rise, from T2 to T1 – so we move along line segment (4) from d to a – and then we put it back on the heat source to start another cycle.

We could also reverse the cycle. In that case, the steps would be the following:

  1. Our engine would first take in Q2 at temperature T2 (isothermal expansion). We move along line segment (3) here but in the opposite direction: from d to c.
  2. Then we would push the piston to compress the gas (so we’d be doing some work on the gas, rather than have the gas do work on its surroundings) so as to increase the temperature from T2 to T1 (adiabatic compression). On the graph, we go from c to b along line segment (2).
  3. Then we would bring the cylinder in touch with the T1 reservoir and further compress the gas so an amount of heat equal to Q1 is being delivered to the boiler at (the higher) temperature T1 (isothermal compression). So we move along line segment (1) from b to a.
  4. Finally, we would let the gas expand, adiabatically, so the temperature drops, back to T2 (line segment (4), from a to d), so we can put it back on the T2 reservoir, on which we will let it further expand to take in Q2 again.

It’s interesting to note that the only reason why we can get the machine to do some net work (or why, in the reverse case, we are able to transfer heat by putting some work into some machine) is that there is some mechanism here that allows the machine to take in and transfer heat through isothermal expansion and compression. If we would only have adiabatic expansion and compression, then we’d just be going back and forth between temperature T1 and T2 without getting any net work out of the engine. The shaded area in the graph above then collapses into a line. That is why actual steam engines are very complicated and involve valves and other engineering tricks, such as multiple expansion. Also note that we need two heat reservoirs: we can imagine isothermal expansion and compression using one heat reservoir only but then the engine would also not be doing any net work that is useful to us.

Let’s analyze the work that’s being done during such a Carnot cycle in somewhat more detail.

The work done when compressing a gas, or the work done by a gas as it expands, is an integral. I won’t explain in too much detail here but just remind you of that dW = F·(−dx) = – P·A·dx = – P·dV formula. From this, it’s easy to see that the integral is ∫ PdV.

An integral is an area under a curve: just substitute P for y = f(x) and V for x, and think of ∫ f(x)dx = ∫ y dx. So the area under each of the numbered curves is the work done by or on the gas in the corresponding step. Hence, the net work done (i.e. the so-called useful work) is the shaded area of the picture.
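As an illustration (my numbers, not Feynman’s), we can evaluate that ∫ PdV integral numerically for the isothermal expansion step, using P = NkT/V, and compare it with the analytical result NkT·ln(Vb/Va):

```python
import numpy as np

NkT = 2478.0                     # N·k·T in joules: roughly one mole at 298 K
Va, Vb = 1.0e-3, 2.0e-3          # isothermal expansion to double the volume

V = np.linspace(Va, Vb, 100_000)
P = NkT / V                      # isotherm: P = NkT/V

# trapezoidal rule for the integral of P dV, i.e. the area under the isotherm
W = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V)))

print(W, NkT * np.log(Vb / Va))  # both ~1718 J: numeric and analytic agree
```

That’s the work done by the gas in one step only, of course: the net work of the full cycle is what we get out along the expansion steps minus what we put in along the compression steps—i.e. that shaded area.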

So what is it exactly?

Well… Assuming there are no other losses, the work done should, of course, be equal to the difference in the heat that was put in, and the heat that was taken out, so we write:

W = Q1 – Q2

So that’s key to understanding it all: an efficient (Carnot) heat engine is one that converts all of the difference in heat energy (i.e. Q1 – Q2) into useful work or, conversely, one that converts all of the work done on the gas into heat energy.

Schematically, Carnot’s reversible heat engine is represented as follows:

[Diagram: schematic representation of Carnot’s reversible heat engine]

So what? You may think we’ve got it all now, and that there’s nothing to add to the topic. But that’s not the case. No. We will want to know more about the exact relationship between Q1, Q2, T1 and T2. Why? Because we want to be able to answer the very same questions Sadi Carnot wanted to answer, like whether or not the engine could be made more efficient by using another liquid or gas. Indeed, as a young military engineer, fascinated by the steam engines that had – by then – become quite common, Carnot wanted to find an unambiguous answer to two questions:

  1. How much work can we get out of a heat source? Can all heat be used to do useful work?
  2. Could we improve heat engines by replacing the steam with some other working fluid or gas?

These questions obviously make sense, especially in regard to the relatively limited efficiency of steam engines. Indeed, the actual efficiency of the best steam engines at the time was only 10 to 20 percent, and that’s under favorable conditions!

Sadi Carnot attempted to answer these questions in a memoir, published as a popular work in 1824 when he was only 28 years old. It was entitled Réflexions sur la Puissance Motrice du Feu (Reflections on the Motive Power of Fire). Let’s see if we can make sense of it using more modern and common language. [As for Carnot’s young age, like so many, he was not destined to live long: he was interned in a private asylum in 1832 suffering from ‘mania’ and ‘general delirium’, and died of cholera shortly after, aged 36.]

Carnot’s Theorem

You may think that both questions have easy answers. The first question is, obviously, related to the principle of conservation of energy. So… Well… If we’d be able to build a frictionless Carnot engine, including a ‘frictionless’ heat transfer mechanism, then, yes, we’d be able to convert all heat energy into useful work. But that’s an ideal only.

The second question is more difficult. The formal answer is the following: if an engine is reversible, then it makes no difference how it is designed. In other words, the amount of work that we’ll get out of a reversible Carnot heat engine as it absorbs a given amount of heat (Q1) at temperature T1 and delivers some other amount of heat (Q2) at some other temperature T2 does not depend on the design of the machine. More formally, Carnot’s Theorem can be expressed as follows:

  1. All reversible engines operating between the same heat reservoirs are equally efficient.
  2. No actual engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs.

Feynman sort of ‘proves’ this Theorem from what he refers to as Carnot’s postulate. However, I feel his ‘proof’ is not a real proof, because Carnot’s postulate is too closely related to the Theorem, and so I feel he’s basically proving something using the result of the proof! However, in order to be complete, I did reproduce Feynman’s ‘proof’ of Carnot’s Theorem in the post scriptum to this post.

So… That’s it. What’s left to do is to actually calculate the efficiency of an ideal reversible Carnot heat engine, so let’s do that now. In fact, the calculation below is much more of a real proof of Carnot’s Theorem and, hence, I’d recommend you go through it.

The efficiency of an ideal engine

Above, I said I would need the result that PVγ is equal to some constant. We use it in the following proof that, for an ideal engine, the following relationship holds, always, for any Q1, Q2, T1 and T2:

Q1/T1 = Q2/T2

[Feynman’s proof that Q1/T1 = Q2/T2 for the ideal reversible engine, which uses the PVγ = C result]

Now, we still don’t have the efficiency with this. The efficiency of an ideal engine is the ratio of the amount of work done and the amount of heat it takes in:

Efficiency = W/Q1

But W is equal to Q1 – Q2. Hence, re-writing the heat/temperature relationship above as Q2 = (T2/T1)·Q1, we get: W = Q1·(1 – T2/T1) = Q1·(T1 – T2)/T1. The grand result is:

Efficiency = W/Q1 = (T1 – T2)/T1

Let me help you to interpret this result by inserting a graph for T1 going from zero to 20 degrees, and for T2 set at 0.3, 1 and 4 degrees respectively.

[Graph: the efficiency (T1 – T2)/T1 as a function of T1, for T2 = 0.3, 1 and 4]

The graph makes it clear we need some kind of gauge so as to be able to actually compare the efficiency of ideal engines. I'll come back to that in my next post. However, in the meanwhile, please note that the result makes sense: T1 needs to be higher than T2 for the efficiency to be positive (of course, we can interpret negative values for the efficiency just as well, as they imply we need to do work on the engine, rather than the engine doing work for us), and the efficiency is always less than unity, getting closer to one as the working temperature of the engine goes up.
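If you want to re-create the graph, a minimal Python sketch will do; the T1 range and the T2 values are just the illustrative ones used above:

```python
import numpy as np
import matplotlib.pyplot as plt

T1 = np.linspace(0.01, 20, 500)   # hot-reservoir temperature (K), avoiding T1 = 0

for T2 in (0.3, 1.0, 4.0):        # the cold-reservoir temperatures used above
    plt.plot(T1, (T1 - T2) / T1, label=f"T2 = {T2} K")  # efficiency = (T1 - T2)/T1

plt.axhline(1.0, linestyle="--", color="gray")  # the efficiency never reaches unity
plt.xlabel("T1 (K)")
plt.ylabel("efficiency W/Q1")
plt.legend()
plt.show()
```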

Where does the power go?

So we have an engine that does useful work – so it works, literally – and we know where it gets its energy for that: it takes in more heat than it returns. But where is the work going? It is used to do something else, of course—like moving a car. Now how does that work, exactly? The gas exerts a force on the piston, thereby giving it an acceleration a = F/m, in accordance with Newton’s Law: F = m·a.

That’s all great. But then we need to re-compress the gas and, therefore, we need to (a) decelerate the piston, (b) reverse its direction and (c) push it back in. So that should cancel all of the work, shouldn’t it?

Well… No.

Let’s look at the Carnot cycle once more to show why. The illustrations below reproduce the basic steps in the cycle and the diagram relating pressure, volume and temperature for each of the four steps once more.

[Illustrations: the four steps of the Carnot cycle, and the P-V-T diagram for the cycle]

Above, I wrote that the only reason why we can get the machine to do some net work (or why, in the reverse case, we are able to transfer heat from a lower to a higher temperature by doing some net work on it) is that there is some mechanism that allows the machine to take in and give off heat through isothermal expansion and compression. If we only had adiabatic expansion and compression, we'd just be going back and forth between temperatures T1 and T2 without getting any net work out of the engine.

Now, that’s correct and incorrect at the same time. Just imagine a cylinder and a piston in equilibrium, i.e. the pressure on the inside and the outside of the piston are the same. Then we could push it in a bit but, as soon as we release, it would come back to its equilibrium situation. In fact, as we assume the piston can move in and out without any friction whatsoever, we’d probably have a transient response before the piston settles back into the steady state position (see below). Hence, we’d be moving back and forth on segment (2), or segment (4), in that P-V-T diagram above.

[Illustration: a damped oscillation]

The point is: segment (2) and segment (4) are not the same: points a and b, and points c and d, are marked by the same temperature (T1 and T2 respectively), but the pressure and volume are very different. Why? Because we had a step in-between step (2) and step (4): isothermal compression, which reduced the volume, i.e. step (3). Hence, the area underneath these two segments is different too. Indeed, you'll remember we can write dW = F·(−dx) = – P·A·dx = – P·dV and, hence, the work done (or put in) during each step of the cycle is equal to the integral ∫ PdV, i.e. the area under each of the line segments. So it's not like these two steps do not contribute to the net work that's being done through the cycle. They do. Likewise, step (1) and step (3) are not each other's mirror image: they too take place at different volume and pressure, but that's easier to see because they take place at different temperatures and involve different amounts of heat (Q1 and Q2 respectively).
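In fact, you can check the area bookkeeping numerically. The sketch below integrates ∫PdV over each of the four segments for one mole of a monatomic ideal gas (γ = 5/3); the temperatures and starting volumes are arbitrary illustrative values. The four areas do not cancel: they add up to W = Q1 − Q2, and the efficiency comes out as (T1 − T2)/T1.

```python
import numpy as np

R, gamma = 8.314, 5/3        # gas constant (J/(K*mol)); gamma for a monatomic ideal gas
T1, T2 = 400.0, 300.0        # reservoir temperatures (K), illustrative values
Va, Vb = 1.0, 2.0            # volumes (m^3) at the start/end of the isothermal expansion

# The adiabatic relation T*V^(gamma-1) = constant gives the two other corners:
Vc = Vb * (T1 / T2) ** (1 / (gamma - 1))   # end of the adiabatic expansion
Vd = Va * (T1 / T2) ** (1 / (gamma - 1))   # start of the adiabatic compression

def work(P, V_start, V_end):
    """Integrate P dV with the trapezoid rule: the signed area under the segment."""
    V = np.linspace(V_start, V_end, 100_000)
    p = P(V)
    return float(np.sum((p[1:] + p[:-1]) / 2 * np.diff(V)))

W = (work(lambda V: R * T1 / V, Va, Vb)                             # (1) isothermal expansion at T1
     + work(lambda V: R * T1 * Vb**(gamma - 1) / V**gamma, Vb, Vc)  # (2) adiabatic expansion
     + work(lambda V: R * T2 / V, Vc, Vd)                           # (3) isothermal compression at T2
     + work(lambda V: R * T2 * Vd**(gamma - 1) / V**gamma, Vd, Va)) # (4) adiabatic compression

Q1 = R * T1 * np.log(Vb / Va)   # heat taken in at T1
Q2 = R * T2 * np.log(Vc / Vd)   # heat given back at T2
print(W, Q1 - Q2)               # both ~576.2 J: the net work is Q1 - Q2
print(W / Q1, 1 - T2 / T1)      # both ~0.25: the efficiency is (T1 - T2)/T1
```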

But, again, what happens to the work? When everything is said and done, the piston does move up and down over the same distance in each cycle, and we know that work is force times distance. Hence, if the distance is the same… Yes. You're right: the piston must exert some net force on something or, to put it differently, the energy W = Q1 − Q2 must go somewhere. Now that's where the time variable comes in, which we've neglected so far.

Let’s assume we connect the piston to a flywheel, as illustrated below, there had better be some friction on it because, if not, the flywheel would spin faster and faster and, eventually, spin out of control and all would break down. Indeed, each cycle would transfer additional kinetic energy to the flywheel. When talking work and kinetic energy, one usually applies the following formula: W = Q1 and Q= Δ[mv2/2] = [mv2/2]after − [mv2/2]before. However, we’re talking rotational kinetic energy so we should use the rotational equivalent for mv2/2, which is Iω2/2, in which I is the moment of inertia of the mass about the center of rotation and ω is the angular velocity.

[Animation: a piston-and-cylinder engine driving a flywheel]

You get the point. As we're talking time now, we should also remind you of the concept of power. Power is the amount of work or energy being delivered, or consumed, per unit of time (i.e. per second). So we can write it as P(t) = dW/dt. For linear motion, P(t) can be written as the scalar product (i.e. the inner or dot product) of the force and velocity vectors, so P(t) = F·v. Again, when rotation is involved, we've got an equivalent formula: P(t) = τ·ω, in which τ represents the torque and ω is, once again, the angular velocity of the flywheel. Again, we'd better ensure some load is placed on the engine, otherwise it will spin out of control as v and/or ω get higher and higher and, hence, the power involved gets higher and higher too, until all breaks down.
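A quick numerical illustration of why we need that load: each cycle dumps the same net work W = Q1 − Q2 into the rotational kinetic energy I·ω2/2, so ω keeps growing without bound. The numbers below are arbitrary.

```python
I = 2.0        # moment of inertia of the flywheel (kg*m^2), arbitrary value
W = 50.0       # net work per cycle, W = Q1 - Q2 (J), arbitrary value
omega = 0.0    # angular velocity (rad/s)

for cycle in range(1, 9):
    # Each cycle adds W to the rotational kinetic energy I*omega^2/2:
    omega = (omega**2 + 2 * W / I) ** 0.5
    print(cycle, round(omega, 2))
# omega grows every cycle: without a load (a torque consuming power tau*omega),
# the flywheel would indeed spin out of control
```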

So… Now you know it all. 🙂

Post scriptum: The analysis of the Carnot cycle involves some subtleties which I left out. For example, you may wonder why the gas would actually expand isothermally in the first step of the Carnot cycle. Indeed, if it's at the same temperature T1 as the heat source, there should be no heat flow between the heat pad and the gas and, hence, no gas expansion, no? Well… No. 🙂 The gas particles pound on every wall, but only the piston can move. As the piston moves out, frictionless, inside of the cylinder, kinetic energy is being transferred from the gas particles to the piston and, hence, the gas temperature will want to drop—but then that temperature drop will immediately cause a heat transfer. That's why the description of a Carnot engine also postulates 'frictionless' heat transfer.

In fact, I note that Feynman himself struggles a bit to correctly describe what's going on here, as his description of the Carnot cycle suggests some active involvement is needed to make the piston move and ensure the temperature does not drop too fast. Indeed, he actually writes the following: "If we pull the piston out too fast, the temperature of the gas will fall too much below T1 and then the process will not quite be reversible." This sounds, and actually is, a bit nonsensical: no pulling is needed, as the gas does all of the work while pushing the piston and, while it does, its temperature tends to drop, so it will suck in heat in order to equalize its temperature with its surroundings (i.e. the heat source). The situation is, effectively, similar to that of a can with compressed air: we can let the air expand, and thereby we let it do some work. However, the air will not re-compress itself by itself. To re-compress the air, you'll need to apply the same force (or pressure, I should say) but in the reverse direction.

Finally, I promised I would reproduce Feynman's 'proof' of Carnot's Theorem. This 'proof' involves the following imaginary set-up (see below): we've got two engines, A and B. We assume A is an ideal reversible engine, while B may or may not be reversible. We don't care about its design. We just assume that both can do work by taking a certain amount of heat out of one reservoir and putting another amount of heat back into another reservoir. In fact, in this set-up, we assume both engines share a large enough reservoir so as to be able to transfer heat through it.

[Illustration: engines A and B operating between the same two heat reservoirs]

Engine A can take an amount of heat equal to Q1 at temperature T1 from the first reservoir, do an amount of work equal to W, and then deliver an amount of heat equal to Q2 = Q1 – W at temperature T2 to the second reservoir. However, because it's a reversible machine, it can also go the other way around, i.e. it can take Q2 = Q1 – W from the second reservoir, have the surroundings do an amount of work W on it, and then deliver Q1 = Q2 + W at temperature T1. We know that engine B can do the same, except that, because it's different, the work might be different as well, so we'll denote it by W'.

Now, let us suppose that the design of engine B is, somehow, more efficient, so we can get more work out of B for the same Q1 and the same temperatures T1 and T2. What we're saying, then, is that W' – W is some positive non-zero amount. If that were true, we could combine both machines. Indeed, we could have engine B take Q1 from the reservoir at temperature T1, do an amount of work equal to W on engine A so it delivers the same amount Q1 back to the reservoir at the same temperature T1, and we'd still be left with some positive amount of useful work W' – W. In fact, because the amount of heat in the first reservoir is restored (in each cycle, we take Q1 out but we also put the same amount of heat back in), we could include it as part of the machine: it would no longer need to be some huge external thing with unlimited heat capacity.

So it’s great! Each cycle gives us an amount of useful work equal to W’ – W. What about the energy conservation law? Well… engine A takes Q– W from the reservoir at temperature T2, and engine B gives Q– W’ back to it, so we’re taking a net amount of heat equal to (Q– W) – (Q– W’) = W’ – W out of the T2 reservoir. So that works out too! So we’ve got a combined machine converting thermal energy into useful work. It looks like a nice set-up, doesn’t it?

Yes. The problem is that, according to Feynman, it cannot work. Why not? Because it violates Carnot's postulate. The reasoning here is not easy. Let me do my best to present the argument correctly. What's the problem? The problem is that we've got an engine here that operates at one temperature only. Now, according to Carnot's postulate, it is not possible to extract the energy of heat at a single temperature with no other change in the system or the surroundings. Why not? Feynman gives the example of the can with compressed air. Imagine a can of compressed air indeed, and imagine we let the air expand, to drive a piston, for example. Now, we can imagine that our can with compressed air was in touch with a large heat reservoir at the same temperature, so its temperature doesn't drop. So we've done work with that can at a single temperature. However, this doesn't violate Carnot's postulate because we've also changed the system: the air has expanded. It would only violate Carnot's postulate if we'd find a way to put the air back in using exactly the same amount of work, so the process would be fully reversible. Now, Carnot's postulate says that's not possible at the same temperature. If the whole world is at the same temperature, then it is not possible to reversibly extract and convert heat energy into work.

I am not sure the example of the can with compressed air helps, but Feynman obviously thinks it should. He then phrases Carnot’s postulate as follows: “It is not possible to obtain useful work from a reservoir at a single temperature with no other changes.” He therefore claims that the combined machine as described above cannot exist. Ergo, W’ cannot be greater than W. Switching the role of A and B (so B becomes reversible too now), he concludes that W can also not be greater than W’. Hence, W and W’ have to be equal.

Hmm… I know that both philosophers and engineers have worked tirelessly to try to disprove Carnot’s postulate, and that they all failed. Hence, I don’t want to try to disprove Carnot’s postulate. In fact, I don’t doubt its truth at all. All that I am saying here is that I do have my doubts on the logical rigor of Feynman’s ‘proof’. It’s like… Well… It’s just a bit too tautological I’d say.

First Principles of Statistical Mechanics

Feynman seems to mix statistical mechanics and thermodynamics in his chapters on the topic. At first, I thought it was all rather messy but, as usual, after re-reading it a couple of times, it all makes sense. Let's have a look at the basics. We'll start by talking about gases first.

The ideal gas law

The pressure P is the force we have to apply to the piston containing the gas (see below)—per unit area, that is. So we write: P = F/A. Compressing the gas amounts to applying a force over some (infinitesimal) distance dx. This will change the internal energy (U) of the gas by an infinitesimal amount dU. Hence, we can write:

dU = F·(−dx) = – P·A·dx = – P·dV

[Illustration: gas in a cylinder, compressed by a piston]

However, before looking at the dynamics, let's first look at the stationary situation: let's assume the volume of the gas does not change, so we just have the gas atoms bouncing off the piston and, hence, exerting pressure on it. Every gas atom or particle that hits the piston delivers a momentum 2m·vx to it (the factor 2 is there because the atom bounces back: its momentum goes from m·vx to −m·vx, so the momentum transferred to the piston is 2m·vx). If there are N atoms in the volume V, then there are n = N/V in each unit volume. Of course, only the atoms within a distance vx·t are going to hit the piston within the time t and, hence, the number of atoms hitting the piston within that time is n·A·vx·t. Per unit time (i.e. per second), that's n·A·vx·t/t = n·A·vx. Hence, the total momentum that's being transferred per second is n·A·vx·2m·vx.

So far, so good. Indeed, we know that the force is equal to the amount of momentum that's being transferred per second. If you forget, just check the definitions and units: a force of 1 newton gives a mass of 1 kg an acceleration of 1 m/s per second, so 1 N = 1 kg·m/s2 = 1 kg·(m/s)/s. [The kg·(m/s) unit is the unit of momentum (mass times velocity), obviously. So there we are.] Hence,

P = F/A = n·A·vx·2mvx/A = 2nmvx2

Of course, we need to take an average 〈vx2〉 here, and we should drop the factor 2 because half of the atoms/particles move away from the piston, rather than towards it. In short, we get:

P = F/A = n·m·〈vx2〉

Now, the average velocities in the x-, y- and z-directions are all the same and uncorrelated, so 〈vx2〉 = 〈vy2〉 = 〈vz2〉 = [〈vx2〉 + 〈vy2〉 + 〈vz2〉]/3 = 〈v2〉/3. So we don't worry about any particular direction and simply write:

P = F/A = (2/3)·n·〈m·v2/2〉

[As Feynman notes, the math behind this is not difficult but, at the same time, it is also less straightforward than one might expect.] The last factor is, obviously, the kinetic energy of the (center-of-mass) motion of the atom or particle. Multiplying by V gives:

P·V = (2/3)·N·〈m·v2/2〉 = (2/3)·U

[If this confuses you, note that n = N/V, so V = N/n.] Now, that’s not a law you’ll remember from your high school days because… Well… This U – the internal energy of a gas – how do you measure that? We should link it to a measure we do know, and that’s temperature. The atoms or molecules in a gas will have an average kinetic energy which we could define as… Well… That average should have been defined as the temperature but, for historical reasons, the scale of what we know as the ‘temperature’ variable (T) is different. We need to apply a conversion factor, which is usually written as k. In fact, the conversion factor will be (3/2)·k. The 3/2 factor has been thrown in here to get rid of it later (in a few seconds, that is). To make a long story short,  we write the mean atomic or molecular energy as (3/2)·k·T = 3kT/2.

Now, you should also remember that we have three independent directions of motion. Hence, the kinetic energy associated with the component of motion in any of the three directions x, y or z is only kT/2 = (3kT/2)/3. [This seems trivial, but the idea of associating energy with some direction is actually quite fundamental.] Now, I said we'd get rid of that 3/2 factor. Indeed, applying the above-mentioned definition of temperature, we get:

P·V = (2/3)·N·〈m·v2/2〉 = (2/3)·N·3kT/2 = N·k·T

Now that is a formula you may or may not remember from your high school days! 🙂 The k factor is a constant of proportionality, which makes the units come out alright. The P·V = (2/3)·U formula tells us both sides of the equation must be expressed in joule (J), i.e. the dimension of energy. Now, N is a pure number, so our k in that N·k·T expression must be expressed in joule per degree (Kelvin). To be precise, k is (about) 1.38×10−23 joule for every degree Kelvin, so it’s a very tiny constant: it’s referred to as the Boltzmann constant and it’s usually denoted with a capital B as subscript (kB). As for how the product of pressure and volume can (also) yield something in joule, you can work that out for yourself, remembering the definition of a joule. […] Well… OK. Let me do it for you: [P]·[V] = (N/m2)·m3 = N·m = J. 🙂
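The kinetic picture is easy to check numerically. A minimal sketch: in thermal equilibrium, each velocity component is normally distributed with variance kT/m (that's the Maxwell–Boltzmann distribution, which I'm assuming here without proof), so sampling a million values of vx should reproduce P = n·m·〈vx2〉 = n·k·T. The gas parameters are illustrative.

```python
import numpy as np

k = 1.38e-23    # Boltzmann constant (J/K)
T = 300.0       # temperature (K)
m = 4.65e-26    # mass of a nitrogen molecule (kg), illustrative choice of gas
n = 2.5e25      # number density (molecules per m^3), roughly atmospheric

rng = np.random.default_rng(0)
# Each velocity component carries kT/2 of kinetic energy on average,
# i.e. m*<vx^2>/2 = kT/2, so the variance of vx is kT/m:
vx = rng.normal(0.0, np.sqrt(k * T / m), size=1_000_000)

P_kinetic = n * m * np.mean(vx**2)   # P = n*m*<vx^2>
P_ideal = n * k * T                  # P = n*k*T
print(P_kinetic, P_ideal)            # both ~1.0e5 Pa, up to sampling noise
```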

One immediate implication of the formula above is that gases at the same temperature and pressure, in the same volume, must consist of an equal number of atoms/molecules. You'll say: of course – because you remember that from your high school classes. However, thinking about it some more – and also in light of what we'll be learning a bit later about gases composed of more complex molecules (diatomic molecules, for example) – you'll have to admit it's not all that obvious a result.

Now, the number of atoms/molecules is usually measured in moles: one mole (or mol) is 6.02×1023 units (more or less, that is). To be somewhat more precise, its CODATA value is 6.02214129(27)×1023. That number is Avogadro's number (or constant), named after the Italian mathematical physicist Amedeo Avogadro, who stated the law above, which is referred to as Avogadro's Law: gases at the same temperature and pressure, in the same volume, must consist of an equal number of atoms/molecules. The mole itself is defined as the amount of any substance that contains as many elementary entities (e.g. atoms, molecules, ions or electrons) as there are atoms in 12 grams of pure carbon-12 (12C), the isotope of carbon with relative atomic mass of exactly 12 (also by definition). The mole is one of the base units of the International System of Units, and Avogadro's constant is usually denoted by NA or – as Feynman does – N0.

Now, if we reinterpret N as the number of moles, rather than the number of atoms, ions or molecules in a gas, we can re-write the same equation using the so-called universal or ideal gas constant, which is the product of two other constants, the Boltzmann constant (kB) and Avogadro's number (N0): R = kB·N0 = (1.38×10−23 J/K)×(6.02×1023/mol) = 8.314 J·K−1·mol−1. So we get:

P·V = N·R·T with N = no. of moles and R = kB·N0

As you can see, you need to watch out with all those different constants and notations in use.

The ideal gas law and internal motion

There’s an interesting and essential remark to be made in regard to complex molecules in a gas. A complex molecule is any molecule that is not mono-atomic. The simplest example of a complex molecule is a diatomic molecule, consisting of two atoms, which we’ll denote by A and B, with mass mand mrespectively. A and B are together but are able to oscillate or move relative to one another. In short, we also have some internal motion here, in addition to the motion of the whole thing, which will also has some kinetic energy. Hence, the kinetic energy of the gas consists of two parts:

  1. The kinetic energy of the so-called center-of-mass motion of the whole thing (i.e. the molecule), with total mass M = mA + mB, and
  2. The kinetic energy of the rotational and vibratory motions of the two atoms (A and B) inside the molecule.

We noted that for single atoms the mean value of the kinetic energy in one direction is kT/2 and that the total kinetic energy is 3kT/2, i.e. three times as much. So what do we have here? Well… The reasoning we followed for the single atoms is also valid for the diatomic molecule considered as a single body of total mass M and with some center-of-mass velocity vCM. Hence, we can write that

M·〈vCM2〉/2 = (3/2)·kT

So that’s the same, regardless of whether or not we’re considering the separate pieces or the whole thing. But let’s look at the separate pieces now. We need some vector analysis here, because A and B can move in separate directions, so we have vand v(note the boldface used for vectors). So what’s the relation between vand von the one hand, and vCM on the other? The analysis is somewhat tricky here but – assuming that the vand vB representations themselves are some idealization of the actual rotational and vibratory movements of the A and B atoms – we can write:

   vCM = (mA·vA + mB·vB)/M

Now we need to calculate 〈vCM2〉, of course, i.e. the average of the velocity squared. I'll refer you to Feynman for the details which, in the end, do lead to that M·〈vCM2〉/2 = (3/2)·kT equation. The whole calculation depends on the assumption that the relative velocity w = vA − vB is not any more likely to point in one direction than in another, so its average component in any direction is zero. Indeed, the interim result is that

M·〈vCM2〉/2 = (3/2)·kT + mA·mB·〈vA·vB〉/M

Hence, one needs to prove, somehow, that 〈vA·vB〉 is zero in order to get the result we want, which is what that assumption about the relative velocity w ensures. Now, we still don't have the kinetic energy of the A and B parts of the molecule. Because A and B can move in all three directions in space, their average kinetic energies 〈mA·vA2/2〉 and 〈mB·vB2/2〉 are also 3·k·T/2 each. Now, adding 3·k·T/2 and 3·k·T/2 yields 3·k·T. So now we have what we wanted:

  1. The kinetic energy of the center-of-mass motion of the diatomic molecule is (3/2)·k·T.
  2. The total kinetic energy of the diatomic molecule is the sum of the kinetic energies of A and B, so that's 3·k·T/2 + 3·k·T/2 = 3·k·T.
  3. The kinetic energy of the internal rotational and vibratory motions of the two atoms (A and B) inside the molecule is the difference, so that’s 3·k·T – (3/2)·k·T = (3/2)·k·T.

The more general result can be stated as follows:

  1. An r-atom molecule in a gas will have a kinetic energy of (3/2)·r·k·T, on average, of which
  2. (3/2)·k·T is kinetic energy of the center-of-mass motion of the entire molecule, and
  3. the rest, (3/2)·(r−1)·k·T, is internal vibrational and rotational kinetic energy.

Another way to state this is that, for an r-atom molecule, the average energy for each 'independent direction of motion', i.e. for each degree of freedom in the system, is kT/2, with the number of degrees of freedom being equal to 3r.

So in this particular case (the example of a diatomic molecule), we have 6 degrees of freedom (two times three), because we have three directions in space for each of the two atoms. A common error is to count the center-of-mass energy as something separate, rather than as part of the total energy. So always remember: the total kinetic energy is, quite simply, the sum of the kinetic energies of the separate atoms, which can then be split into (1) the kinetic energy associated with the center-of-mass motion and (2) the kinetic energy of the internal motions.
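A minimal sketch of that bookkeeping, with all energies expressed in units of kT (the function name is mine, just for illustration):

```python
def kinetic_energies(r):
    """Average kinetic energies of an r-atom molecule, in units of kT.

    Each of the 3*r degrees of freedom carries kT/2 on average.
    """
    total = 1.5 * r             # (3/2)*r*kT: sum over all the atoms
    center_of_mass = 1.5        # (3/2)*kT: the molecule moving as a whole
    internal = 1.5 * (r - 1)    # (3/2)*(r-1)*kT: rotations and vibrations
    return total, center_of_mass, internal

print(kinetic_energies(1))   # monatomic: (1.5, 1.5, 0.0)
print(kinetic_energies(2))   # diatomic:  (3.0, 1.5, 1.5)
```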

You see? It is not that difficult, is it? Let’s move on to the next topic.

The exponential atmosphere

Feynman uses this rather intriguing title to introduce Boltzmann’s Law, which is a law about densities. Let’s jot it down first:

n = n0·e−P.E/kT

In this equation, P.E. is the potential energy, k is our Boltzmann constant, and T is the temperature expressed in Kelvin. As for n0, that's just a constant which depends on the reference point (P.E. = 0). What are we calculating here? Densities, i.e. the relative or absolute number of molecules per unit volume. So we're looking for a formula for a variable like n = N/V.

Let’s do an example: the ‘exponential’ atmosphere. 🙂 Feynman models our ‘atmosphere’ as a huge column of gas (see below). To simplify the analysis, we make silly assumptions. For example, we assume the temperature is the same at all heights. That’s assured by the mechanism for equalizing temperature: if the molecules on top would have less energy than those at the bottom, the molecules at the bottom would shake the molecules at the top, via the rod and the balls. That’s a very theoretical set-up, of course, but let’s just go along with it. The idea is that – when thermal equilibrium is reached – the average kinetic energy of all molecules is the same. 

[Illustration: a tall column of gas, with a rod and balls equalizing the temperature at all heights]

So, if the temperature is the same, then what’s different? The pressure, of course, which is determined by the number of molecules per unit volume. The pressure must increase with lower altitude because it has to hold, so to speak, the weight of all the gas above it. Conversely, as we go higher, the atmosphere becomes more tenuous. So what’s the ‘law’ or formula here?

We’ll use our gas law: PV = NkT, which we can re-write as P = nkT with n = N/V, so n is the number of molecules per unit volume indeed. What’s stated here is that the pressure (P) and the number of molecules per unit volume (n) are directly proportional, with kT the proportionality factor. So we have gravity (the g force) and we can do a differential analysis: what happens when we go from h to h + dh? If m is the mass of each molecule, and if we assume we’re looking at unit areas (both at h as well as h + dh), then the gravitational force on each molecule will be mg, and ndh will be the total number of molecules in that ‘unit section’.

Now, we can write dP as dP = Ph+dh − Ph and, of course, we know that the difference in pressure must be sufficient to hold, so to speak, the molecules in that small unit section dh. So we can write the following:

dP = Ph+dh − Ph = − m·g·n·dh

Now, P is P = nkT and, hence, because we assume T to be constant, we can write the whole equation as dP = k·T·dn = −m·g·n·dh. From that, we get a differential equation:

dn/dh = −(m·g)/(k·T)·n

We all hate differential equations, of course, but this one has an easy solution: the equation basically states we should find a function for n whose derivative is proportional to itself. Of course, we know that the exponential function is such a function, so the solution of the differential equation is:

n = n0·e−mgh/kT

The n0 factor is the constant of integration and is, as mentioned above, the density at h = 0. Also note that m·g·h is, indeed, the potential energy of the molecules, increasing with height. So we do have a Boltzmann Law here, which we can write as n = n0·e−P.E./kT. Done! The illustration below was also taken from Feynman, and illustrates the 'exponential atmosphere' for two gases: oxygen and hydrogen. Because their masses are very different, their curves are different too: the graph shows how, in theory and in practice, lighter gases will dominate at great heights, because the exponentials for the heavier stuff have all died out.

[Graph: relative density versus height, for oxygen and hydrogen]
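Reproducing those two curves takes only a few lines. A sketch, assuming an isothermal column at T = 300 K (the idealization made above) and standard gravity:

```python
import numpy as np
import matplotlib.pyplot as plt

k, g, T = 1.38e-23, 9.81, 300.0      # Boltzmann constant, gravity, temperature
amu = 1.66e-27                       # atomic mass unit (kg)
h = np.linspace(0, 200_000, 1000)    # height (m)

for name, m in (("oxygen (O2)", 32 * amu), ("hydrogen (H2)", 2 * amu)):
    plt.plot(h / 1000, np.exp(-m * g * h / (k * T)), label=name)  # n/n0 = e^(-mgh/kT)

plt.xlabel("height (km)")
plt.ylabel("relative density n/n0")
plt.legend()
plt.show()
# The heavy gas dies out after a few tens of km; hydrogen dominates at great heights.
```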

Generalization

It is easy to show that we’ll have a Boltzmann Law in any situation where the force comes from a potential. In other words, we’ll have a Boltzmann Law in any situation for which the work done when taking a molecule from x to x + dx can be represented as potential energy. An example would be molecules that are electrically charged and attracted by some electric field or another charge that attracts them. In that case, we have an electric force of attraction which varies with position and acts on all molecules. So we could take two parallel planes in the gas, separated by a distance dx indeed, and we’d have a similar situation: the force on each atom, times the number of atoms in the unit section that’s delineated by dx, would have to be balanced by the pressure change, and we’d find a similar ‘law’: n = n0·e−P.E/kT.

Let’s quickly show it. The key variable is, once again, the density n: n = N/V. If we assume volume and temperature remain constant, then we can use our gas law to write the pressure as P = NkT/V = kT·n, which implies that any change in pressure must involve a density change. To be precise, dP = d(kT·n) = kT·dn. Now, we’ve got a force, and moving a molecule from x to x + dx involves work, which is the force times the distance, so the work is F·dx. The force can be anything, but we assume it’s conservative, like the electromagnetic force or gravity. Hence, the force field can be represented by a potential and the work done is equal to the change in potential energy. Hence, we can write: Fdx = –d(P.E.). Why the minus sign? If the force is doing work, we’re moving with the force and, hence, we’ll have a decrease in potential energy. Conversely, if the surroundings are doing work against the force, we’ll increase potential energy.

Now, we said the force must be balanced by the pressure. What does that mean, exactly? It’s the same analysis as the one we did for our ‘exponential’ atmosphere: we’ve got a small slice, given by dx, and the difference in pressure when going from x to x + dx must be sufficient to hold, so to speak, the molecules in that small unit section dx. [Note we assume we’re talking unit areas once again.] So, instead of writing dP = Ph+dh − Ph = − m·g·n·dh, we now write dP = F·n·dx. So, when it’s a gravitational field, the magnitude of the force involved is, obviously, F = m·g.

The minus sign business is confusing, as usual: it’s obvious that dP must be negative for positive dh, and vice versa, but here we are moving with the force, so no minus sign is needed. If you find that confusing, let me give you another way of getting that dP = F·n·dx expression. The pressure is, quite simply, the force times the number of particles, so P = F·N. Dividing both sides by V yields P/V = F·N/V = F·n. Therefore, P = F·n·V and, hence, dP must be equal to dP = d(F·n·V) = F·n·dV = F·n·dx. [Again, the assumption is that our unit of analysis is the unit area.] […] OK. I need to move on. Combining (1) dP = d(kT·n) = kT·dn, (2) dP = F·n·dx and (3) Fdx = –d(P.E.), we get:

kT·dn = –d(P.E.)·n ⇔ dn/d(P.E.) = −[1/(kT)]·n

That’s, once again, a differential equation that’s easy to solve. Indeed, we’ve repeated it ad nauseam: a function which has a derivative proportional to itself is an exponential. Hence, we have our grand equation:

n = n0·e−P.E/kT

If the whole thing troubles you, just remember that the key to solving problems like this is to clearly identify and separate the so-called ‘dependent’ and ‘independent’ variables. In this case, we want a formula for n and, hence, it’s potential energy that’s the ‘independent’ variable. That’s all. In case of doubt: just do the derivation: d(n0·e−P.E./kT)/d(P.E.) = −n0·e−P.E/kT·1/(kT) = −n/(kT).
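If you want a mechanical check of that derivative, a symbolic one-liner does it (using sympy; I treat kT as a single symbol here):

```python
import sympy as sp

E, kT, n0 = sp.symbols("E kT n0", positive=True)  # E stands for P.E.
n = n0 * sp.exp(-E / kT)                          # Boltzmann's Law

print(sp.diff(n, E))                        # -n0*exp(-E/kT)/kT, i.e. -n/(kT)
print(sp.simplify(sp.diff(n, E) + n / kT))  # 0: so n solves dn/d(P.E.) = -n/(kT)
```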

The graph looks the same, of course: the density is greatest at P.E. = 0. To be precise, the density there will be equal to n = n0·e0 = n0 (don't think it's infinity there!). And for higher (potential) energy values, we get lower density values. It's a simple but powerful graph, and so you should always remember it.

[Graph: density n as a function of potential energy]

Boltzmann’s Law is a very simple law but it can be applied to very complicated situations. Indeed, while the law is simple, the potential energy curve can be very complicated. So our Law can be applied to other situations than gravity or the electric force. The potential can combine a number of forces (as long as they’re all conservative), as shown in the graph below, which shows a situation in which molecules will attract each other at a distance r > r(and, hence, their potential energy decreases as they come closer together), but repel each other strongly as r becomes smaller than r(so potential energy increases, and very much so as we try to force them on top of each other).

[Graph: potential energy as a function of the distance r between two molecules]

Again, despite the complicated shape of the curve, the density function will – in essence – follow Boltzmann's Law: in a given volume, the density will be highest at the distance of minimum energy, and the density will be much less at other distances. So, yes, Boltzmann's Law is pretty powerful!
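To see that at work, take any potential-energy curve with a minimum at some distance r0 and exponentiate it. The sketch below uses a Lennard-Jones-type curve purely as an illustration (all units are arbitrary):

```python
import numpy as np

eps, r0 = 1.0, 1.0   # depth and location of the potential-energy minimum (arbitrary units)
kT = 0.3             # temperature, in the same energy units

r = np.linspace(0.8, 3.0, 500)
# A Lennard-Jones-type potential: strong repulsion below r0, gentle attraction above it:
PE = eps * ((r0 / r)**12 - 2 * (r0 / r)**6)

n_rel = np.exp(-PE / kT)     # Boltzmann's Law: n/n0 = e^(-P.E./kT)
print(r[np.argmax(n_rel)])   # ~1.0: the density peaks at the distance of minimum energy
```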