Pre-script (dated 26 June 2020): This post has become less relevant (even irrelevant, perhaps) because my views on all things quantum-mechanical have evolved significantly as a result of my progression towards a more complete realist (classical) interpretation of quantum physics. The text also got mutilated because of the removal of material by the dark force. I keep blog posts like these mainly because I want to keep track of where I came from. I might review them one day, but I currently don’t have the time or energy for it. 🙂
Original post:
We’ve worked on very complicated matters in the previous posts. In this post, I am going to tie up a few loose ends, not only about the question in the title but also other things. Let me first review a few concepts and constructs.
Temperature
We’ve talked a lot about temperature, but what it is really? You have an answer ready of course: it is the mean kinetic energy of the molecules of a gas or a substance. You’re right. To be precise, it is the mean kinetic energy of the center-of-mass (CM) motions of the gas molecules.
The added precision in the definition above already points out temperature is not just mean kinetic energy or, to put it differently, that the concept of mean kinetic energy itself is not so simple when we are not talking ideal gases. So let’s be precise indeed. First, let me jot down the formula for the mean kinetic energy of the CM motions of the gas particles:
(K.E.)CM = <(1/2)·m·v²>
Now let's recall the most fundamental law in the kinetic theory of gases, which states that the mean value of the kinetic energy for each independent direction of motion will be equal to kT/2. [I know you know the kinetic theory of gases itself is not accurate – we should be talking about molecular energy states – but let's go along with it.] Now, because we have only three independent directions of motion (the x, y and z directions) for ideal gas molecules (or atoms, I should say), the mean kinetic energy of the gas particles is kT/2 + kT/2 + kT/2 = 3kT/2.
What's going on is that we are actually defining temperature here: we basically say that the kinetic energy is linearly proportional to something that we define as the temperature. For practical reasons, that constant of proportionality is written as 3k/2, with k the Boltzmann constant. So we write our definition of temperature as:
(K.E.)CM = 3kT/2 ⇔ T = (3k/2)⁻¹·<(1/2)·m·v²> = [2/(3k)]·(K.E.)CM
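Just to make that definition tangible, here's a minimal Python sketch (my own illustration, not from any textbook) that computes T from a handful of made-up molecular speeds, using T = [2/(3k)]·(K.E.)CM. The mass and the speeds are assumptions chosen purely for illustration:

```python
import numpy as np

k = 1.380649e-23   # Boltzmann constant (J/K)
m = 4.65e-26       # mass of one N2 molecule (kg), roughly

# Hypothetical sample of center-of-mass speeds (m/s), for illustration only
speeds = np.array([400.0, 450.0, 480.0, 500.0, 520.0])

mean_ke = np.mean(0.5 * m * speeds**2)   # <(1/2)·m·v²>
T = (2.0 / (3.0 * k)) * mean_ke          # T = [2/(3k)]·(K.E.)CM

print(f"Mean translational kinetic energy: {mean_ke:.3e} J")
print(f"Corresponding temperature: {T:.1f} K")
```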
What happens with temperature when considering more complex gases, such as diatomic gases? Nothing. The temperature will still be proportional to the kinetic energy of the center-of-mass motions, but we should just note it’s the (K.E.)CM of the whole diatomic molecule, not of the individual atoms. The thing with more complicated arrangements is that, when adding or removing heat, we’ve got something else going on too: part of the energy will go into the rotational and vibratory motions inside the molecule, which is why we’ll need to add a lot more heat in order to achieve the same change in temperature or, vice versa, we’ll be able to extract a lot more heat out of the gas – as compared to an ideal gas, that is – for the same drop in temperature. [When talking molecular energy states, rather than independent directions of motions, we’re saying the same thing: energy does not only go in center-of-mass motion but somewhere else too.]
You know the ideal gas law is based on the reasoning above and the PV = NkT equation, which is always valid. For ideal gases, we write:
PV = NkT = Nk·(3k/2)⁻¹·<(1/2)·m·v²> = (2/3)·N·<(1/2)·m·v²> = (2/3)·U
For diatomic gases, we have to use another coefficient. According to our theory above, which distinguishes 6 independent directions of motion, the mean kinetic energy is twice 3kT/2 now, so that's 3kT, and, hence, we write T = (3k)⁻¹·<K.E.>, so that:
PV = NkT = Nk·(3k)⁻¹·<K.E.> = (1/3)·U
The two equations above will usually be written as PV = (γ–1)U, so γ, which is referred to as the specific heat ratio, would be equal to 5/3 ≈ 1.67 for ideal gases and 4/3 ≈ 1.33 for diatomic gases. [If you read my previous posts, you'll note I used 9/7 ≈ 1.286, but that's because Feynman suddenly decides to add the potential energy of the oscillator as another 'independent direction of motion'.]
Now, if we're not adding or removing heat to/from the gas, we can do a differential analysis yielding a differential equation (what did you expect?), which we can then integrate to find that P = C/V^γ relationship. You've surely seen it before. The C is some constant related to the energy and/or the state of the gas. It is actually interesting to plot the pressure-volume relationship using that P = C/V^γ relationship for various values of γ. The blue graph below assumes γ = 5/3 ≈ 1.667, which is the theoretical value for ideal gases (γ for helium or krypton comes pretty close to that), while the red graph gives the same relationship for γ = 4/3 ≈ 1.33, which is the theoretical value for diatomic gases (gases like bromine and iodine have a γ that's close to that).
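The original graph is no longer shown here, but it is easy enough to re-create it yourself. The little sketch below (a plot of P = C/V^γ for γ = 5/3 and γ = 4/3, with an arbitrary constant C) is just my own illustration of the figure described above:

```python
import numpy as np
import matplotlib.pyplot as plt

V = np.linspace(0.5, 3.0, 200)   # arbitrary volume range (arbitrary units)
C = 1.0                          # arbitrary constant, for illustration only

for gamma, color, label in [(5/3, "blue", "monatomic ideal gas (γ = 5/3)"),
                            (4/3, "red", "diatomic gas (γ = 4/3)")]:
    plt.plot(V, C / V**gamma, color=color, label=label)

plt.xlabel("Volume (arbitrary units)")
plt.ylabel("Pressure (arbitrary units)")
plt.title("Adiabatic lines: P = C/V^γ")
plt.legend()
plt.show()
```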
Let me repeat that this P = C/V^γ relationship is only valid for adiabatic expansion or compression: we do not add or remove heat and, hence, this P = C/V^γ function gives us the adiabatic segments only in a Carnot cycle (i.e. the adiabatic lines in a pressure-volume diagram). Now, it is interesting to observe that the slope of the adiabatic line for the ideal gas is more negative than the slope of the adiabatic line for the diatomic gas: the blue curve is the steeper one. That's logical: for the same volume change, we should get a bigger drop in pressure for the ideal gas, as compared to the diatomic gas, because… Well… You see the logic, don't you?
Let’s freewheel a bit and see what it implies for our Carnot cycle.
Carnot engines with ideal and non-ideal gas
We know that, if we could build an ideal frictionless gas engine (using a cylinder with a piston or whatever other device we can think of), its efficiency will be determined by the amount of work it can do over a so-called Carnot cycle, which consists of four steps: (1) isothermal expansion (the gas absorbs heat and the volume expands at constant temperature), (2) adiabatic expansion (the volume expands while the temperature drops), (3) isothermal compression (the volume decreases at constant temperature, so heat is taken out), and (4) adiabatic compression (the volume decreases further as we bring the gas back up to its initial temperature).
It is important to note that work is being done, by the gas on its surroundings, or by the surroundings on the gas, during each step of the cycle: work is being done by the gas as it expands, always, and work is done on the gas as it is being compressed, always.
You also know that there is only one Carnot efficiency, which is defined as the ratio of (a) the net amount of work we get out of our machine in one such cycle, which we'll denote by W, and (b) the amount of heat we have to put in to get it (Q1). We've also shown that W is equal to the difference between the heat we put in during the first step (isothermal expansion) and the heat that's taken out in the third step (isothermal compression): W = Q1 − Q2, which basically means that no energy is wasted: all of the heat that is not dumped at the lower temperature is converted into useful work, which is why it's an efficient engine! We also know that the formula for the efficiency is given by:
W/Q1 = (T1 − T2)/T1.
Where’s Q2 in this formula? It’s there, implicitly, as the efficiency of the engine depends on T2. In fact, that’s the crux of the matter: for efficient engines, we also have the same Q1/T1 = Q2/T2 ratio, which we define as the entropy S = Q1/T1 = Q2/T2. We’ll come back to this.
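As a quick numerical illustration of these relations (the reservoir temperatures and the heat input are made-up values), the little sketch below computes the Carnot efficiency, the work delivered, the heat rejected, and checks that Q1/T1 = Q2/T2:

```python
T1, T2 = 600.0, 300.0   # hot and cold reservoir temperatures (K), illustrative values
Q1 = 1000.0             # heat taken in at T1 (J), illustrative value

efficiency = (T1 - T2) / T1   # W/Q1 = (T1 − T2)/T1
W = efficiency * Q1           # net work per cycle
Q2 = Q1 - W                   # heat rejected at T2

print(f"Efficiency: {efficiency:.2f}")                 # 0.50
print(f"Work: {W:.0f} J, heat taken out: {Q2:.0f} J")  # 500 J and 500 J
print(f"Q1/T1 = {Q1/T1:.3f} J/K, Q2/T2 = {Q2/T2:.3f} J/K")  # equal, as they should be
```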
Now how does it work for non-ideal gases? Can we build an equally efficient engine with actual gases? This was, in fact, Carnot’s original question, and we haven’t really answered it in our previous posts, because we weren’t quite ready for it. Let’s consider the various elements to the answer:
- Because we defined temperature the way we defined it, it is obvious that the gas law PV = NkT still holds for diatomic gases, or whatever gas (such as steam, for example, the stuff that was used in Carnot's time). Hence, the isothermal lines in our pressure-volume diagrams don't change. For a given temperature T, we'll have the same green and red isothermal line in the diagram above.
- However, the adiabatic lines (i.e. the blue and purple lines in the diagram above) for the non-ideal gas are much flatter than the ones for an ideal gas. Now, just take that diagram and draw two flatter curves through points a and c (but not as flat as the isothermal segments, of course!). What you'll notice is that the area of useful work becomes much smaller.
What does that imply in terms of efficiency? Well… Also consider the areas under the graph which, as you know, represent the amount of work done during each step (and you really need to draw the graph here, otherwise you won’t be able to follow my argument):
- The phase of isothermal expansion will be associated with a smaller volume change, because our adiabatic line for the diatomic gas intersects the T = T1 isothermal line at a smaller value for V. Hence, less work is being done during that stage.
- However, more work will be done during adiabatic expansion, and the associated volume change is also larger.
- The isothermal compression phase is also associated with a smaller volume change, because our adiabatic line for the diatomic gas intersects the T = T2 isothermal line at a larger value for V.
- Finally, adiabatic compression requires more work to be done to get from T2 to T1 again, and the associated volume change is also larger.
The net result is clear from the graph: the net amount of work that's being done over the complete cycle is less for our non-ideal gas than for our engine working with ideal gas. But, again, the question is what this implies in terms of efficiency. What about the W/Q1 ratio?
The problem is that we cannot see how much heat is being put in (Q1) and how much heat is being taken out (Q2) from the graph. The only thing we know is that we have an engine working here between the same temperatures T1 and T2. Hence, if we use subscript A for the ideal gas engine and subscript B for the one working with ordinary (i.e. non-ideal) gas, and if both engines are to have the same efficiency W/Q1 = WB/Q1B = WA/Q1A, then it's obvious that,
if WA > WB, then Q1A > Q1B.
Is that consistent with what we wrote above for each of the four steps? It is. Heat energy is taken in during the first step only, as the gas expands isothermally. Now, because the temperature stays the same, there is no change in internal energy, and that includes no change in the internal vibrational and rotational energy. All of the heat energy is converted into work. Now, because the volume change is less, the work will be less and, hence, the heat that’s taken in must also be less. The same goes for the heat that’s being taken out during the third step, i.e. the isothermal compression stage: we’ve got a smaller volume change here and, hence, the surroundings of the gas do less work, and a lesser amount of heat energy is taken out.
So what's the grand conclusion? It's that we can build an ideal (i.e. reversible) engine working between the same temperatures T1 and T2, and with exactly the same efficiency W/Q1 = (T1 − T2)/T1, using non-ideal gas. Of course, there must be some difference! You're right: there is. While the ordinary gas machine will be as efficient as the ideal gas machine, it will not do the same amount of work. The key to understanding this is to remember that efficiency is a ratio, not some absolute number. Let's go through it. Because their efficiency is the same, we know that the W/Q1 ratios for both engines (A and B) are the same and, hence, we can write:
WA/WB = Q1A/Q1B
What about the entropy? The entropy S = Q1A/T1 = Q2A/T2 is not the same for both machines. For example, if the engine with ideal gas (A) does twice the work of the engine with ordinary gas (B), then Q1A will also be twice the amount Q1B. Indeed, SA = Q1A/T1 and SB = Q1B/T1. Hence, SA/SB = Q1A/Q1B. For example, if Q1A = 2·Q1B, then engine A's entropy will also be twice that of engine B. [Now that we're here, I should also note you'll have the same ratio for Q2A and Q2B. Indeed, we know that, for an efficient machine, we have: Q1/T1 = Q2/T2. Hence, Q1A/Q2A = T1/T2 and Q1B/Q2B = T1/T2. So Q1A/Q2A = Q1B/Q2B and, therefore, Q1A/Q1B = Q2A/Q2B.]
Why would the entropy be any different? We've got the same number of particles, the same volume and the same working temperatures, and so the only difference is that the particles in engine B are diatomic: the molecules consist of two atoms, rather than one only. An intuitive answer to the question as to why the entropy is different can be given by comparing it to another example, which I mentioned in a previous post, for which the entropy is also different for some non-obvious reason. Indeed, we can think of the two atoms as the equivalent of the white and black particles in the box (see my previous post on entropy): if we allow the white and black particles to mix in the same volume, rather than separate them in two compartments, then the entropy goes up (we calculated the increase as equal to k·ln2). Likewise, the entropy is much lower if all particles have to come in pairs, which is the case for a diatomic gas. Indeed, if they have to come in pairs, we significantly reduce the number of ways all particles can be arranged, or the 'disorder', so to say. As the entropy is a measure of that number (one can loosely define entropy as the logarithm of the number of ways), the entropy must go down as well. Can we illustrate that using the ΔS = Nk·ln(V2/V1) formula we introduced in our previous post, or our more general S(V, T) = Nk·[lnV + (1/(γ–1))·lnT] + a formula? Maybe. Let's give it a try.
We know that our diatomic molecules have an average kinetic energy equal to 3kT/2. Well… Sorry. I should be precise: that's the kinetic energy of their center-of-mass motion only! Now, let us suppose all our diatomic molecules split up. We know the average kinetic energy of the constituent parts will also equal 3kT/2. Indeed, if a gas molecule consists of two atoms (let's just call them atom A and B respectively), and if their combined mass is M = mA + mB, we know that:
<mA·vA²/2> = <mB·vB²/2> = <M·vCM²/2> = 3kT/2
Hence, if they split, we'll have twice the number of particles (2N) in the same volume with the same average kinetic energy: 3kT/2. Hence, we double the energy, but the average kinetic energy of the particles is the same, so the temperature should be the same. Hmm… You already feel something is wrong here… What about the energy that we associated with the internal motions within the molecule, i.e. the internal rotational and vibratory motions of the atoms, when they were still part of the same molecule? That was also equal to 3kT/2, wasn't it? It was. Yes. In case you forgot why, let me remind you: the total energy is the sum of the (average) kinetic energy of the two atoms, so that's <mA·vA²/2> + <mB·vB²/2> = 3kT/2 + 3kT/2 = 3kT. Now, that sum is also equal to the sum of the kinetic energy of the center-of-mass motion (which is 3kT/2) and the average kinetic energy of the rotational and vibratory motions. Hence, the average kinetic energy of the rotational and vibratory motions is 3kT – 3kT/2 = 3kT/2. It's all part of the same theorem: the average kinetic energy for each independent direction of motion is kT/2, and the number of degrees of freedom for a molecule consisting of r atoms is 3r, because each atom can move in three directions. For our diatomic molecule (r = 2), that gives six in total: the center-of-mass motion accounts for three, rotation involves another two independent motions (in three dimensions, we've got two axes of rotation only), and vibration another one. So the kinetic energy going into rotation is kT/2 + kT/2 = kT and for vibration it's kT/2. Adding it all up yields 3kT/2 + kT + kT/2 = 3kT.
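The bookkeeping in that last paragraph is easily summarized in a couple of lines of code. This just restates the counting above (kT/2 per independent motion of a diatomic molecule), nothing more:

```python
def mean_kinetic_energy(kT, dof_cm=3, dof_rotation=2, dof_vibration=1):
    """Mean kinetic energy per diatomic molecule: kT/2 per independent motion."""
    return 0.5 * kT * (dof_cm + dof_rotation + dof_vibration)

kT = 1.0  # work in units of kT
print(mean_kinetic_energy(kT))   # 3.0, i.e. 3kT in total
print(0.5 * kT * 3)              # 1.5, i.e. the 3kT/2 that sits in the CM motion
```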
The arithmetic is quite tricky. Indeed, you may think that, if we split the molecule, the rotational and vibratory energy has to go somewhere, and that it is only natural to assume that, when we split the diatomic molecule, the individual atoms have to absorb it. Hence, you may think that the temperature of the gas will be higher. How much higher? We had an average energy of 3kT per molecule in the diatomic situation, but now we have twice as many particles and, hence, the average energy per particle now is… Re-read what I wrote above: it's just 3kT/2 again. The energy that's associated with the center-of-mass motions and the rotational and vibratory motions is not something extra: it's part of the average kinetic energy of the atoms themselves. So no rise in temperature!
Having said that, our PV = NkT = (2/3)·U equation obviously doesn't make any sense anymore, as we've got twice as many particles now. While the temperature has not gone up, both the internal energy and the pressure have doubled, as we've got twice as many particles hitting the walls of our cylinder now. To restore the pressure to its ex ante value, we need to increase the volume. Remember, however, that pressure is force per unit surface area, not per unit volume: P = F/A. So we don't have to double the volume: we only have to double the surface area. Now, it all depends on the shape of the volume: are we thinking of a box or of some sphere? One thing we know though: if we calculate the volume using some radius r, which may also be the length of the edge of a cube, then we know the volume is going to be proportional to r³, while the surface area is going to be proportional to r². Hence, the surface area is proportional to the volume raised to the power 2/3: A ∝ r² = (r³)^(2/3) ∝ V^(2/3). So that's another 2/3 ratio which pops up here, as an exponent this time. It's not a coincidence, obviously.
Hmm… Interesting exercise. I'll let you work it out. I am sure you'll find some sensible value for the new volume, so you should be able to use that ΔS = Nk·ln(V2/V1) formula. However, you also need to think about the comparability of the two situations. We wanted to compare two equal volumes with an equal number of particles (diatomic molecules versus atoms), and so you'll need to move back in that direction to get a final answer to your question. Please do mail me the answer: I hope it makes sense. 🙂
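If you do want to play with the numbers, the ΔS = Nk·ln(V2/V1) formula itself is easy enough to put in a few lines. The values of N, V1 and V2 below are arbitrary, just for illustration:

```python
import numpy as np

k = 1.380649e-23    # Boltzmann constant (J/K)
N = 1.0e23          # arbitrary number of particles, for illustration
V1, V2 = 1.0, 2.0   # volume doubles (arbitrary units)

dS = N * k * np.log(V2 / V1)                 # ΔS = Nk·ln(V2/V1)
print(f"Total entropy change: {dS:.3e} J/K")
print(f"Per particle: {dS / (N * k):.4f} k   (ln 2 ≈ 0.6931, as expected)")
```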
Inefficient engines
When trying to understand efficient engines, it's interesting to also imagine how inefficient engines work, so as to see what they imply for our Carnot diagram. Suppose we've tried to build a Carnot engine in our kitchen, and we end up with one that is fairly frictionless, and fairly well isolated, so there is little heat loss during the heat transfer steps. We also have good contact surfaces, so we think the heat transfer processes will also be fairly frictionless, so to speak. So we did our calculations and built the engine using the best kitchen design and engineering practices. Now it's time for the test. Will it work?
What might happen is the following: while we've designed the engine to get some net amount of work out of it (in each and every cycle) that is given by the isothermal and adiabatic lines below, we may find that we're not able to keep the temperature constant. So we try to follow the green isothermal line alright, but we can't. We may also find that, when our heat counter tells us we've put Q1 in already, our piston hasn't moved out quite as far as we thought it would. So… Damn, we're never going to get to c. What's the reason? Some heat loss, because our isolation wasn't perfect, and friction.
So we’re likely to have followed an actual path that’s closer to the red arrow, which brings us near point d. So we’ve missed point c. We have no choice, however: the temperature has dropped to T2 and, hence, we need to start with the next step. Which one? The second? The third? It’s not quite clear, because our actual path on the pressure-volume diagram doesn’t follow any of our ideal isothermal or adiabatic lines. What to do? Let’s just take some heat out and start compressing to see what happens. If we’ve followed a path like the red arrow, we’re likely to be on something like the black arrow now. Indeed, if we’ve got a problem with friction or heat loss, we’ll continue to have that problem, and so the temperature will drop much faster than we think it should, and so we will not have the expected volume decrease. In fact, we’re not able to maintain the temperature even at T2. What horror! We can’t repeat our process and, hence, it is surely not reversible! All our work for nothing! We have to start all over and re-examine our design.
So our kitchen machine goes nowhere. But then how do actual engines work? The answer is: they put much more heat in, and they also take much more heat out. More importantly, they're also working much below the theoretical efficiency of an ideal engine, just like our kitchen machine. So that's why we've got the valves and all that in a steam engine. Also note that a car engine works entirely differently: it converts chemical energy into heat energy by burning fuel inside of the cylinder. Do we get any useful work out? Of course! My Lamborghini is fantastic. 🙂 Is it efficient? Nope. We're converting huge amounts of heat energy into a very limited amount of useful work, i.e. the type of energy we need to drive the wheels of my car, or a dynamo. Actual engines are a shadow only of ideal engines. So what's the Carnot cycle really? What does it mean in practice? Does the mathematical model have any relevance at all?
The Carnot cycle revisited
Let’s look at those differential equations once again. [Don’t be scared by the concept of a differential equation. I’ll come back to it. Just keep reading.] Let’s start with the ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV equation, which mathematical purists would probably prefer to write as:
dU = (∂U/∂T)dT + (∂U/∂V)dV
I find Feynman's use of the Δ symbol more appropriate, because, when dividing by dV or dT, we get dU/dV and dU/dT, which makes us think we're dealing with ordinary derivatives here, and we are not: it's partial derivatives that matter here. [I'll illustrate the usefulness of distinguishing the Δ and d symbol in a moment.] Feynman is even more explicit about that as he uses subscripts for the partial derivatives, so he writes the equation above as:
ΔU = (∂U/∂T)V·ΔT + (∂U/∂V)T·ΔV
However, partial derivatives always assume the other variables are kept constant and, hence, the subscript is not needed. It makes the notation rather cumbersome and, hence, I think it makes the analysis even more unreadable than it already is. In any case, it is obvious that we're looking at a situation in which everything changes: the volume, the temperature and the pressure. However, in the PV = NkT equation (which, I repeat, is valid for all gases, ideal or not, and in all situations, be it adiabatic or isothermal expansion or compression), we have only two independent variables for a given number of particles. We can choose: volume and temperature, or pressure and temperature, or volume and pressure. The third variable depends on the two other variables and, hence, is referred to as dependent. Now, one should not attach too much importance to the terms (dependent or independent does not mean more or less fundamental) but, when everything is said and done, we need to make a choice when approaching the problem. In physics, we usually look at the volume and the temperature as the 'independent' variables, but the partial derivative notation makes it clear it doesn't matter. With three variables, we'll have three partial derivatives: ∂P/∂T, ∂V/∂T and ∂P/∂V, and their reciprocals ∂T/∂P, ∂T/∂V and ∂V/∂P too, of course!
Having said that, when calculating the value of derived variables like energy, or entropy, or enthalpy (which is a state variable used in chemistry), we’ll use two out of the three mentioned variables only, because the third one is redundant, so to speak. So we’ll have some formula for the internal energy of a gas that depends on temperature and volume only, so we write:
U = U(V, T)
Now, in physics, one will often only have a so-called differential equation for a variable, i.e. something that is written in terms of differentials and derivatives, so we'll do that here too. But let me give some other example first. You may or may not remember that we had this differential equation telling us how the density (n = N/V) of the atmosphere changes with the height (h), as a function of the molecular mass (m), the temperature (T) and the density (n) itself: dn/dh = –(mg/kT)·n, with g the gravitational acceleration and k the Boltzmann constant. Now, it is not always easy to go from a differential equation to a proper formula, but this one can be solved rather easily. Indeed, a function which has a derivative that is proportional to itself (that's what this differential equation says really) is an exponential, and the solution was n = n0·e^(–mgh/kT), with n0 some other constant (the density at h = 0, which can be chosen anywhere). This explicit formula for n says that the density goes down exponentially with height, which is what we would expect.
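To make that 'derivative proportional to itself' argument a bit more concrete, here's a small sketch that integrates dn/dh = –(mg/kT)·n step by step (simple Euler steps) and compares the result with the analytical solution n = n0·e^(–mgh/kT). The numbers (nitrogen at room temperature, some density at sea level) are just illustrative assumptions:

```python
import numpy as np

k = 1.380649e-23   # Boltzmann constant (J/K)
g = 9.81           # gravitational acceleration (m/s²)
m = 4.65e-26       # mass of an N2 molecule (kg), roughly
T = 290.0          # temperature (K), assumed uniform with height
n0 = 2.5e25        # density at h = 0 (molecules/m³), illustrative value

heights = np.linspace(0.0, 10_000.0, 1001)   # 0 to 10 km
dh = heights[1] - heights[0]

# Euler integration of dn/dh = -(m·g/(k·T))·n
n_numeric = np.empty_like(heights)
n_numeric[0] = n0
for i in range(1, len(heights)):
    n_numeric[i] = n_numeric[i - 1] - dh * (m * g / (k * T)) * n_numeric[i - 1]

# Analytical solution: n = n0·exp(-m·g·h/(k·T))
n_exact = n0 * np.exp(-m * g * heights / (k * T))

print(f"Density at 10 km: numerical {n_numeric[-1]:.3e}, exact {n_exact[-1]:.3e}")
```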
Let's get back to our gas though. We also have differentials here, which are infinitesimally small changes in variables. As mentioned above, we prefer to write them with a Δ in front (rather than using the d symbol)—i.e. we write ΔT, ΔU, ΔV, or ΔQ. When we have two variables only, say x and y, we can use the d symbol itself and, hence, write Δx and Δy as dx and dy. However, it's still useful to distinguish, in order to write something like this:
Δy = (dy/dx)Δx
This says we can approximate the change in y at some point x when we know the derivative there. For a function in two variables, we can write the same, which is what we did at the very start of this analysis:
ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV
Note that the first term assumes constant volume (because of the ∂U/∂T derivative), while the second assumes constant temperature (because of the ∂U/∂V derivative).
Now, we also have a second equation for ΔU, expressed in differentials only (so no partial derivatives here):
ΔU = ΔQ – PΔV
This equation basically states that the internal energy of a gas can change because (a) some heat is added or removed or (b) some work is being done by or on the gas as its volume gets bigger or smaller. Note the minus sign in front of PΔV: it's there to ensure the signs come out alright. For example, when compressing the gas (so ΔV is negative) without adding or removing heat, ΔU = –PΔV will be positive. Conversely, when letting the gas expand (so ΔV is positive), ΔU = –PΔV will be negative, as it should be.
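A two-line numerical check of that sign convention (the pressure and the volume changes are made-up values) may help:

```python
def dU(dQ, P, dV):
    """First law for a gas: ΔU = ΔQ − P·ΔV."""
    return dQ - P * dV

P = 1.0e5  # pressure (Pa), illustrative value
print(dU(dQ=0.0, P=P, dV=-1.0e-4))  # compression, no heat exchanged: ΔU = +10 J
print(dU(dQ=0.0, P=P, dV=+1.0e-4))  # expansion, no heat exchanged: ΔU = -10 J
```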
What’s the relation between these two equations? Both are valid, but you should surely not think that, just because we have a ΔV in the second term of each equation, we can write –P = ∂U/∂V. No.
Having said that, let’s look at the first term of the ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV equation and analyze it using the ΔU = ΔQ – PΔV equation. We know (∂U/∂T)ΔT assumes we keep the volume constant, so ΔV = 0 and, hence, ΔU = ΔQ: all the heat goes into changing the internal energy; none goes into doing some work. Therefore, we can write:
(∂U/∂T)ΔT = (∂Q/∂T)ΔT = CVΔT
You already know that we’ve got a name for that CV function (remember: a derivative is a function too!): it’s the (specific) heat capacity of the gas (or whatever substance) at constant volume. For ideal gases, CV is some constant but, remember, we’re not limiting ourselves to analyzing ideal gases only here!
So we're done with the first term in that ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV expression. Now it's time for the second one: (∂U/∂V)ΔV. Now both ΔQ and –PΔV are relevant: the internal energy changes because (a) some heat is being added and (b) because the volume changes and, hence, some work is being done. You know what we need to find. It's that weird formula:
∂U/∂V = T(∂P/∂T) – P
But how do we get there? We can visualize what’s going on as a tiny Carnot cycle. So we think of gas as an ideal engine itself: we put some heat in (ΔQ) which gets an isothermal expansion at temperature T going, during a tiny little instant, doing a little bit of work. But then we stop adding heat and, hence, we’ll have some tiny little adiabatic expansion, during which the gas keeps going and also does a tiny amount of work as it pushes against the surrounding gas molecules. However, this step involves an infinitesimally small temperature drop—just a little bit, to T–ΔT. And then the surrounding gas will start pushing back and, hence, we’ve got some isothermal compression going, at temperature T–ΔT, which is then followed, once again, by adiabatic compression as the temperature goes back to T. The last two steps involve the surroundings of the tiny little volume of gas we’re looking at, doing work on the gas, instead of the other way around.
You’ll say this sounds very fishy. It does, but it is Feynman’s analysis, so who am I to doubt it? You’ll ask: where does the heat go, and where does the work go? Indeed, if ΔQ is Q1, what about Q2? Also, we can sort of imagine that the gas can sort of store the energy of the work that’s being done during step 1 and 2, to then give (most of it) back during step 3 and 4, but what about the net work that’s being done in this cycle, which is (see the diagram) equal to W = Q1 – Q2 = ΔPΔV? Where does that go? In some kind of flywheel or something? Obviously not! Hmm… Not sure. In any case, Q1 is infinitesimally small and, hence, nearing zero. Q2 is even smaller, so perhaps we should equate it to zero and just forget about it. As for the net work done by the cycle, perhaps this may just go into moving the gas molecules in the equally tiny volume of gas we’re looking at. Hence, perhaps there’s nothing left to be transferred to the surrounding gas. In short, perhaps we should look at ΔQ as the energy that’s needed to do just one cycle.
Well… No. If gas is an ideal engine, we’re talking elastic collisions and, hence, it’s not like a transient, like something that peters out. The energy has to go somewhere—and it will. The tiny little volume we’re looking at will come back to its original state, as it should, because we’re looking at (∂U/∂V)ΔV, which implies we’re doing an analysis at constant temperature, but the energy we put in has got to go somewhere: even if Q2 is zero, and all of ΔQ goes into work, it’s still energy that has to go somewhere!
It does go somewhere, of course! It goes into the internal energy of the gas we're looking at. It adds to the kinetic energy of the surrounding gas molecules. The thing is: when doing such infinitesimal analysis, it becomes difficult to imagine the physics behind it. All is blurred. Indeed, if we're talking a very small volume of gas, we're talking a limited number of particles also and, hence, these particles doing work on other gas particles, or these particles getting warmer or colder as they collide with the surrounding body of gas, it all becomes more or less the same. To put it simply: they're more likely to follow the direction of the red and black arrows in our diagram above. So, yes, the theoretical analysis is what it is: a mathematical idealization, and so we shouldn't think that's what's actually going on in a gas—even if Feynman tries to think of it in that way. So, yes, I agree (but to a very limited extent only) with some critics who say that Feynman's Lectures on thermodynamics aren't the best in the Volume: it may be simpler to just derive the equation we need from some Hamiltonian or whatever other mathematical relationship involving state variables like entropy or what have you. However, I do appreciate Feynman's attempt to connect the math with the physics, which is what he's doing here. If anything, it's sure got me thinking!
In any case, we need to get on with the analysis, so let’s wrap it up. We know the net amount of work that’s being done is equal to W = Q1(T1 – T2)/ T1 = ΔQ(ΔT/T). So that’s equal to ΔPΔV and, hence, we can write:
net work done by the gas = ΔPΔV = ΔQ(ΔT/T)
This implies ΔQ = T(ΔP/ΔT)ΔV. Now, looking at the diagram, we can appreciate ΔP/ΔT is equal to ∂P/∂T (ΔP is the change in pressure at constant volume). Hence, ΔQ = T(∂P/∂T)ΔV. Now we have to add the work, so that’s −PΔV. We get:
ΔU = ΔQ − PΔV = T(∂P/∂T)ΔV − PΔV ⇔ ΔU/ΔV = ∂U/∂V = T(∂P/∂T) − P
So… We are where we wanted to be. 🙂 It's a rather surprising analysis, though. Is the Q2 = 0 assumption essential? It is, as part of the analysis of the second term in the ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV expression, that is. Make no mistake: the W = Q1(T1−T2)/T1 = ΔQ(ΔT/T) formula is valid, always, and the Q2 is taken into account in it implicitly, because of the ΔT (which is defined using T2). However, if Q2 would not be zero, it would add to the internal energy without doing any work and, as such, it would be part of the first term in the ΔU = (∂U/∂T)ΔT + (∂U/∂V)ΔV expression: we'd have heat that is not changing the volume (and, hence, that is not doing any work) but that's just… Well… Heat that's just warming up the gas. 🙂
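One quick sanity check of that ∂U/∂V = T·(∂P/∂T) − P result: for an ideal gas, with P = NkT/V, the right-hand side should vanish, because the internal energy of an ideal gas does not depend on its volume. A small symbolic sketch (using sympy) confirms it:

```python
import sympy as sp

N, k, T, V = sp.symbols("N k T V", positive=True)

P = N * k * T / V              # ideal gas law: P = NkT/V
dU_dV = T * sp.diff(P, T) - P  # ∂U/∂V = T·(∂P/∂T) − P

print(sp.simplify(dU_dV))      # prints 0: U does not depend on V for an ideal gas
```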
To wrap everything up, let me jot down the whole thing now:
ΔU = (∂Q/∂T)·ΔT + [T(∂P/∂T) − P]·ΔV
Now, strangely enough, while we started off saying the second term in our ΔU expression assumed constant temperature (because of the ∂U/∂V derivative), we now re-write that second term using the ∂P/∂T derivative, which assumes constant volume! Now, our first term assumes constant volume too, and so we end up with an expression which assumes constant volume throughout! At the same time, we do have that ΔV factor of course, which implies we do not really assume volume is constant. On the contrary: the question we started off with was about how the internal energy changes with temperature and volume. Hence, the assumptions of constant temperature and volume only concern the partial derivatives that we are using to calculate that change!
Now, as for the model itself, let me repeat: when doing such analysis, it is very difficult to imagine the physics behind. All is blurred. When talking infinitesimally small volumes of gas, one cannot really distinguish between particles doing work on other gas particles, or these particles getting warmer or colder as they collide with them. It’s all the same. So, in reality, the actual paths are more like the red and black arrows in our diagram above. Even for larger volumes of gas, we’ve got a problem: one volume of gas is not thermally isolated from another and, hence, ideal gas is not some Carnot engine. A Carnot engine is this theoretical construct, which assumes we can nicely separate isothermal from adiabatic expansion/compression. In reality, we can’t. Even to get the isothermal expansion started, we need a temperature difference in order to get the energy flow going, which is why the assumption of frictionless heat transfer is so important. But what’s frictionless, and what’s an infinitesimal temperature difference? In the end, it’s a difference, right? So we already have some entropy increase: some heat (let’s say ΔQ) leaves the reservoir, which has temperature T, and enters the cylinder, which has to have a temperature that’s just-a-wee bit lower, let’s say T – ΔT. Hence, the entropy of the reservoir is reduced by ΔQ/T, and the entropy of the cylinder is increased by ΔQ/(T – ΔT). Hence, ΔS = ΔQ/(T–ΔT) – ΔQ/T = ΔQΔT/[T(T–ΔT)].
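Just to put some numbers on that last expression (the values of ΔQ, T and ΔT are made up), the net entropy increase is tiny but strictly positive:

```python
dQ, T, dT = 100.0, 400.0, 1.0   # heat transferred (J), reservoir temperature (K), temperature gap (K)
dS = dQ / (T - dT) - dQ / T     # ΔS = ΔQ/(T−ΔT) − ΔQ/T = ΔQ·ΔT/[T·(T−ΔT)]
print(f"Net entropy increase: {dS:.6f} J/K")   # about 0.000627 J/K, small but positive
```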
You'll say: sure, but then the temperature in the cylinder must go up to T and… No. Why? We don't have any information on the volume of the cylinder here. We should also involve the time derivatives, so we should start asking questions like: how much power goes into the cylinder, so what's the energy exchange per unit time here? The analysis will become endlessly more complicated of course – it may have played a role in Sadi Carnot suffering from "mania" and "general delirium" when he got older 🙂 – but you should arrive at the same conclusion: when everything is said and done, the model is what it is, and that's a mathematical model of some ideal engine – i.e. an idea of a device we don't find in Nature, and which we'll never be able to actually build – that shows how we could, potentially, get some energy out of a gas when using some device built to do just that. As mentioned above, thinking in terms of actual engines – like steam engines or, worse, combustion engines – does not help. Not at all really: just try to understand the Carnot cycle as it's being presented, and that's usually a mathematical presentation, which is why textbooks always remind the reader to not take the cylinder and piston thing too literally.
Let me note one more thing. Apart from the heat or energy loss question, there’s another unanswered question: from what source do we take the energy to move our cylinder from one heat reservoir to the other? We may imagine it all happens in space so there’s no gravity and all that (so we do not really have to spend some force just holding it) but even then: we have to move it from one place to another, and so that involves some acceleration and deceleration and, hence, some force times a distance. In short, the conclusion is all the same: the reversible Carnot cycle does not really exist and entropy increases, always.
With this, you should be able to solve some practical problems, which should help you to get the logic of it all. Let’s start with one.
Feynman’s rubber band engine
Feynman’s rubber band engine shows the model is quite general indeed, so it’s not limited to some Carnot engine only. A rubber band engine? Yes. When we heat a rubber band, it does not expand: it contracts, as shown below.
Why? It’s not very intuitive: heating a metal bar makes it longer, not shorter. It’s got to do with the fact that rubber consists of an enormous tangle of long chains of molecules: think of molecular spaghetti. But don’t worry about the details: just accept we could build an engine using the fact, as shown above. It’s not a very efficient machine (Feynman thinks he’d need heating lamps delivering 400 watts of power to lift a fly with it), but let’s apply our thermodynamic relations:
- When we heat the rubber band, it will pull itself in, thereby doing some work. We can write that amount of work as FΔL. So that's like –PΔV in our ΔU = ΔQ – PΔV equation, but note that F has a direction that's opposite to the direction of the pressure, so we don't have the minus sign.
- So here we can write: ΔU = ΔQ + FΔL.
So what? Well… We can re-write all of our gas equations by substituting –F for P and L for V, and they’ll apply! For example, when analyzing that infinitesimal Carnot cycle above, we found that ΔQ = T(∂P/∂T)ΔV, with ΔQ the heat that’s needed to change the volume by ΔV at constant temperature. So now we can use the above-mentioned substitution (P becomes –F and V becomes L) to calculate the heat that’s needed to change the length of the rubber band by ΔL at constant temperature: it is equal to ΔQ = –T(∂F/∂T)ΔL. The result may not be what we like (if we want the length to change significantly, we’re likely to need a lot of heat and, hence, we’re likely to end up melting the rubber), but it is what it is. 🙂
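To see the sign of that ΔQ = –T(∂F/∂T)ΔL result at work, we need some force law F(L, T). The linear, temperature-proportional force below is my own simplifying assumption (an idealized 'entropic spring'), not something taken from Feynman's discussion:

```python
import sympy as sp

c, T, L, L0, dL = sp.symbols("c T L L0 dL", positive=True)

F = c * T * (L - L0)          # assumed force law: proportional to T and to the stretch (illustrative only)
dQ = -T * sp.diff(F, T) * dL  # ΔQ = −T·(∂F/∂T)·ΔL

print(sp.simplify(dQ))        # −c·T·(L − L0)·dL: stretching at constant T means heat flows out
```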
As Feynman notes: the power of these thermodynamic equations is that we can apply them to situations that are very different from a gas. Another example is a reversible electric cell, like a rechargeable storage battery. Having said that, the assumption that these devices are all efficient is a rather theoretical one and, hence, that constrains the usefulness of our equations significantly. Still, engineers have to start somewhere, and the efficient Carnot cycle is the obvious point of departure. It is also a theoretical reference point to calculate actual efficiencies of actual engines, of course.
Post scriptum: Thermodynamic temperature
Let me quickly say something about an alternative definition of temperature: it's what Feynman refers to as the thermodynamic definition. It's equivalent to the kinetic definition really, but let me quickly show why. As we think about efficient engines, it would be good to have some reference temperature T2, so we can drop the subscripts and have our engines run between T and that reference temperature, which we'll simply call 'one degree' (1°). The amount of heat that an ideal engine will deliver at that reference temperature is denoted by QS, so we can drop the subscript for Q1 and denote it, quite simply, as Q.
We've defined entropy as S = Q/T, so Q = S·T and QS = S·1°. So what? Nothing much. Just note we can use the S = Q/T and QS = S·1° equations to define temperature in terms of entropy. This definition is referred to as the thermodynamic definition, and it is fully equivalent with our kinetic definition. It's just a different approach. Feynman makes kind of a big deal out of this but, frankly, there's nothing more to it.
Just note that the definition also works for our ideal engine with non-ideal gas: the amounts of heat involved for the engine with non-ideal gas, i.e. Q and QS, will be proportionally less than the Q and QS amounts for the reversible engine with ideal gas. [Remember that Q1A/Q1B = Q2A/Q2B equation, in case you'd have doubts.] Hence, we do not get some other thermodynamic temperature! All makes sense again, as it should! 🙂
Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:
https://en.support.wordpress.com/copyright-and-the-dmca/