Entropy, energy and enthalpy

Phew! I am quite happy I got through Feynman’s chapters on thermodynamics. Now is a good time to review the math behind it. We thoroughly understand the gas equation now:

PV = NkT = (γ–1)U

The gamma (γ) in this equation is the specific heat ratio: it’s 5/3 for ideal monatomic gases (so that’s about 1.667) and, theoretically, 4/3 ≈ 1.333 or 9/7 ≈ 1.286 for diatomic gases, depending on the degrees of freedom we associate with diatomic molecules. More complicated molecules have even more degrees of freedom and, hence, can absorb even more energy, so γ gets closer to one—according to the kinetic gas theory, that is. While we know that the kinetic gas theory is not quite accurate – an approach involving molecular energy states is a better match for reality – that doesn’t matter here. As for the term specific heat ratio itself, I’ll explain that later. [I promise. 🙂 You’ll see it’s quite logical.]

The point to note is that this body of gas (or whatever substance) stores an amount of energy U that is directly proportional to the temperature (T), and Nk/(γ–1) is the constant of proportionality. We can also phrase it the other way around: the temperature is directly proportional to the energy, with (γ–1)/Nk the constant of proportionality. It means temperature and energy are in a linear relationship. [Yes, direct proportionality implies linearity.] The graph below shows the T = [(γ–1)/Nk]·U relationship for three different values of γ, ranging from 5/3 (i.e. the maximum value, which characterizes monatomic noble gases such as helium, neon or krypton) to a value close to 1, which is characteristic of more complicated molecular arrangements indeed, such as heptane (γ = 1.06) or methyl butane (γ = 1.08). The illustration shows that, unlike a monatomic gas, more complicated molecular arrangements allow the gas to absorb a lot of (heat) energy with a relatively moderate rise in temperature only.

[Figure: the T = [(γ–1)/Nk]·U lines for three values of γ]

We’ll soon encounter another variable, enthalpy (H), which is also linearly related to energy: H = γU. From a math point of view, these linear relationships don’t mean all that much: they just show these variables – temperature, energy and enthalpy – are all directly related and, hence, can be defined in terms of each other.
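
In case you want to play with these linear relations yourself, here’s a minimal Python sketch of both of them (T = [(γ–1)/Nk]·U and H = γU). The particle count N is an illustrative value I picked, not something from the post:

```python
import numpy as np

k = 1.380649e-23                  # Boltzmann constant (J/K)
N = 1e23                          # particle count (illustrative, not from the post)
U = np.linspace(0.0, 500.0, 6)    # internal energy in joules

for gamma in (5/3, 9/7, 1.08):
    T = (gamma - 1) / (N * k) * U     # temperature: a straight line through the origin
    H = gamma * U                     # enthalpy: also a straight line through the origin
    print(f"gamma = {gamma:.3f}  T(U = 500 J) = {T[-1]:6.1f} K  H(U = 500 J) = {H[-1]:6.1f} J")
```

The smaller γ is, the flatter the T–U line: the gas absorbs a lot of energy for a moderate rise in temperature, which is exactly what the graph shows.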

We can invent other variables, like the Gibbs energy, or the Helmholtz energy. In contrast, entropy, while often being mentioned as just some other state function, is something different altogether. In fact, the term ‘state function’ causes a lot of confusion: pressure and volume are state variables too. The term is used to distinguish these variables from so-called process functions, notably heat and work. Process functions describe how we go from one equilibrium state to another, as opposed to the state variables, which describe the equilibrium situation itself. Don’t worry too much about the distinction—for now, that is.

Let’s look at non-linear stuff. The PV = NkT = (γ–1)U equation says that pressure (P) and volume (V) are inversely proportional to one another, and so that’s a non-linear relationship. [Yes, inverse proportionality is non-linear.] To help you visualize things, I inserted a simple volume-pressure diagram below, which shows how pressure and volume are related for three different values of U (or, what amounts to the same, three different values of T).

[Figure: volume-pressure diagram showing the P = NkT/V curves for three values of U (or T)]
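
If you want to reproduce such a diagram, here’s a minimal matplotlib sketch; the particle count and the three temperatures are illustrative values, not taken from the post:

```python
import numpy as np
import matplotlib.pyplot as plt

k = 1.380649e-23                     # Boltzmann constant (J/K)
N = 1e23                             # particle count (illustrative)
V = np.linspace(0.001, 0.05, 200)    # volume in m^3

for T in (200, 300, 400):            # three temperatures, i.e. three values of U (illustrative)
    plt.plot(V, N * k * T / V, label=f"T = {T} K")   # P = NkT/V: one hyperbola per isotherm

plt.xlabel("V (m^3)")
plt.ylabel("P (Pa)")
plt.legend()
plt.show()
```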

The curves are simple hyperbolas which have the x- and y-axis as horizontal and vertical asymptote respectively. If you’ve studied social sciences (like me!) – so if you know a tiny little bit of the ‘dismal science’, i.e. economics (like me!) – you’ll note they look like indifference curves. The x- and y-axis then represent the quantity of some good X and some good Y respectively, and the curves closer to the origin are associated with lower utility. How much X and Y we will buy then depends on (a) their prices and (b) our budget, which we represent by a linear budget line tangent to the highest curve we can reach. Depending on our budget, we end up a little bit happy, very happy or extremely happy. Hence, our budget determines our happiness. From a math point of view, however, we can also look at it the other way around: our happiness determines our budget. [Now that‘s a nice one, isn’t it? Think about it! 🙂 And, in the process, think about hyperbolas too: the y = 1/x function holds the key to understanding both infinity and nothingness. :-)]

U is a state function but, as mentioned above, we’ve got quite a few state variables in physics. Entropy, of course, denoted by S—and enthalpy too, denoted by H. Let me remind you of the basics of the entropy concept:

  1. The internal energy U changes because (a) we add or remove some heat from the system (ΔQ), (b) because some work is being done (by the gas on its surroundings or the other way around), or (c) because of both. Using the differential notation, we write: dU = dQ – dW, always. The (differential) work that’s being done is PdV. Hence, we have dU = dQ – PdV.
  2. When transferring heat to a system at a certain temperature, there’s a quantity we refer to as the entropy. Remember that illustration of Feynman’s in my post on entropy: we go from one point to another on the temperature-volume diagram, taking infinitesimally small steps along the curve, and, at each step, an infinitesimal amount of work dW is done, and an infinitesimal amount of entropy dS = dQ/T is being delivered.
  3. The total change in entropy, ΔS, is a line integral: ΔS = ∫dQ/T = ∫dS.

That’s somewhat tougher to understand than economics, and so that’s why it took me more time to come to terms with it. 🙂 Just go through Feynman’s Lecture on it, or through that post I referenced above. If you don’t want to do that, then just note that, while entropy is a very mysterious concept, it’s deceptively simple from a math point of view: ΔS = ΔQ/T, so the (infinitesimal) change in entropy is, quite simply, the ratio of (1) the (infinitesimal or incremental) amount of heat that is being added or removed as the system goes from one state to another through a reversible process and (2) the temperature at which the heat is being transferred. However, I am not writing this post to discuss entropy once again. I am writing it to give you an idea of the math behind the system.

So dS = dQ/T. Hence, we can re-write dU = dQ – dW as:

dU = TdS – PdV ⇔ dU + d(PV) = TdS – PdV + d(PV)

⇔ d(U + PV) = dH = TdS – PdV + PdV + VdP = TdS + VdP

The U + PV quantity on the left-hand side of the equation is the so-called enthalpy of the system, which I mentioned above. It’s denoted by H indeed, and it’s just another state variable, like energy: same-same but different, as they say in Asia. We encountered it in our previous post also, where we said that chemists prefer to analyze the behavior of substances using temperature and pressure as ‘independent variables’, rather than temperature and volume. Independent variables? What does that mean, exactly?

According to the PV = NkT equation, we only have two independent variables: if we assign some value to two variables, we’ve got a value for the third one. Indeed, remember that other equation we got when we took the total differential of U. We wrote U as U(V, T) and, taking the total differential, we got:

dU = (∂U/∂T)dT + (∂U/∂V)dV

We did not need to add a (∂U/∂P)dP term, because the pressure is determined by the volume and the temperature. We could also have written U = U(P, T) and, therefore, that dU = (∂U/∂T)dT + (∂U/∂P)dP. However, when working with temperature and pressure as the ‘independent’ variables, it’s easier to work with H rather than U. The point to note is that it’s all quite flexible really: we have two independent variables in the system only. The third one (and all of the other variables really, like energy or enthalpy or whatever) depend on the other two. In other words, from a math point of view, we only have two degrees of freedom in the system here: only two variables are actually free to vary. 🙂
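
To make that ‘only two independent variables’ point concrete, here’s a small sympy sketch using the ideal-gas relations we’ve been working with (PV = NkT and U = NkT/(γ–1)); the symbol P0 is just a hypothetical value for the pressure:

```python
import sympy as sp

N, k, T, V, gamma, P0 = sp.symbols('N k T V gamma P0', positive=True)

# The ideal-gas relations from the text: PV = NkT and U = NkT/(gamma - 1).
P = N * k * T / V                 # pressure is fixed once V and T are chosen
U = N * k * T / (gamma - 1)       # internal energy, written as U(V, T)

# Total differential dU = (dU/dT)dT + (dU/dV)dV:
print(sp.diff(U, T))              # N*k/(gamma - 1), i.e. C_V
print(sp.diff(U, V))              # 0: at constant T, changing V doesn't change U
print(sp.solve(sp.Eq(P, P0), V))  # [N*k*T/P0]: V is not free once P and T are given
```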

Let’s look at that dH = TdS + VdP equation. That’s a differential equation in which not temperature and pressure, but entropy (S) and pressure (P) are ‘independent’ variables, so we write:

dH(S, P) = TdS + VdP

Now, it is not very likely that any problem we need to solve will come with data on entropy and pressure. At our level of understanding, any problem that’s likely to come our way will probably come with data on more common variables, such as the heat, the pressure, the temperature, and/or the volume. So we could continue with the expression above but we don’t do that. It makes more sense to re-write the expression using the dQ = TdS identity once again, i.e. substituting dQ for TdS, so we get:

dH = dQ + VdP

That resembles our dU = dQ – PdV expression: it just substitutes V for –P. And, yes, you guessed it: it’s because the two expressions resemble each other that we like to work with H now. 🙂 Indeed, we’re talking the same system and the same infinitesimal changes and, therefore, we can use all the formulas we derived already by just substituting H for U, V for –P, and dP for dV. Huh? Yes. It’s a rather tricky substitution. If we switch V for –P (or vice versa) in a partial derivative involving T, we also need to include the minus sign. However, we do not need to include the minus sign when substituting dV and dP, and we also don’t need to change the sign of the partial derivatives of U and H when going from one expression to another! It’s a subtle and somewhat weird point, but a very important one! I’ll explain it in a moment. Just continue to read as for now. Let’s do the substitution using our rules:

dU = (∂Q/∂T)VdT + [T(∂P/∂T)V − P]dV becomes:

dH = (∂Q/∂T)PdT + (∂H/∂P)TdP = CPdT + [–T·(∂V/∂T)P + V]dP

Note that, just as we referred to (∂Q/∂T)V as the specific heat capacity of a substance at constant volume, which we denoted by CV, we now refer to (∂Q/∂T)P as the specific heat capacity at constant pressure, which we’ll denote, logically, as CP. Dropping the subscripts of the partial derivatives, we re-write the expression above as:

dH = CPdT + [–T·(∂V/∂T) + V]dP

So we’ve got what we wanted: we switched from an expression involving derivatives assuming constant volume to an expression involving derivatives assuming constant pressure. [In case you wondered what we wanted, this is it: we wanted an equation that helps us to solve another type of problem—another formula for a problem involving a different set of data.]

As mentioned above, it’s good to use subscripts with the partial derivatives to emphasize what changes and what is constant when calculating those partial derivatives but, strictly speaking, it’s not necessary, and you will usually not find the subscripts when googling other texts. For example, in the Wikipedia article on enthalpy, you’ll find the expression written as:

dH = CPdT + V(1–αT)dP with α = (1/V)(∂V/∂T)

Just write it all out and you’ll find it’s the same thing, exactly. It just introduces another coefficient, α, i.e. the coefficient of (cubic) thermal expansion. If you find this formula is easier to remember, then please use this one. It doesn’t matter.
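
Here’s a quick sympy check of that V(1–αT) factor for an ideal gas (V = NkT/P): the correction term vanishes, which is just another way of saying that, for an ideal gas, dH reduces to CPdT.

```python
import sympy as sp

N, k, T, P = sp.symbols('N k T P', positive=True)

V = N * k * T / P                              # ideal gas: V(T, P) = NkT/P
alpha = sp.simplify(sp.diff(V, T) / V)         # coefficient of (cubic) thermal expansion
correction = sp.simplify(V * (1 - alpha * T))  # the V(1 - alpha*T) factor in dH

print(alpha)        # 1/T
print(correction)   # 0: for an ideal gas, dH = C_P*dT
```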

Now, let’s explain that funny business with the minus signs in the substitution. I’ll do so by going back to that infinitesimal analysis of the reversible cycle in my previous post, in which we had that formula involving ΔQ for the work done by the gas during an infinitesimally small reversible cycle: ΔW = ΔVΔP = ΔQ·(ΔT/T). Now, we can either write that as:

  1. ΔQ = T·(ΔP/ΔT)·ΔV, i.e. dQ = T·(∂P/∂T)V·dV – which is what we did for our analysis of (∂U/∂V)T – or, alternatively, as
  2. ΔQ = T·(ΔV/ΔT)·ΔP, i.e. dQ = T·(∂V/∂T)P·dP, which is what we’ve got to do here, for our analysis of (∂H/∂P)T.

Hence, dH = dQ + VdP becomes dH = T·(∂V/∂T)P·dP + V·dP, and dividing all by dP gives us what we want to get: dH/dP = (∂H/∂P)T = T·(∂V/∂T)P + V.

[…] Well… NO! We don’t have the minus sign in front of T·(∂V/∂T)P, so we must have done something wrong or, else, that formula above is wrong.

The formula is right (it’s in Wikipedia, so it must be right :-)), so we are wrong. Indeed! The thing is: substituting dT, dV and dP for ΔT, ΔV and ΔP is somewhat tricky. The geometric analysis (illustrated below) makes sense but we need to watch the signs.

[Figure: the infinitesimally small reversible cycle on the pressure-volume diagram]

We’ve got a volume increase, a temperature drop and, hence, also a pressure drop over the cycle: the volume goes from V to V+ΔV (and then back to V, of course), while the pressure and the temperature go from P to P–ΔP and T to T–ΔT respectively (and then back to P and T, of course). Hence, we should write: ΔV = dV, –ΔT = dT, and –ΔP = dP. The ΔV/ΔT ratio in that second expression (ΔQ = T·(ΔV/ΔT)·ΔP) is just the volume change that goes with the temperature change at constant pressure, so it becomes the (∂V/∂T)P derivative without any sign trouble. The sign trouble sits with ΔP: the heat is absorbed while the pressure drops, so when we substitute dP for ΔP, we need to add a minus sign, and so we get dQ = –T·(∂V/∂T)P·dP. Now that gives us what we want: dH/dP = (∂H/∂P)T = –T·(∂V/∂T)P + V, and, therefore, we can, indeed, write what we wrote above:

dU = (∂Q/∂T)VdT + [T(∂P/∂T)V − P]dV becomes:

dH = (∂Q/∂T)PdT + [–T·(∂V/∂T)P + V]dP = CPdT + [–T·(∂V/∂T)P + V]dP

Now, in case you still wonder: what’s the use of all these different expressions stating the same thing? The answer is simple: it depends on the problem and what information we have. Indeed, note that all derivatives we use in our expression for dH assume constant pressure, so if we’ve got that kind of data, we’ll use the chemists’ representation of the system. If we’ve got data describing behavior at constant volume, we’ll need the physicists’ formulas, which are given in terms of derivatives assuming constant volume. It all looks complicated but, in the end, it’s the same thing: the PV = NkT equation gives us two ‘independent’ variables and one ‘dependent’ variable. Which one is which will determine our approach.

Now, we left one thing unexplained. Why do we refer to γ as the specific heat ratio? The answer is: it is the ratio of the specific heat capacities indeed, so we can write:

γ = CP/CV

However, it is important to note that that’s valid for ideal gases only. In that case, we know that the (∂U/∂V)T derivative in our dU = (∂U/∂T)VdT + (∂U/∂V)TdV expression is zero: we can change the volume, but if the temperature remains the same, the internal energy remains the same. Hence, dU = (∂U/∂T)VdT = CVdT, and dU/dT = CV. Likewise, the (∂H/∂P)T derivative in our dH = (∂H/∂T)PdT + (∂H/∂P)TdP expression is zero—for ideal gases, that is. Hence, dH = (∂H/∂T)PdT = CPdT, and dH/dT = CP. Hence,

CP/CV = (dH/dT)/(dU/dT) = dH/dU

Does that make sense? If dH/dU = γ, then H must be some linear function of U. More specifically, H must be some function H = γU + c, with c some constant (it’s the so-called constant of integration). Now, γ is supposed to be constant too, of course. That’s all perfectly fine: indeed, combining the definition of H (H = U + PV), and using the PV = (γ–1)U relation, we have H = U + (γ–1)U = γU (hence, c = 0). So, yes, dH/dU = γ, and γ = CP/CV.
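
A quick numerical check of that CP/CV = γ result, using U = NkT/(γ–1) and H = U + PV = U + NkT; the particle count is, once again, just an illustrative value:

```python
k = 1.380649e-23     # Boltzmann constant (J/K)
N = 1e23             # particle count (illustrative)

# For an ideal gas: C_V = dU/dT = Nk/(gamma - 1) and C_P = dH/dT = C_V + Nk.
for gamma in (5/3, 9/7):
    C_V = N * k / (gamma - 1)
    C_P = C_V + N * k
    print(f"gamma = {gamma:.3f}  ->  C_P/C_V = {C_P / C_V:.3f}")
```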

Note the qualifier, however: we’re assuming γ is constant (which does not imply the gas has to be ideal, so the interpretation is less restrictive than you might think it is). If γ is not a constant, it’s a different ballgame. […] So… Is γ actually constant? The illustration below shows γ is not constant for common diatomic gases like hydrogen or (somewhat less common) oxygen. It’s the same for other gases: when mentioning γ, we need to state the temperature at which we measured it too. 😦  However, the illustration also shows the assumption of γ being constant holds fairly well if temperature varies only slightly (like plus or minus 100° C), so that’s OK. 🙂

[Figure: the specific heat ratio γ of hydrogen and oxygen as a function of temperature]

I told you so: the kinetic gas theory is not quite accurate. An approach involving molecular energy states works much better (and is actually correct, as it’s consistent with quantum theory). But so we are where we are and I’ll save the quantum-theoretical approach for later. 🙂

So… What’s left? Well… If you’d google the Wikipedia article on enthalpy in order to check if I am not writing nonsense, you’ll find it gives γ as the ratio of H and U itself: γ = H/U. That’s not wrong, obviously (γ = H/U = γU/U = γ), but that formula doesn’t really explain why γ is referred to as the specific heat ratio, which is what I wanted to do here.

OK. We’ve covered a lot of ground, but let’s reflect some more. We did not say a lot about entropy, and/or the relation between energy and entropy. Too bad… The relationship between entropy and energy is obviously not so simple as between enthalpy and energy. Indeed, because of that easy H = γU relationship, enthalpy emerges as just some auxiliary variable: some temporary variable we need to calculate something. Entropy is, obviously, something different. Unlike enthalpy, entropy involves very complicated thinking, involving (ir)reversibility and all that. So it’s quite deep, I’d say – but I’ll write more about that later. I think this post has gone as far as it should. 🙂

Entropy

The two previous posts were quite substantial. Still, they were only the groundwork for what we really want to talk about: entropy, and the second law of thermodynamics, which you probably know as follows: all of the energy in the universe is constant, but its entropy is always increasing. But what is entropy really? And what’s the nature of this so-called law?

Let’s first answer the second question: Wikipedia notes that this law is more like an empirical finding that has been accepted as an axiom. That probably sums it up best. That description does not downplay its significance. In fact, Newton’s laws of motion, or Einstein’s relativity principle, have the same status: axioms in physics – as opposed to those in math – are grounded in reality. At the same time, and just like in math, one can often choose alternative sets of axioms. In other words, we can derive the law of ever-increasing entropy from other principles, notably the Carnot postulate, which basically says that, if the whole world were at the same temperature, it would be impossible to reversibly extract and convert heat energy into work. I talked about that in my previous post, and so I won’t go into more detail here. The bottom line is that we need two separate heat reservoirs at different temperatures, denoted by T1 and T2, to convert heat into useful work.

Let’s go to the first question: what is entropy, really?

Defining entropy

Feynman, the Great Teacher, defines entropy as part of his discussion on Carnot’s ideal reversible heat engine, so let’s have a look at it once more. Carnot’s ideal engine can do some work by taking an amount of heat equal to Q1 out of one heat reservoir and putting an amount of heat equal to Q2 into the other one (or, because it’s reversible, it can also go the other way around, i.e. it can absorb Q2 and put Q1 back in, provided we do the same amount of work W on the engine).

The work done by such a machine, or the work that has to be done on the machine when reversing the cycle, is equal to W = Q1 – Q2 (the equation shows the machine is as efficient as it can be, indeed: all of the difference in heat energy is converted into useful work, and vice versa—nothing gets ‘lost’ in frictional energy or whatever else!). Now, because it’s a reversible thermodynamic process, one can show that the following relationship must hold:

Q1/T1 = Q2/T2

This law is valid, always, for any reversible engine and/or for any reversible thermodynamic process, for any Q1, Q2, T1 and T2. [Ergo, it is not valid for non-reversible processes and/or non-reversible engines, i.e. real machines.] Hence, we can look at Q/T as some quantity that remains unchanged: an equal ‘amount’ of Q/T is absorbed and given back, and so there is no gain or loss of Q/T (again, if we’re talking reversible processes, of course). [I need to be precise here: there is no net gain or loss in the Q/T of the substance of the gas. The first reservoir obviously loses Q1/T1, and the second reservoir gains Q2/T2. The whole environment only remains unchanged if we’d reverse the cycle.]

In fact, this Q/T ratio is the entropy, which we’ll denote by S, so we write:

S = Q1/T1 = Q2/T2

What the above says, is basically the following: whenever the engine is reversible, this relationship between the heats must follow: if the engine absorbs Q1 at T1 and delivers Q2 at T2, then Q1 is to T1 as Q2 is to T2 and, therefore, we can define the entropy S as S = Q/T. That implies, obviously:

Q = S·T

From these relations (S = Q/T and Q = S·T), it is obvious that the unit of entropy has to be joule per degree (Kelvin), i.e. J/K. As such, it has the same dimension as the Boltzmann constant, k ≈ 1.38×10⁻²³ J/K, which we encountered in the ideal gas formula PV = NkT, and which relates the mean kinetic energy of atoms or molecules in an ideal gas to the temperature. However, while k is, quite simply, a constant of proportionality, S is obviously not a constant: its value depends on the system or, to continue with the mathematical model we’re using, the heat engine we’re looking at.
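
Here’s the bookkeeping for one reversible engine in a few lines of Python; the reservoir temperatures and the heat taken in are hypothetical values, just to get some numbers out:

```python
# Reversible (Carnot) engine bookkeeping: Q1/T1 = Q2/T2 and W = Q1 - Q2.
T1, T2 = 600.0, 300.0     # reservoir temperatures in K (illustrative)
Q1 = 1000.0               # heat taken in at T1, in joules (illustrative)

S  = Q1 / T1              # the Q/T that gets carried through, in J/K
Q2 = S * T2               # heat delivered at T2
W  = Q1 - Q2              # work done by the engine

print(S, Q2, W)           # about 1.67 J/K, 500.0 J, 500.0 J
```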

Still, this definition and relationships do not really answer the question: what is entropy, really? Let’s further explore the relationships so as to try to arrive at a better understanding.

I’ll continue to follow Feynman’s exposé here, so let me use his illustrations and arguments. The first argument revolves around the following set-up, involving three reversible engines (1, 2 and 3), and three temperatures (T1 > T2 > T3):

[Figure: three reversible engines running between the three temperatures T1, T2 and T3]

Engine 1 runs between T1 and T3 and delivers W13 by taking in Q1 at T1 and delivering Q3 at T3. Similarly, engines 2 and 3 deliver or absorb W32 and W12 respectively by running between T3 and T2 and between T1 and T2 respectively. Now, if we let engines 1 and 2 work in tandem, so engine 1 produces W13 and delivers Q3, which is then taken in by engine 2, using an amount of work W32, the net result is the same as what engine 3 is doing: it runs between T1 and T2 and delivers W12, so we can write:

W12 = W13 – W32

This result illustrates that there is only one Carnot efficiency, which Carnot’s Theorem expresses as follows:

  1. All reversible engines operating between the same heat reservoirs are equally efficient.
  2. No actual engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs.

Now, it’s obvious that it would be nice to have some kind of gauge – or a standard, let’s say – to describe the properties of ideal reversible engines in order to compare them. We can define a very simple gauge by assuming the lowest temperature in the diagram above (i.e. T3) is one degree. One degree what? Whatever: we’re working in Kelvin for the moment, but any absolute temperature scale will do. [An absolute temperature scale uses an absolute zero. The Kelvin scale does that, but the Rankine scale does so too: it just uses different units than the Kelvin scale (the Rankine units correspond to Fahrenheit units, while the Kelvin units correspond to Celsius degrees).] So what we do is to let our ideal engine run between some temperature T – at which it absorbs or delivers a certain heat Q – and 1° (one degree), at which it delivers or absorbs an amount of heat which we’ll denote by QS. [Of course, I note this assumes that ideal engines are able to run between one degree Kelvin (i.e. minus 272.15 degrees Celsius) and whatever other temperature. Real (man-made) engines are obviously likely to not have such tolerance. :-)] Then we can apply the Q = S·T equation and write:

QS = S·1°

Like that we solve the gauge problem when measuring the efficiency of ideal engines, for which the formula is W/Q1 = (T1 – T2)/T1. In my previous post, I illustrated that equation with some graphs for various values of T2 (e.g. T2 = 4, 1, or 0.3). [In case you wonder why these values are so small, it doesn’t matter: we can scale the units, or assume 1 unit corresponds to 100 degrees, for example.] These graphs all look the same but cross the x-axis (i.e. the T1-axis) at different points (at T1 = 4, 1, and 0.3 respectively, obviously). But let us now use our gauge and, hence, standardize the measurement by setting T2 to 1. Hence, the blue graph below is now the efficiency graph for our engine: it shows how the efficiency (W/Q1) depends on its working temperature T1 only. In fact, if we drop the subscripts, and define Q as the heat that’s taken in (or delivered when we reverse the machine), we can simply write:

 W/Q = (T – 1)/T = 1 – 1/T

[Figure: the efficiency W/Q = 1 – 1/T of the gauged engine as a function of its working temperature T]

Note the formula allows for negative values of the efficiency W/Q: if T would be lower than one degree, we’d have to put work in and, hence, our ideal engine would have negative efficiency indeed. Hence, the formula is consistent over the whole temperature domain T > 0. Also note that, coincidentally, the three-engine set-up and the W/Q formula also illustrate the scalability of our theoretical reversible heat engines: we can think of one machine substituting for two or three others, or any combination really: we can have several machines of equal efficiency working in parallel, thereby doubling, tripling, quadrupling, etcetera, the output as well as the heat that’s being taken in. Indeed, W/Q = 2W/2Q = 3W/3Q = 4W/4Q and so on.
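
A quick check of the W/Q = 1 – 1/T formula at a few arbitrary working temperatures, including one below one degree:

```python
# Efficiency of the 'gauged' reversible engine running between T and one degree:
# W/Q = (T - 1)/T = 1 - 1/T. Below one degree the efficiency goes negative,
# i.e. we have to put work in.
for T in (0.5, 1.0, 2.0, 4.0, 300.0):
    print(f"T = {T:6.1f}  W/Q = {1 - 1/T:+.3f}")
```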

Also, looking at that three-engine model once again, we can set T3 to one degree and re-state the result in terms of our standard temperature:

If one engine, absorbing heat Q1 at T1, delivers the heat QS at one degree, and if another engine, absorbing heat Q2 at T2, will also deliver the same heat QS at one degree, then it follows that an engine which absorbs heat Q1 at the temperature T1 will deliver heat Q2 if it runs between T1 and T2.

That’s just stating what we showed, but it’s an important result. All these machines are equivalent, so to say, and, as Feynman notes, all we really have to do is to find how much heat (Q) we need to put in at the temperature T in order to deliver a certain amount of heat QS at the unit temperature (i.e. one degree). If we can do that, then we have everything. So let’s go for it.

Measuring entropy

We already mentioned that we can look at the entropy S = Q/T as some quantity that remains unchanged as long as we’re talking reversible thermodynamic processes. Indeed, as much Q/T is absorbed as is given back in a reversible cycle or, in other words: there is no net change in entropy in a reversible cycle. But what does it mean really?

Well… Feynman defines the entropy of a system, or a substance really (think of that body of gas in the cylinder of our ideal gas engine), as a function of its condition, so it is a quantity which is similar to pressure (which is a function of the number of particles, the volume and the temperature: P = NkT/V), or internal energy (which is a function of pressure and volume (U = (3/2)·PV) or, substituting the pressure function, of the number of particles and the temperature: U = (3/2)·NkT). That doesn’t bring much clarification, however. What does it mean? We need to go through the full argument and the illustrations here.

Suppose we have a body of gas, i.e. our substance, at some volume Va and some temperature Ta (i.e. condition a), and we bring it into some other condition (b), so it now has volume Vb and temperature Tb, as shown below. [Don’t worry about the ΔS = Sb – Sa and ΔS = Sa – Sb formulas for now. I’ll explain them in a minute.]

[Figure: the gas goes from condition a (Va, Ta) to condition b (Vb, Tb) on the temperature-volume diagram]

You may think that a and b are, once again, steps in the reversible cycle of a Carnot engine, but no! What we’re doing here is something different altogether: we’ve got the same body of gas at point b but in a completely different condition: indeed, both the volume and the temperature (and, hence, the pressure) of the gas are different in b as compared to a. What we do assume, however, is that the gas went from condition a to condition b through a completely reversible process. Cycle, process? What’s the difference? What do we mean by that?

As Feynman notes, we can think of going from a to b through a series of steps, during which tiny reversible heat engines take out an infinitesimal amount of heat dQ in tiny little reservoirs at the temperature corresponding to that point on the path. [Of course, depending on the path, we may have to add heat (and, hence, do work rather than getting work out). However, in this case, we see a temperature rise but also an expansion of volume, the net result of which is that the substance actually does some (net) work from a to b, rather than us having to put (net) work in.] So the process consists, in principle, of a (potentially infinite) number of tiny little cycles. The thinking is illustrated below. 

[Figure: the same path from a to b, now with tiny reversible heat engines and little reservoirs drawn along it]

Don’t panic. It’s one of the most beautiful illustrations in all of Feynman’s Lectures, IMHO. Just analyze it. We’ve got the same horizontal and vertical axis here, showing volume and temperature respectively, and the same points a and b showing the condition of the gas before and after and, importantly, also the same path from condition a to condition b, as in the previous illustration. It takes a pedagogic genius like Feynman to think of this: he just draws all those tiny little reservoirs and tiny engines on a mathematical graph to illustrate what’s going on: at each step, an infinitesimal amount of work dW is done, and an infinitesimal amount of entropy dS = dQ/T is being delivered at the unit temperature.

As mentioned, depending on the path, some steps may involve doing some work on those tiny engines, rather than getting work out of them, but that doesn’t change the analysis. Now, we can write the total entropy that is taken out of the substance (and delivered to the little reservoirs, as Feynman puts it), as we go from condition a to b, as:

ΔS = Sb – Sa

Now, in light of all the above, it’s easy to see that this ΔS can be calculated using the following integral:

ΔS = Sb – Sa = ∫dQ/T, with the integral taken along our reversible path from a to b

So we have a function S here which depends on the ‘condition’ indeed—i.e. the volume and the temperature (and, hence, the pressure) of the substance. Now, you may or may not notice that it’s a function that is similar to our internal energy formula (i.e. the formula for U). At the same time, it’s not internal energy. It’s something different. We write:

S = S(V, T)

So now we can rewrite our integral formula for change in S as we go from a to b as:

ΔS = S(Vb, Tb) – S(Va, Ta) = ∫dQ/T, again with the integral taken along any reversible path from a to b

Now, a similar argument as the one we used when discussing Carnot’s postulate (all ideal reversible engines operating between two temperatures are essentially equivalent) can be used to demonstrate that the change in entropy does not depend on the path: only the start and end point (i.e. point a and b) matter. In fact, the whole discussion is very similar to the discussion of potential energy when conservative force fields are involved (e.g. gravity or electromagnetism): the difference between the values of our potential energy function at two different points was well defined. The paths we used to go from one point to another didn’t matter. The only thing we had to agree on was some reference point, i.e. a zero point. For potential energy, that zero point is usually infinity. In other words, we defined zero potential energy as the potential energy of a charge or a mass at an infinite distance away from the charge or mass that’s causing the field.
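
Here’s a small numerical check of that path-independence for an ideal gas: we go from a to b along two different reversible paths, add up the dQ/T contributions, and compare with the closed-form result. The particle count and the two conditions are illustrative values, not something from the post:

```python
import numpy as np

k = 1.380649e-23          # Boltzmann constant (J/K)
N = 1e23                  # particle count (illustrative)
gamma = 5/3               # monatomic ideal gas
C_V = N * k / (gamma - 1)

Va, Ta = 1.0e-3, 300.0    # condition a (illustrative)
Vb, Tb = 4.0e-3, 500.0    # condition b (illustrative)

def integral(f, x1, x2, n=200_000):
    """Midpoint-rule integration of f between x1 and x2."""
    x = np.linspace(x1, x2, n)
    mid = 0.5 * (x[1:] + x[:-1])
    return float(np.sum(f(mid) * np.diff(x)))

def isothermal(V1, V2, T):
    Q  = integral(lambda V: N * k * T / V, V1, V2)   # dQ = P dV at constant T
    dS = integral(lambda V: N * k / V, V1, V2)       # dS = dQ/T
    return Q, dS

def isochoric(T1, T2):
    Q  = integral(lambda T: C_V + 0 * T, T1, T2)     # dQ = C_V dT at constant V
    dS = integral(lambda T: C_V / T, T1, T2)         # dS = dQ/T
    return Q, dS

# Path 1: expand at Ta from Va to Vb, then heat up at constant volume Vb.
Q1a, S1a = isothermal(Va, Vb, Ta); Q1b, S1b = isochoric(Ta, Tb)
# Path 2: heat up at constant volume Va first, then expand at Tb.
Q2a, S2a = isochoric(Ta, Tb); Q2b, S2b = isothermal(Va, Vb, Tb)

print("heat taken in:  ", Q1a + Q1b, "vs", Q2a + Q2b)                  # different!
print("entropy change: ", S1a + S1b, "vs", S2a + S2b)                  # the same
print("closed form:    ", N * k * np.log(Vb / Va) + C_V * np.log(Tb / Ta))
```

The heat absorbed depends on the path, but the total of dQ/T does not, which is the whole point of calling S a state function.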

Here we need to do the same: we need to agree on a zero point for S, because the formula above only gives the difference of entropy between two conditions. Now, that’s where the third law of thermodynamics comes in, which simply states that the entropy of any substance at the absolute zero temperature (T = 0) is zero, so we write:

S = 0 at T = 0

That’s easy enough, isn’t it?

Now, you’ll wonder whether we can actually calculate something with that. We can. Let me simply reproduce Feynman’s calculation of the entropy function for an ideal gas. You’ll need to pull all that I wrote in this and my previous posts together, but you should be able to follow his line of reasoning:

[Image: Feynman’s derivation of the entropy of an ideal gas, which ends with the formula S(V, T) = N·k·[ln(V) + ln(T)/(γ–1)] + a]

Huh? I know. At this point, you’re probably suffering from formula overkill. However, please try again. Just go over the text and the formulas above, and try to understand what they really mean. [In case you wonder about the formula with the ln[Vb/Va] factor (i.e. the reference to section 44.4), you can check it in my previous post.] So just try to read the S(V, T) formula: it says that a substance (a gas, liquid or solid) consisting of N atoms or molecules, at some temperature T and with some volume V, is associated with some exact value for its entropy S(V, T). The constant, a, is the integration constant that fixes the zero point of the entropy scale.

The first thing you can note is that S is an increasing function of V at constant temperature T. Conversely, decreasing the volume results in a decrease of entropy. To be precise, using the formula for S, we can derive the following formula for the difference in entropy when keeping the temperature constant at some value T:

ΔS = Sb – Sa = S(Vb, T) – S(Va, T) = N·k·ln[Vb/Va]

What this formula says, for example, is that if we’d do nothing but double the volume of a gas (while keeping the temperature constant) when going from a to b (hence, Vb/Va = 2), the entropy will change by N·k·ln(2) ≈ 0.7·N·k. Conversely, if we would halve the volume (again, assuming the temperature remains constant), then the change in entropy will be N·k·ln(0.5) ≈ –0.7·N·k.
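
In code, that’s just the natural logarithm doing its thing:

```python
import math

# Entropy change at constant temperature, expressed in units of N*k: dS = ln(Vb/Va).
for ratio in (2, 0.5, 8):     # double the volume, halve it, or double it three times
    print(f"Vb/Va = {ratio:>4}:  dS = {math.log(ratio):+.3f} N*k")
# +0.693, -0.693, +2.079 times N*k (i.e. about +/-0.7*N*k, and 3*ln(2)*N*k respectively)
```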

The graph below shows how it works. It’s quite simple really: it’s just the ln(x) function, and I just inserted it here so you have an idea of how the entropy changes with volume. [In case you would think it looks the same as that efficiency graph, i.e. the graph of the W/Q = (T – 1)/T = 1 – 1/T function, think again: the efficiency graph has a horizontal asymptote (y = 1), while the logarithmic function does not have any horizontal asymptote.]

[Figure: the ln(x) function, showing how the entropy changes with volume]

Now, you may think entropy changes only marginally as we keep increasing the volume, but you should also think twice here. It’s just the nature of the logarithmic scale. Indeed, when we double the volume, going from V = 1 to V = 2, for example, the change in entropy will be equal to N·k·ln(2) ≈ 0.7·N·k. Now, that’s the same change as going from V = 2 to V = 4, and the same as going from V = 4 to V = 8. So, if we double the volume three times in a row, the total change in entropy will be that of going from V = 1 to V = 8, which is equal to N·k·ln(8) = N·k·ln(2³) = 3·N·k·ln(2). So, yes, looking at the intervals here that are associated with the same N·k·ln(2) increase in entropy, i.e. [1, 2], [2, 4] and [4, 8] respectively, you may think that the increase in entropy is marginal only, as it’s the same increase each time but the length of each interval is double that of the previous one. However, when reducing the volume, the logic works the other way around, and so the logarithmic function ensures the change is anything but marginal. Indeed, if we halve the volume, going from V = 1 to V = 1/2, and then halve it again, to V = 1/4, and then again, to V = 1/8, we get the same change in entropy once more—but with a minus sign in front, of course: N·k·ln(2⁻³) = –3·N·k·ln(2)—but the same ln(2) change is now associated with intervals on the x-axis (between 1 and 0.5, 0.5 and 0.25, and 0.25 and 0.125 respectively) that are getting smaller and smaller as we further reduce the volume. In fact, the length of each interval is now half of that of the previous interval. Hence, the change in entropy is anything but marginal now!

[In light of the fact that the (negative) change in entropy becomes larger and larger as we further reduce the volume, and in a way that’s anything but marginal, you may now wonder, for a very brief moment, whether or not the entropy might actually take on a negative value. The answer is obviously no. The change in entropy can take on a large negative value, but the S(V, T) = N·k·[ln(V) + ln(T)/(γ–1)] + a formula, with the constant a fixing the zero point of the scale, ensures things come out alright—as it should, of course!]

Now, as we continue to try to understand what entropy really means, it’s quite interesting to think of what this formula implies at the level of the atoms or molecules that make up the gas: the entropy change per molecule is k·ln(2) – or k·ln(1/2) when compressing the gas at the same temperature. Now, its kinetic energy remains the same – because – don’t forget! – we’re changing the volume at constant temperature here. So what causes the entropy change here really? Think about it: the only thing that changed, physically, is how much room the molecule has to run around in—as Feynman puts it aptly. Hence, while everything stays the same (atoms or molecules with the same temperature and energy), we still have an entropy increase (or decrease) when the distribution of the molecules changes.

This remark brings us to the connection between order and entropy, which you vaguely know, for sure, but probably never quite understood because, if you did, you wouldn’t be reading this post. 🙂 So I’ll talk about it in a moment. I first need to wrap up this section, however, by showing why all of the above is, somehow, related to that ever-increasing entropy law. 🙂

However, before doing that, I want to quickly note something about that assumption of constant temperature here. How can it remain constant? When a body of gas expands, its temperature should drop, right? Well… Yes. But only if it is pushing against something, like in a cylinder with a piston indeed, or as air escapes from a tyre and pushes against the (lower-pressure) air outside of the tyre. What happens here is that the kinetic energy of the gas molecules is being transferred (to the piston, or to the gas molecules outside of the tyre) and, hence, temperature decreases indeed. In that case, the assumption is that we add (or remove) heat from our body of gas as we increase (or decrease) its volume. Having said that, in a more abstract analysis, we could envisage a body of gas that has nothing to push against, except for the walls of its container, which have the same temperature. In such a more abstract analysis, we need not worry about how we keep temperature constant: the point here is just to compare the ex post and ex ante entropy of the volume. That’s all.

The Law of Ever-Increasing Entropy 

With all of the above, we’re finally armed to ‘prove’ the second law of thermodynamics which we can also state as follows indeed: while the energy of the universe is constant, its entropy is always increasing. Why is this so? Out of respect, I’ll just quote Feynman once more, as I can’t see how I could possibly summarize it better:

[Image: Feynman’s argument for why the entropy of the universe is always increasing, reproduced from his Lectures]

So… That should sum it all up. You should re-read the above a couple of times, so you’re sure you grasp it. I’ll also let Feynman summarize all of those ‘laws’ of thermodynamics that we have just learned as, once more, I can’t see how I could possibly write more clearly or succinctly. His statement is much more precise than the statement we started out with, i.e. that the energy of the universe is always constant but its entropy is always increasing. As Feynman notes, that simple version of the two laws of thermodynamics doesn’t say that entropy stays the same in a reversible cycle, and it also doesn’t say what entropy actually is. So Feynman’s summary is much more precise and, hence, much better indeed:

[Image: Feynman’s summary of the laws of thermodynamics, reproduced from his Lectures]

Entropy and order

What I wrote or reproduced above may not have satisfied you. So we’ve got this funny number, S, describing some condition or state of a substance, but you may still feel you don’t really know what it means. Unfortunately, I cannot do all that much about that. Indeed, technically speaking, a quantity like entropy (S) is a state function, just like internal energy (U), or like enthalpy (usually denoted by H), a related concept which you may remember from chemistry and which is defined as H = U + PV. As such, you may just think of S as some number that pops up in thermodynamical equations. It’s perfectly fine to think of it like that. However, if you’re reading this post, then it’s likely you do so because some popular science book mentioned entropy and related it to order and/or disorder indeed. However, I need to disappoint you here: that relationship is not as straightforward as you may think it is. To get some idea, let’s go through another example, which I’ll also borrow from Feynman.

Let’s go back to that relationship between volume and entropy, keeping temperature constant:

ΔS = N·k·ln[Vb/Va]

We discussed, rather at length, how entropy increases as we allow a body of gas to expand. As the formula shows, it increases logarithmically with the ratio of the ex post and ex ante volume. Now, let us think about two gases, which we can think of as ‘white’ and ‘black’ respectively. Or neon or argon. Whatever. Two different gases. Let’s suppose we’ve kept them in two separate compartments of a box, with some barrier in-between them.

Now, you know that, if we’d take out the barrier, they’ll mix. That’s just a fact of life. As Feynman puts it: somehow, the whites will worm their way across in the space of blacks, and the blacks will worm their way, by accident, into the space of whites. [There’s a bit of a racist undertone in this, isn’t there? But then I am sure Feynman did not intend it that way.] Also, as he notes correctly: we’ve got a very simple example here of an irreversible process which is completely composed of reversible events. We know this mixing will not affect the kinetic (or internal) energy of the gas. Having said that, both the white and the black molecules now have ‘much more room to run around in’. So is there a change in entropy? You bet.

If we take away that barrier, it’s just similar to moving that piston out when we were discussing one volume of gas only. Indeed, we effectively double the volume for the whites, and we double the volume for the blacks, while keeping all at the same temperature. Hence, the entropy of both the white and the black gas increases. By how much? Look at the formula: the amount is given by the product of the number of molecules (N), the Boltzmann constant (k), and ln(2), i.e. the natural logarithm of the ratio of the ex post and ex ante volumes: ΔS = N·k·ln[Vb/Va].

So, yes, entropy increases as the molecules are now distributed over a much larger space. Now, if we stretch our mind a bit, we could define it as a measure of order, or disorder, especially when considering the process going the other way: suppose the gases were mixed up to begin with and, somehow, we manage to neatly separate them in two separate volumes, each half of the original. You’d agree that amounts to an increase in order and, hence, you’d also agree that, if entropy is, somehow, some measure for disorder, entropy should decrease, which it obviously does using that ΔS = N·k·ln[Vb/Va] formula. Indeed, we calculated ΔS as –0.7·N·k.
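
For the record, here’s that mixing calculation in a few lines of Python (with an illustrative molecule count for each gas):

```python
import math

k = 1.380649e-23
N = 1e23                                    # molecules of each gas (illustrative)

# Removing the barrier doubles the volume available to the whites and to the blacks,
# at constant temperature, so each gas gains N*k*ln(2) of entropy.
dS_total = 2 * N * k * math.log(2)
print(dS_total / (N * k))                   # 2*ln(2), about 1.386, in units of N*k
# Neatly separating them again would be a change of -2*N*k*ln(2): a decrease in entropy.
```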

However, the interpretation is quite peculiar and, hence, not as straightforward as popular science books suggest. Indeed, from that S(V, T) = N·k·[ln(V) + ln(T)/(γ–1)] + a formula, it’s obvious we can also decrease entropy by decreasing the number of molecules, or by decreasing the temperature. You’ll have to admit that in both cases (decrease in N, or decrease in T), you’ll have to be somewhat creative in interpreting such decrease as a decrease in disorder.

So… What more can we say? Nothing much. However, in order to be complete, I should add a final note on this discussion of entropy measuring order (or, to be more precise, measuring disorder). It’s about another concept of entropy, the so-called Shannon entropy. It’s a concept from information theory, and our entropy and the Shannon entropy do have something in common: in both, we see that logarithm pop up. It’s quite interesting but, as you might expect, complicated. Hence, I should just refer you to the Wikipedia article on it, from which I took the illustration and text below.

[Figure: the four possible arrangements of two coins, taken from the Wikipedia article on entropy in information theory]

We’ve got two coins with two faces here. They can, obviously, be arranged in 2² = 4 ways. Now, back in 1948, the so-called father of information theory, Claude Shannon, thought it was nonsensical to just use that number (4) to represent the complexity of the situation. Indeed, if we’d take three coins, or four, or five, respectively, then we’d have 2³ = 8, 2⁴ = 16, and 2⁵ = 32 ways, respectively, of combining them. Now, you’ll agree that, as a measure of the complexity of the situation, the exponents 1, 2, 3, 4 etcetera describe the situation much better than 2, 4, 8, 16 etcetera.

Hence, Shannon defined the so-called information entropy as, in this case, the base-2 logarithm of the number of possibilities. To be precise, the information entropy of the situation which we’re describing here (i.e. the ways a set of coins can be arranged) is equal to S = log2(2^N) = N = 1, 2, 3, 4 etcetera for N = 1, 2, 3, 4 etcetera. In honor of Shannon, the unit is shannons. [I am not joking.] However, information theorists usually talk about bits, rather than shannons. [We’re not talking a computer bit here, although the two are obviously related, as computer bits are binary too.]
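
Here’s that counting exercise in a few lines of Python, in bits and, anticipating the base-switching remark below, in nats as well:

```python
import math

# Shannon's information entropy for N fair coins: log2(2**N) = N bits (shannons).
for N in (1, 2, 3, 4):
    arrangements = 2 ** N
    bits = math.log2(arrangements)      # entropy in bits
    nats = math.log(arrangements)       # the same entropy in nats (base e)
    print(N, arrangements, bits, round(nats, 3))
```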

Now, one of the many nice things of logarithmic functions is that it’s easy to switch bases. Hence, instead of expressing information entropy in bits, we can also express it in trits (for base 3 logarithms), nats (for base e logarithms, so that’s the natural logarithmic function ln), or dits (for base 10 logarithms). So… Well… Feynman is right in noting that “the logarithm of the number of ways we can arrange the molecules is (the) entropy”, but that statement needs to be qualified: the concepts of information entropy and entropy tout court, as used in the context of thermodynamical analysis, are related but, as usual, they’re also different. 🙂 Bridging the two concepts involves probability distributions and other stuff. One extremely simple popular account illustrates the principle behind it as follows:

Suppose that you put a marble in a large box, and shook the box around, and you didn’t look inside afterwards. Then the marble could be anywhere in the box. Because the box is large, there are many possible places inside the box that the marble could be, so the marble in the box has a high entropy. Now suppose you put the marble in a tiny box and shook up the box. Now, even though you shook the box, you pretty much know where the marble is, because the box is small. In this case we say that the marble in the box has low entropy.

Frankly, examples like this make only very limited sense. They may, perhaps, help us imagine, to some extent, how probability distributions of atoms or molecules might change as the atoms or molecules get more space to move around in. Having said that, I should add that examples like this are, at the same time, also so simplistic they may confuse us more than they enlighten us. In any case, while all of this discussion is highly relevant to statistical mechanics and thermodynamics, I am afraid I have to leave it at this one or two remarks. Otherwise this post risks becoming a course! 🙂

Now, there is one more thing we should talk about here. As you’ve read a lot of popular science books, you probably know that the temperature of the Universe is decreasing because it is expanding. However, from what you’ve learnt so far, it is hard to see why that should be the case. Indeed, it is easy to see why the temperature should drop/increase when there’s adiabatic expansion/compression: momentum and, hence, kinetic energy, is being transferred from/to the piston indeed, as it moves out or into the cylinder while the gas expands or is being compressed. But the expanding universe has nothing to push against, does it? So why should its temperature drop? It’s only the volume that changes here, right? And so its entropy (S) should increase, in line with the ΔS = Sb – Sa = S(Vb, T) – S(Va, T) = N·k·ln[Vb/Va] formula, but not its temperature (T), which is nothing but the (average) kinetic energy of all of the particles it contains. Right? Maybe.

[By the way, in case you wonder why we believe the Universe is expanding, that’s because we see it expanding: an analysis of the redshifts and blueshifts of the light we get from other galaxies reveals the distance between galaxies is increasing. The expansion model is often referred to as the raisin bread model: one doesn’t need to be at the center of the Universe to see all others move away: each raisin in a rising loaf of raisin bread will see all other raisins moving away from it as the loaf expands.]

Why is the Universe cooling down?

This is a complicated question and, hence, the answer is also somewhat tricky. Let’s look at the entropy formula for an increasing volume of gas at constant temperature once more. Its entropy must change as follows:

ΔS = Sb – Sa = S(Vb, T) – S(Va, T) = N·k·ln[Vb/Va]

Now, the analysis usually assumes we have to add some heat to the gas as it expands in order to keep the temperature (T) and, hence, its internal energy (U) constant. Indeed, you may or may not remember that the internal energy is nothing but the product of the number of gas particles and their average kinetic energy, so we can write:

U = N·<mv²/2>

In my previous post, I also showed that, for an ideal monatomic gas (i.e. no internal motion inside of the gas molecules), the following equality holds: PV = (2/3)U. For gases of more complicated molecules, we’ve got a similar formula, but with a different coefficient: PV = (γ−1)U. However, all these formulas were based on the assumption that ‘something’ is containing the gas, and that ‘something’ involves the external environment exerting a force on the gas, as illustrated below.

[Figure: a gas in a cylinder, with a piston held in place by an external force F]

As Feynman writes: “Suppose there is nothing, a vacuum, on the outside of the piston. What of it? If the piston were left alone, and nobody held onto it, each time it got banged it would pick up a little momentum and it would gradually get pushed out of the box. So in order to keep it from being pushed out of the box, we have to hold it with a force F.” We know that the pressure is the force per unit area: P = F/A. So can we analyze the Universe using these formulas?

Maybe. The problem is that we’re analyzing limiting situations here, and that we need to re-examine our concepts when applying them to the Universe. 🙂

The first question, obviously, is about the density of the Universe. You know it’s close to a vacuum out there. Close. Yes. But how close? If you google a bit, you’ll find lots of hard-to-read articles on the density of the Universe. If there’s one thing you need to pick up from them, it’s that there is some critical density (denoted by ρc), which is like a watershed point between a Universe that keeps expanding forever and one that eventually stops expanding and contracts again.

So what about it? According to Wikipedia, the critical density is estimated to be approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of (ordinary) matter in the Universe is believed to be 0.2 atoms per cubic metre. So that’s OK, isn’t it?

Well… Yes and no. We also have non-ordinary matter in the Universe, which is usually referred to as dark matter. The existence of dark matter, and its properties, are inferred from its gravitational effects on visible matter and radiation. In addition, we’ve got dark energy as well. I don’t know much about it, but it seems the dark energy and the dark matter bring the actual density (ρ) of the Universe much closer to the critical density. In fact, cosmologists seem to agree that ρ ≈ ρc and, according to a very recent scientific research mission involving an ESA space observatory doing very precise measurements of the Universe’s cosmic background radiation, the Universe should consist of 4.82 ± 0.05% ordinary matter, 25.8 ± 0.4% dark matter and 69 ± 1% dark energy. I’ll leave it to you to challenge that. 🙂

OK. Very low density. So that means very low pressure obviously. But what’s the temperature? I checked on the Physics Stack Exchange site, and the best answer is pretty nuanced: it depends on what you want to average. To be precise, the quoted answer is:

  1. If one averages by volume, then one is basically talking about the ‘temperature’ of the photons that reach us as cosmic background radiation—which is the temperature of the Universe that those popular science books refer to. In that case, we get an average temperature of 2.72 degrees Kelvin. So that’s pretty damn cold!
  2. If we average by observable mass, then our measurement is focused mainly on the temperature of all of the hydrogen gas (most matter in the Universe is hydrogen), which has a temperature of a few 10s of Kelvin. Only one tenth of that mass is in stars, but their temperatures are far higher: in the range of 10⁴ to 10⁵ degrees. Averaging gives a range of 10³ to 10⁴ degrees Kelvin. So that’s pretty damn hot!
  3. Finally, including dark matter and dark energy, which are supposed to have even higher temperatures, we’d get an average by total mass in the range of 10⁷ Kelvin. That’s incredibly hot!

This is enlightening, especially the first point: we’re not measuring the average kinetic energy of matter particles here but some average energy of (heat) radiation per unit volume. This ‘cosmological’ definition of temperature is quite different from the ‘physical’ definition that we have been using and the observation that this ‘temperature’ must decrease is quite logical: if the energy of the Universe is a constant, but its volume becomes larger and larger as the Universe expands, then the energy per unit volume must obviously decrease.

So let’s go along with this definition of ‘temperature’ and look at an interesting study of how the Universe is supposed to have cooled down in the past. It basically measures the temperature of that cosmic background radiation, i.e. that remnant of the Big Bang, as it was a few billion years ago, when it was a few degrees warmer than it is now. To be precise, it was measured as 5.08 ± 0.1 degrees Kelvin, and this decrease has nothing to do with our simple ideal gas laws but with the Big Bang theory, according to which the temperature of the cosmic background radiation should, indeed, drop smoothly as the universe expands.

Going through the same logic but the other way around, if the Universe had the same energy at the time of the Big Bang, it was all focused in a very small volume. Now, very small volumes are associated with very small entropy according to that S(V, T) = N·k·[ln(V) + ln(T)/(γ–1)] + a formula, but then temperature was not the same obviously: all that energy has to go somewhere, and a lot of it was obviously concentrated in the kinetic energy of its constituent particles (whatever they were) and, hence, a lot of it was in their temperature. 

So it all makes sense now. It was good to check it out, as it reminds us that we should not try to analyze the Universe as a simple body of gas that’s not contained in anything in order to then apply our equally simple ideal gas formulas. Our approach needs to be much more sophisticated. Cosmologists need to understand physics (and thoroughly so), but there’s a reason why it’s a separate discipline altogether. 🙂

The weird force

In my previous post (Loose Ends), I mentioned the weak force as the weird force. Indeed, unlike photons or gluons (i.e. the presumed carriers of the electromagnetic and strong force respectively), the weak force carriers (W bosons) have (1) mass and (2) electric charge:

  1. W bosons are very massive. The equivalent mass of a W+ and W– boson is some 86.3 atomic mass units (amu): that’s about the same as a rubidium or strontium atom. The mass of a Z boson is even larger: roughly equivalent to the mass of a molybdenum atom (98 amu). That is extremely heavy: just compare with iron or silver, which have a mass of about 56 amu and 108 amu respectively (see the little conversion check right after this list). Because they are so massive, W bosons cannot travel very far before disintegrating (they actually go (almost) nowhere), which explains why the weak force is very short-range only, and so that’s yet another fundamental difference as compared to the other fundamental forces.
  2. The electric charge of W and Z bosons explains why we have a trio of weak force carriers rather than just one: W+, W– and Z0. Feynman calls them “the three W’s”.
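
To check those equivalent masses yourself, here’s a small conversion from the boson masses in GeV/c² to atomic mass units; the 80.4 GeV and 91.2 GeV figures are the standard textbook values, not numbers quoted in the post:

```python
# Convert the W and Z boson masses from GeV/c^2 to atomic mass units (amu).
# One amu corresponds to about 0.931494 GeV/c^2.
GEV_PER_AMU = 0.931494

for name, mass_gev in (("W", 80.4), ("Z", 91.2)):
    print(f"{name} boson: {mass_gev / GEV_PER_AMU:.1f} amu")
# W boson: 86.3 amu
# Z boson: 97.9 amu
```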

The electric charge of W and Z bosons is what it is: an electric charge – just like protons and electrons. Hence, one has to distinguish it from the weak charge as such: the weak charge (or, to be correct, I should say the weak isospin number) of a particle (such as a proton or a neutron for example) is related to the propensity of that particle to interact through the weak force — just like the electric charge is related to the propensity of a particle to interact through the electromagnetic force (think about Coulomb’s law for example: likes repel and opposites attract), and just like the so-called color charge (or the (strong) isospin number I should say) is related to the propensity of quarks (and gluons) to interact with each other through the strong force.

In short, as compared to the electromagnetic force and the strong force, the weak force (or Fermi’s interaction as it’s often called) is indeed the odd one out: these W bosons seem to mix just about everything: mass, charge and whatever else. In his 1985 Lectures on Quantum Electrodynamics, Feynman writes the following about this:

“The observed coupling constant for W’s is much the same as that for the photon. Therefore, the possibility exists that the three W’s and the photon are all different aspects of the same thing. Stephen Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called ‘the weak interactions’ into one quantum theory, and they did it. But if you look at the results they get, you can see the glue—so to speak. It’s very clear that the photon and the three W’s are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly—you can still see the ‘seams’ in the theories; they have not yet been smoothed out so that the connection becomes more beautiful and, therefore, probably more correct.” (Feynman, 1985, p. 142)

Well… That says it all, I think. And from what I can see, the (tentative) confirmation of the existence of the Higgs field has not made these ‘seams’ any less visible. However, before criticizing eminent scientists such as Weinberg and Salam, we should obviously first have a closer look at those W bosons without any prejudice.

Alpha decay, potential wells and quantum tunneling

The weak force is usually explained as the force behind a process referred to as beta decay. However, because beta decay is just one form of radioactive decay, I need to say something about alpha decay too. [There is also gamma decay but that’s like a by-product of alpha and beta decay: when a nucleus emits an α or β particle (i.e. when we have alpha or beta decay), the nucleus will usually be left in an excited state, and so it can then move to a lower energy state by emitting a gamma ray photon (gamma radiation is very hard (i.e. very high-energy) radiation) – in the same way that an atomic electron can jump to a lower energy state by emitting a (soft) light ray photon. But so I won’t talk about gamma decay.]

Atomic decay, in general, is a loss of energy accompanying a transformation of the nucleus of the atom. Alpha decay occurs when the nucleus ejects an alpha particle: an α-particle consists of two protons and two neutrons bound together and, hence, it’s identical to a helium nucleus. Alpha particles are commonly emitted by all of the larger radioactive nuclei, such as uranium (which becomes thorium as a result of the decay process), or radium (which becomes radon gas). However, alpha decay is explained by a mechanism not involving the weak force: the electromagnetic force and the nuclear force (i.e. the strong force) will do. The reasoning is as follows: the alpha particle can be looked at as a stable but somewhat separate particle inside the nucleus. Because of their charge (both positive), the alpha particle inside of the nucleus and ‘the rest of the nucleus’ are subject to strong repulsive electromagnetic forces between them. However, these strong repulsive electromagnetic forces are not as strong as the strong force between the quarks that make up matter and, hence, that’s what keeps them together – most of the time that is.

Let me be fully complete here. The so-called nuclear force between composite particles such as protons and neutrons – or between clusters of protons and neutrons in this case – is actually the residual effect of the strong force. The strong force itself is between quarks – and between them only – and so that’s what binds them together in protons and neutrons (so that’s the next level of aggregation you might say). Now, the strong force is mostly neutralized within those protons and neutrons, but there is some residual force, and so that’s what keeps a nucleus together and what is referred to as the nuclear force.

There is a very helpful analogy here: the electromagnetic forces between neutral atoms (and/or molecules)—referred to as van der Waals forces (that’s what explains the liquid shape of water, among other things)— are also the residual of the (much stronger) electromagnetic forces that tie the electrons to the nucleus.

Now, that residual strong force – i.e. the nuclear force – diminishes in strength with distance but, within a certain distance, that residual force is strong enough to do what it does, and that’s to keep the nucleus together. This stable situation is usually depicted by what is referred to as a potential well:

Potential well

The name is obvious: a well is a hole in the ground from which you can get water (or oil or gas or whatever). Now, the sea level might actually be lower than the bottom of a well, but the water would still stay in the well. In the illustration above, we are not depicting water levels but energy levels, but it’s equally obvious that it would require some energy to kick a particle out of this well: if it were water, we’d need a pump to get it out but, of course, it would be happy to flow to the sea once it’s out. Indeed, once a charged particle is out (I am talking about our alpha particle now), it will obviously stay out because of the repulsive electromagnetic forces coming into play (positive charges repel each other).

But so how can it escape the nuclear force and go up on the side of the well? [A potential pond or lake would have been a better term – but then that doesn’t sound quite right, does it? :-)]

Well, the energy may come from outside – that’s what’s referred to as induced radioactive decay (just Google it and you will find tons of articles on experiments involving laser-induced accelerated alpha decay) – or, and that’s much more intriguing, the Uncertainty Principle comes into play.

Huh? Yes. According to the Uncertainty Principle, the energy of our alpha particle inside of the nucleus wiggles around some mean value, but our alpha particle also has an amplitude to be at some higher energy level. That results not only in a theoretical probability for it to escape out of the well but also in something actually happening if we wait long enough: the amplitude (and, hence, the probability) is tiny, but it’s what explains the decay process – and what gives U-232 a half-life of 68.9 years, and also what gives the more common U-238 a much more comfortable 4.47 billion years as the half-life period.
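To relate those half-lives to the (tiny) probability we’re talking about, remember that the decay constant λ = ln(2)/t½ is the probability, per unit of time, that any given nucleus decays. A quick calculation – my own, just to get a feel for the orders of magnitude:

```python
import math

# Decay constant lambda = ln(2)/half-life: the probability per second that any
# given nucleus decays. Just to get a feel for the orders of magnitude involved.
year = 365.25 * 24 * 3600   # seconds in a year

for name, half_life_years in [("U-232", 68.9), ("U-238", 4.47e9)]:
    lam = math.log(2) / (half_life_years * year)
    print(f"{name}: decay probability ≈ {lam:.1e} per second")

# U-232: ≈ 3.2e-10 per second; U-238: ≈ 4.9e-18 per second.
```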

[…]

Now that we’re talking about wells and all that, we should also mention that this phenomenon of getting out of the well is referred to as quantum tunneling. You can easily see why: it’s like the particle dug its way out. However, it didn’t: instead of digging under the sidewall, it sort of ‘climbed over’ it. Think of it being stuck and trying and trying and trying – a zillion times – to escape, until it finally did. So now you understand this fancy word: quantum tunneling. However, this post is about the weak force and so let’s discuss beta decay now.

Beta decay and intermediate vector bosons

Beta decay also involves transmutation of nuclei, but not by the emission of an α-particle but by a β-particle. A beta particle is just a different name for an electron (β–) and/or its anti-matter counterpart: the positron (β+). [Physicists usually simplify stuff but in this case, they obviously didn’t: why don’t they just write e– and e+ here?]

An example of β– decay is the decay of carbon-14 (C-14) into nitrogen-14 (N-14), and an example of β+ decay is the decay of magnesium-23 into sodium-23. C-14 and N-14 have the same mass number but they are different atoms. The decay process is described by the equations below:

Beta decay
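Written out in the usual notation – I am just restating the standard textbook reactions here – the two decays are:

C-14 → N-14 + e– + ν̄e (β– decay: a neutron turns into a proton)

Mg-23 → Na-23 + e+ + νe (β+ decay: a proton turns into a neutron)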

You’ll remember these formulas from your high-school days: beta decay does not change the mass number (carbon and nitrogen both have mass number 14) but it does change the atomic (or proton) number: nitrogen has an extra proton. So one of the neutrons became a proton ! [The second equation shows the opposite: a proton became a neutron.] In order to do that, the carbon atom had to eject a negative charge: that’s the electron you see in the equation above.

In addition, there is also the ejection of an anti-neutrino (that’s what the bar above the νe symbol stands for: antimatter). You’ll wonder what an antineutrino could possibly be. Don’t worry about it: it’s not any spookier than the neutrino. Neutrinos and anti-neutrinos have no electric charge and so you cannot distinguish them on that account (electric charge). However, all antineutrinos have right-handed helicity (i.e. they come in only one of the two possible spin states), while the neutrinos are all left-handed. That’s why beta decay is said to not respect parity symmetry, aka mirror symmetry. Hence, in the case of beta decay, Nature does distinguish between the world and the mirror world ! I’ll come back to that but let me first lighten up the discussion somewhat with a graphical illustration of that neutron-proton transformation.

2000px-Beta-minus_Decay

As for the magnesium-sodium transformation, we’d have something similar, but with a positron instead of an electron (a positron is just an electron with a positive charge for all practical purposes) and a regular neutrino instead of an anti-neutrino. So, compared to β– decay, we’d just have the anti-matter counterparts of the emitted particles. [Don’t be put off by the term ‘anti-matter’: anti-matter is really just like regular matter – except that the charges have opposite sign. For example, the anti-matter counterpart of a blue quark is an anti-blue quark, and the anti-matter counterpart of a neutrino has right-handed helicity – or spin – as opposed to the ‘left-handed’ ‘ordinary’ neutrinos.]

Now, you surely will have several serious questions. The most obvious question is: what happens with the electron and the neutrino? Well… Those spooky neutrinos are gone before you know it and so don’t worry about them. As for the electron, the carbon had only six electrons but the nitrogen needs seven to be electrically neutral… So you might think the new atom will take care of it. Well… No. Sorry. Because of its kinetic energy, the electron is likely to just explore the world and crash into something else, and so we’re left with a positively charged nitrogen ion indeed. So I should have added a little + sign next to the N in the formula above. Of course, one cannot exclude the possibility that this ion will pick up an electron later – but don’t bet on it: the ion might have to wait a long time, or not find any free electrons at all !

As for the positron (in a β+ decay), it will just grab the nearest electron around and annihilate with it—thereby generating two high-energy photons (so that’s a little light flash). The net result is that we do not have an ion but a neutral sodium atom. A closely related process is electron capture: instead of emitting a positron, the nucleus can also absorb one of the atom’s own inner electrons (from the K or L shell for example), and the ‘transformation equation’ can then be written as p + e– → n + νe (with p and n denoting a proton and a neutron respectively).

The more important question is: where are the W and Z bosons in this story?

Ah ! Yes! Sorry I forgot about them. The Feynman diagram below shows how it really works—and why the name of intermediate vector bosons for these three strange ‘particles’ (W+, W–, and Z0) is so apt. These W bosons are just a short trace of ‘something’ indeed: their half-life is about 3×10−25 s, and so that’s the same order of magnitude (or minitude I should say) as the mean lifetime of other resonances observed in particle collisions.
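Just to see how ‘intermediate’ these things really are: even if such a boson would travel at (nearly) the speed of light, a lifetime of 3×10−25 s doesn’t get it very far. A quick order-of-magnitude estimate (mine):

```python
# How far does something with a lifetime of ~3e-25 s get, even at (nearly) the
# speed of light? Just an order-of-magnitude estimate.
c = 2.99792458e8     # speed of light in m/s
lifetime = 3e-25     # approximate W/Z lifetime in seconds

print(f"distance ≈ {c * lifetime:.1e} m")   # ≈ 9e-17 m: about a tenth of a proton's radius
```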

Feynman diagram beta decay

Indeed, you’ll notice that, in this so-called Feynman diagram, there’s no space axis. That’s because the distances involved are so tiny that we have to distort the scale—so we are not using equivalent time and distance units here, as Feynman diagrams should. That’s in line with a more prosaic description of what may be happening: W bosons mediate the weak force by seemingly absorbing an awful lot of momentum, spin, and whatever other energy and quantum numbers describe the particles involved, to then eject an electron (or positron) and a neutrino (or an anti-neutrino).

Hmm… That’s not a standard description of a W boson as a force carrying particle, you’ll say. You’re right. This is more the description of a Z boson. What’s the Z boson again? Well… I haven’t explained it yet. It’s not involved in beta decay. There’s a process called elastic scattering of neutrinos. Elastic scattering means that some momentum is exchanged but neither the target (an electron or a nucleus) nor the incident particle (the neutrino) are affected as such (so there’s no break-up of the nucleus for example). In other words, things bounce back and/or get deflected but there’s no destruction and/or creation of particles, which is what you would have with inelastic collisions. Let’s examine what happens here.

W and Z bosons in neutrino scattering experiments

It’s easy to generate neutrino beams: remember their existence was confirmed in 1956 because nuclear reactors create a huge flux of them ! So it’s easy to send lots of high-energy neutrinos into a cloud or bubble chamber and see what happens. Cloud and bubble chambers are prehistoric devices which were built and used to detect electrically charged particles moving through them. I won’t go into too much detail but I can’t resist inserting a few historic pictures here.

The first two pictures below document the first experimental confirmation of the existence of positrons by Carl Anderson, back in 1932 (and, no, he’s not Danish but American), for which he got a Nobel Prize. The magnetic field which gives the positron some curvature—the trace of which can be seen in the image on the right—is generated by the coils around the chamber. Note the opening in the coils, which allows a picture to be taken at the moment the vapor is suddenly decompressed and becomes supersaturated: the charged particle going through it leaves a trail of ionized atoms behind, which act as ‘nucleation centers’ around which the vapor condenses, thereby forming tiny droplets. Quite incredible, isn’t it? One can only admire the perseverance of these early pioneers.

Carl Anderson Positron

The picture below is another historical first: it’s the first detection of a neutrino in a bubble chamber. It’s fun to analyze what happens here: we have a mu-meson – aka a muon – coming out of the collision (that’s just a heavier version of the electron) and then a pion – which should (also) be electrically charged because the muon carries electric charge… But I will let you figure this one out. I need to move on with the main story. 🙂

FirstNeutrinoEventAnnotated

The point to note is that these spooky neutrinos collide with other matter particles. In the image above, it’s a proton, but so when you’re shooting neutrino beams through a bubble chamber, a few of these neutrinos can also knock electrons out of orbit, and so that electron will seemingly appear out of nowhere in the image and move some distance with some kinetic energy (which can all be measured because magnetic fields around it will give the electron some curvature indeed, and so we can calculate its momentum and all that).
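For those who wonder how one gets a momentum out of a curved track: for a charged particle moving (more or less) perpendicular to a magnetic field, the momentum p, the field B and the radius of curvature r are related as p = q·B·r. The field strength and radius below are made-up numbers – just to show the logic, not values taken from the pictures above:

```python
# Momentum from track curvature: p = q*B*r for a charge q moving perpendicular to
# a magnetic field B on a circle of radius r. The values of B and r are made up,
# just to illustrate the logic - they are not taken from the pictures above.
q = 1.602176634e-19   # elementary charge in coulomb
B = 1.5               # assumed field strength in tesla
r = 0.05              # assumed radius of curvature in meter (5 cm)

p = q * B * r                                        # momentum in kg*m/s
p_MeV = p * 2.99792458e8 / 1.602176634e-19 / 1e6     # same momentum in MeV/c
print(f"p ≈ {p:.2e} kg*m/s ≈ {p_MeV:.0f} MeV/c")     # ≈ 1.2e-20 kg*m/s ≈ 22 MeV/c
```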

Of course, they will tend to move in the same direction – more or less at least – as the neutrinos that knocked them loose. So it’s like the Compton scattering which we discussed earlier (from which we could calculate the so-called classical radius of the electron – or its size if you will)—but with one key difference: the electrons get knocked loose not by photons, but by neutrinos.

But… How can they do that? Photons carry the electromagnetic field so the interaction between them and the electrons is electromagnetic too. But neutrinos? Last time I checked, they were matter particles, not bosons. And they carry no charge. So what makes them scatter electrons?

You’ll say that’s a stupid question: it’s the neutrino, dummy ! Yes, but how? Well, you’ll say, they collide—don’t they? Yes. But we are not talking tiny billiard balls here: if particles scatter, one of the fundamental forces of Nature must be involved, and usually it’s the electromagnetic force: it’s the electron density around nuclei indeed that explains why atoms will push each other away if they meet each other and, as explained above, it’s also the electromagnetic force that explains Compton scattering. So billiard balls bounce back because of the electromagnetic force too and…

OK-OK-OK. I got it ! So here it must be the strong force or something. Well… No. Neutrinos are not made of quarks. You’ll immediately ask what they are made of – but the answer is simple: they are what they are – one of the four matter particles in the Standard Model – and so they are not made of anything else. Capito?

OK-OK-OK. I got it ! It must be gravity, no? Perhaps these neutrinos don’t really hit the electron: perhaps they skim near it and sort of drag it along as they pass? No. It’s not gravity either. It can’t be. We have no exact measurement of the mass of a neutrino but it’s damn close to zero – and, hence, way too small to exert any such influence on an electron. It’s just not consistent with those traces.

OK-OK-OK. I got it ! It’s that weak force, isn’t it? YES ! The Feynman diagrams below show the mechanism involved. As far as terminology goes (remember Feynman’s complaints about the up, down, strange, charm, beauty and truth quarks?), I think this is even worse. The interaction is described as a current, and when the neutral Z boson is involved, it’s called a neutral current – as opposed to… Well… Charged currents. Neutral and charged currents? That sounds like sweet and sour candy, doesn’t it? But isn’t candy supposed to be sweet? Well… No. Sour candy is pretty common too. And so neutral currents are pretty common too.

neutrino_scattering

You obviously don’t believe a word of what I am saying and you’ll wonder what the difference is between these charged and neutral currents. The end result is the same in the first two pictures: an electron and a neutrino interact, and they exchange momentum. So why is one current neutral and the other charged? In fact, when you ask that question, you are actually wondering whether we need that neutral Z boson. W bosons should be enough, no?

No. The first and second picture are “the same but different”—and you know what that means in physics: it means it’s not the same. It’s different. Full stop. In the second picture, there is electron absorption (only for a very brief moment obviously, but so that’s what it is, and you don’t have that in the first diagram) and then electron emission, and there’s also neutrino absorption and emission. […] I can sense your skepticism – and I actually share it – but that’s what I understand of it !

[…] So what’s the third picture? Well… That’s actually beta decay: a neutron becomes a proton, and there’s emission of an electron and… Hey ! Wait a minute ! This is interesting: this is not what we wrote above: we have an incoming neutrino instead of an outgoing anti-neutrino here. So what’s this?

Well… I got this illustration from a blog on physics (Galileo’s Pendulum – The Flavor of Neutrinos) which, in turn, mentions Physics Today as its source. The incoming neutrino has nothing to do with the usual representation of an anti-matter particle as a particle traveling backwards in time. It’s something different, and it triggers a very interesting question: could beta decay possibly be ‘triggered’ by neutrinos? Who knows?

I googled it, and there seems to be some evidence supporting such a thesis. However, this ‘evidence’ is flimsy (the only real ‘clue’ is that the activity of the Sun, as measured by the intensity of solar flares, seems to have some (tiny) impact on the rate of decay of radioactive elements on Earth) and, hence, most ‘serious’ scientists seem to reject the possibility. I wonder why: it would make the ‘weird force’ somewhat less weird in my view. So… What to say? Well… Nothing much at this moment. Let me move on and examine the question a bit more in detail in a Post Scriptum.

The odd one out

You may wonder if neutrino-electron interactions always involve the weak force. The answer to that question is simple: Yes ! Because they do not carry any electric charge, and because they are not quarks, neutrinos are only affected by the weak force. However, as evidenced by all the stuff I wrote on beta decay, you cannot turn this statement on its head: the weak force is relevant not only for neutrinos but for electrons and quarks as well ! That gives us the following connection between forces and matter:

forces and matter
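In words, the diagram just summarizes the point made above (I am leaving gravity aside here):

  1. Quarks feel the strong force, the electromagnetic force and the weak force.
  2. Charged leptons (the electron and its heavier versions) feel the electromagnetic force and the weak force.
  3. Neutrinos feel the weak force only.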

[Specialists reading this post may say they’ve not seen this diagram before. That might be true. I made it myself – for a change – but I am sure it’s around somewhere.]

It is a weird asymmetry: almost massless particles (neutrinos) interact with other particles through massive bosons, and these massive ‘things’ are supposed to be ‘bosons’, i.e. force carrying particles ! These physicists must be joking, right? These bosons can hardly carry themselves – as evidenced by the fact they peter out just like all of those other ‘resonances’ !

Hmm… Not sure what to say. It’s true that their honorific title – ‘intermediate vectors’ – seems to be quite apt: they are very intermediate indeed: they only appear as a short-lived stage in between the initial and final state of the system. Again, it leads one to think that these W bosons may just reflect some kind of energy blob caused by some neutrino – or anti-neutrino – crashing into another matter particle (a quark or an electron). Whatever it is, this weak force is surely the odd one out.

Odd one out

In my previous post, I mentioned other asymmetries as well. Let’s revisit them.

Time irreversibility

In Nature, uranium is usually found as uranium-238. Indeed, that’s the most abundant isotope of uranium: about 99.3% of all uranium is U-238. There’s also some uranium-235 out there: some 0.7%. And there are also trace amounts of U-234. And that’s it really. So where is the U-232 we introduced above when talking about alpha decay? Well… We said it has a half-life of 68.9 years only and so it’s rather normal U-232 cannot be found in Nature. What? Yes: 68.9 years is nothing compared to the half-life of U-238 (4.47 billion years) or U-235 (704 million years), and so it’s all gone. In fact, the tiny proportion of U-235 on this Earth is what allows us to date the Earth. The math and physics involved resemble the math and physics involved in carbon-dating, but carbon-dating is used for organic materials only, because the carbon-14 that’s used also has a fairly short half-life: 5,730 years—so that’s almost a hundred times longer than U-232’s half-life but… Well… Not like millions or billions of years. [You’ll immediately ask why this C-14 is still around if it’s got such a short half-life. The answer to that is easy: C-14 is continually being produced in the atmosphere and, hence, unlike U-232, it doesn’t just disappear.]
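The dating logic itself is simple enough: if you know the half-life, and you can measure what fraction of the original amount is still left, the age follows from the exponential decay law N(t) = N0·2–t/t½. A little sketch – the ‘remaining fraction’ below is a made-up number, just to show the arithmetic:

```python
import math

# Radiometric dating in a nutshell: N(t) = N0 * 2**(-t/half_life), and so
# t = half_life * log2(N0/N). The remaining fraction below is made up, just to
# illustrate the arithmetic - it's not a real measurement.
half_life_C14 = 5730.0      # years
remaining_fraction = 0.25   # assume a quarter of the original C-14 is left

age = half_life_C14 * math.log2(1.0 / remaining_fraction)
print(f"estimated age ≈ {age:.0f} years")   # 0.25 left = two half-lives ≈ 11,460 years
```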

Hmm… Interesting. Radioactive decay suggests time irreversibility. Indeed, it’s wonderful and amazing – but sad at the same time:

  1. There’s so much diversity – a truly incredible range of chemical elements making life what it is.
  2. But so all these chemical elements have been produced through a process of nuclear fusion in stars (stellar nucleosynthesis), which were then blasted into space by supernovae, and so they then coagulated into planets like ours.
  3. However, all of the heavier atoms will decay back into some lighter element because of radioactive decay, as shown in the graph below.
  4. So we are doomed !

Overview of decay modes

In fact, some of the GUT theorists think that there is no such thing as ‘stable nuclides’ (that’s the black line in the graph above): they claim that all atomic species will decay because – according to their line of reasoning – the proton itself is NOT stable.

WHAT? Yeah ! That’s what Feynman complained about too: he obviously doesn’t like these GUT theorists either. Of course, there is an expensive experiment trying to prove spontaneous proton decay: the so-called Super-Kamiokande (Super-K) detector, deep underground in the Kamioka mine in Japan. It’s basically a huge tank of ultra-pure water with a lot of machinery around it… Just google it. It’s fascinating. If, one day, it were to prove that there’s proton decay, our Standard Model would be in very serious trouble – because it doesn’t cater for unstable protons. That being said, I am happy that has not happened so far – because it would mean our world would really be doomed.

What do I mean by that? We’re all doomed, aren’t we? If only because of the Second Law of Thermodynamics. Huh? Yes. That ‘law’ just expresses a universal principle: all kinetic and potential energy observable in nature will, in the end, dissipate: differences in temperature, pressure, and chemical potential will even out. Entropy increases. Time is NOT reversible: it points in the direction of increasing entropy – till all is the same once again. Sorry?

Don’t worry about it. When everything is said and done, we humans – or life in general – are an amazing negation of the Second Law of Thermodynamics: temperature, pressure, chemical potential and what have you – it’s all super-organized and super-focused in our body ! But it’s temporary indeed – and we actually don’t negate the Second Law of Thermodynamics: we create order by creating disorder. In any case, I don’t want to dwell on this point. Time reversibility in physics usually refers to something else: time reversibility would mean that all basic laws of physics (and with ‘basic’, I am excluding this higher-level Second Law of Thermodynamics) would be time-reversible: if we’d put in minus t (–t) instead of t, all formulas would still make sense, wouldn’t they? So we could – theoretically – reverse our clock and stopwatches and go back in time.

Can we do that?

Well… We can reverse a lot. For example, U-232 decays into a lot of other stuff BUT we can produce U-232 from scratch once again—from thorium to be precise. In fact, that’s how we got it in the first place: as mentioned above, any natural U-232 that might have been produced in those stellar nuclear fusion reactors is gone. But so that means that alpha decay is reversible: we’re producing stuff that sticks around for dozens of years – stuff that probably existed a long time ago but decayed – and so now we’re reversing the arrow of time using our nuclear science and technology.

Now, you may object that you don’t see Nature spontaneously assemble the nuclear technology we’re using to produce U-232, except if Nature would go for that Big Crunch everyone’s predicting so it can repeat the Big Bang once again (so that’s the oscillating Universe scenario)—and you’re obviously right in that assessment. That being said, from some kind of weird existential-philosophical point of view, it’s kind of nice to know that – in theory at least – there is time reversibility indeed (or T symmetry as it’s called by scientists). 

[Voice booming from the sky] STOP DREAMING ! TIME REVERSIBILITY DOESN’T EXIST !

What? That’s right. For beta decay, we don’t have T symmetry. The weak force breaks all kinds of symmetries, and time symmetry is only one of them. I talked about these in my previous post (Loose Ends) – so please have a look at that, and let me just repeat the basics:

  1. Parity (P) symmetry or mirror symmetry revolves around the notion that Nature should not distinguish between right- and left-handedness, so everything that works in our world should also work in the mirror world. Now, the weak force does not respect P symmetry: β– decay only produces right-handed antineutrinos (and β+ decay only left-handed neutrinos) – never their mirror-image versions. Full stop. Our world is different from the mirror world because the weak force knows the difference between left and right, and some stuff only works with left-handed stuff (and then some other stuff only works with right-handed stuff). In short, the weak force doesn’t work the same in the mirror world: there, β decay would have to produce left-handed antineutrinos, which Nature – in our world at least – does not seem to provide. Not impossible to imagine, but a bit of a nuisance, you’ll agree.
  2. Charge conjugation or charge (C) symmetry revolves around the notion that a world in which we reverse all (electric) charge signs should work just the same. Now, the weak force also does not respect C symmetry. I’ll let you go through the reasoning for that, but it’s the same really. Just reversing all charge signs would not make the weak force ‘work’ in such a world: we’d have to ‘keep’ some of the signs – notably those of our W bosons !
  3. Initially, it was thought that the weak force respected the combined CP symmetry (and, therefore, that the principle of P and C symmetry could be substituted by a combined CP symmetry principle) but two experimenters – Val Fitch and James Cronin – got a Nobel Prize when they proved that this was not the case. To be precise, the spontaneous decay of neutral kaons (which is a type of decay mediated by the weak force) does not respect CP symmetry. Now, that was the death blow to time reversibility (T symmetry). Why? Can’t we just make a film of those experiments not respecting P, C or CP symmetry, and then just press the ‘reverse’ button? We could but one can show that the relativistic invariance in Einstein’s relativity theory implies a combined CPT symmetry. Hence, if CP is a broken symmetry, then the T symmetry is also broken. So we could play that film, but the laws of physics would not make sense ! In other words, the weak force does not respect T symmetry either !

To summarize this rather lengthy philosophical digression: a full CPT sequence of operations would work. So we could – in sequence – (1) change all particles to antiparticles (C), (2) reflect the system in a mirror (P), and (3) change the sign of time (T), and we’d have a ‘working’ anti-world that would be just as real as ours. HOWEVER, we do not live in a mirror world. We live in OUR world – and so left-handed is left-handed, and right-handed is right-handed, and positive is positive and negative is negative, and so THERE IS NO TIME REVERSIBILITY: the weak force does not respect T symmetry.

Do you understand now why I call the weak force the weird force? Penrose devotes a whole chapter to time reversibility in his Road to Reality, but he does not focus on the weak force. I wonder why. All that rambling on the Second Law of Thermodynamics is great – but one should relate that ‘principle’ to the fundamental forces and, most notably, to the weak force.

Post scriptum 1:

In one of my previous posts, I complained about not finding any good image of the Higgs particle. The problem is that these super-duper particle accelerators don’t use bubble chambers anymore. The scales involved have become incredibly small and so all that we have is electronic data, it seems, and that is then re-assembled into some kind of digital image but – when everything is said and done – these images are only simulations. Not the real thing. I guess I am just an old grumpy guy – a 45-year old economist: what do you expect? – but I’ll admit that those black-and-white pictures above make my heart race a bit more than those colorful simulations. But so I found a good simulation. It’s the cover image of Wikipedia’s Physics beyond the Standard Model (I should have looked there in the first place, I guess). So here it is: the “simulated Large Hadron Collider CMS particle detector data depicting a Higgs boson (produced by colliding protons) decaying into hadron jets and electrons.”

CMS_Higgs-event (1)

So that’s what gives mass to our massive W bosons. The Higgs particle is a massive particle itself: an estimated 125–126 GeV/c2, so that’s about 1.5 times the mass of the W bosons. I tried to look into decay widths and all that, but it’s all quite confusing. In short, I have no doubt that the Higgs theory is correct – the data is all we have and, when everything is said and done, we have an honorable Nobel Prize Committee thinking the evidence is good enough (which – in light of their rather conservative approach (which I fully subscribe to: don’t get me wrong !) – usually means that it’s more than good enough !) – but I can’t help thinking this is a theory which has been designed to match experiment.

Wikipedia writes the following about the Higgs field:

“The Higgs field consists of four components, two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarization components of the massive W+, W– and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realized as) the massive Higgs boson.”

Hmm… So we assign some extra degrees of freedom to the W bosons (sorry for the jargon: I am talking about these ‘longitudinal third-polarization components’ here), and to the W bosons only, and then we find that the Higgs field gives mass to these bosons only? I might be mistaken – I truly hope so (I’ll find out when I am somewhat stronger in quantum-mechanical math) – but, for now, it all smells somewhat fishy to me. It’s all consistent, yes – and I am even more skeptical about GUT stuff ! – but it does look somewhat artificial.

But then I guess this rather negative appreciation of the mathematical beauty (or lack of it) of the Standard Model is really what is driving all these GUT theories – and so I shouldn’t be so skeptical about them ! 🙂

Oh… And as I’ve inserted some images of collisions already, let me insert some more. The ones below document the discovery of quarks. They come out of the above-mentioned coffee table book of Lederman and Schramm (1989). The accompanying texts speak for themselves.

Quark - 1

Quark - 2

Quark - 3

 

Post scriptum 2:

I checked the source of that third diagram showing how an incoming neutrino could possibly cause a neutron to become a proton. It comes out of the August 2001 issue of Physics Today indeed, and it describes a very particular type of beta decay. This is the original illustration:

inverse beta decay

The article (and the illustration above) describes how solar neutrinos traveling through heavy water – i.e. water in which the hydrogen atoms are the deuterium isotope of hydrogen – can interact with the deuterium nucleus – which is referred to as the deuteron, and which we’ll represent by the symbol d in the process descriptions below. The nucleus of deuterium consists of one proton and one neutron, as opposed to the much more common protium isotope of hydrogen, which has just one proton in the nucleus. Deuterium occurs naturally (0.0156% of all hydrogen atoms in the Earth’s oceans are deuterium), but it can also be produced industrially – for use in heavy-water nuclear reactors for example. In any case, the point is that the deuteron can respond to solar neutrinos by breaking up in one of two ways:

  1. Quasi-elastically: νe + d → νe + p + n. So, in this case, the deuteron just breaks up into its two components: one proton and one neutron. That seems to happen pretty frequently, because the binding between the proton and the neutron is pretty weak.
  2. Alternatively, the solar neutrino can turn the deuteron’s neutron into a second proton, and so that’s what’s depicted in the third diagram above: νe + d → e– + p + p. So what happens, really, is νe + n → e– + p.

The article basically presents how a new neutrino detector – the Sudbury Neutrino Observatory – is supposed to work, and its author refers to the second process as inverse beta decay – but that seems to be a rather generic and imprecise term. The conclusion is that the weak force has myriad ways of expressing itself. However, the connection between neutrinos and the weak force seems to need further exploring. As for myself, I’d like to know why the hypothesis that any form of beta decay – or, for that matter, any other expression of the weak force – is actually being triggered by these tiny neutrinos crashing into (other) matter particles would not be reasonable.

In such a scenario, the W bosons would be reduced to a (very) temporary messy ‘blob’ of energy, combining kinetic and electromagnetic energy, as well as the strong binding energy between quarks if protons and neutrons are involved. Could this ‘odd one out’ be nothing but a pseudo-force? I am no doubt being very simplistic here – but then it’s an interesting possibility, isn’t it? In order to firmly deny it, I’ll need to learn a lot more about neutrinos no doubt – and about how the results of all these collisions in particle accelerators are actually being analyzed and interpreted.