Is the weak force a force?


Time reversal and CPT symmetry (I)

In my previous posts, I introduced the concept of time symmetry, and parity and charge symmetry as well. However, let’s try to explore T-symmetry first. It’s not an easy concept – contrary to what one might think at first.

The arrow of time

Let me start with a very ‘common-sense’ introduction. What do we see when we play a movie backwards? […]

We reverse time. When playing some movie backwards, we look at where things are coming from. And we see phenomena that don’t make sense, such as: (i) cars racing backwards, (ii) old people becoming younger (and dead people coming back to life), (iii) shattered glass assembling itself back into some man-made shape, and (iv) falling objects defying gravity to get back to where they were. Let’s briefly say something about these unlikely or even impossible phenomena before a more formal treatment of the matter:

  1. The first phenomenon – cars racing backwards – is unlikely to happen in real life but quite possible, and some crazies actually do organize such races.
  2. The last example – objects defying gravity – is plain impossible because of Newton’s universal law of gravitation.
  3. The other examples – the old becoming young (and the dead coming back to life), and glass shards coming back together into one piece – are also plain impossible because of some other ‘law’: the law of ever increasing entropy.

However, there’s a distinct difference between the two ‘laws’ (gravity versus increasing entropy). As one entry on Physics Stack Exchange notes, the entropy law – better known as the second law of thermodynamics – “only describes what is most likely to happen in macroscopic systems, rather than what has to happen”, but then the author immediately qualifies this apparent lack of determinism, and rightly so: “It is true that a system may spontaneously decrease its entropy over some time period, with a small but non-zero probability. However, the probability of this happening over and over again tends to zero over long times, so is completely impossible in the limit of very long times.” Hence, while one will find some people wondering whether this entropy law is a ‘real law’ of Nature – in the sense that they would question that it’s always true no matter what – there is actually no room for such doubts.

That being said, the character of the entropy law and the universal law of gravitation is obviously somewhat different – because they describe different realities: the entropy law is a law at the level of a system (a room full of air, for example), while the law of gravitation describes one of the four fundamental forces.

I will now be a bit more formal. What’s time symmetry in physics? The Wikipedia definition is the following: “T-symmetry is the theoretical symmetry (invariance) of physical laws under a time reversal (T) transformation.” Huh?

A ‘time reversal transformation’ amounts to inserting –t (minus t) instead of t in all of our equations describing trajectories or physical laws. Such a transformation is illustrated below. The blue curve might represent a car or a rocket accelerating (in this particular example, we have a constant acceleration a = 2), with the vertical axis measuring the displacement (x) as a function of time (t), and the red curve is its T-transformation. The two curves are each other’s mirror image, with the vertical axis (i.e. the axis measuring the displacement x) as the mirror axis.

[Graph: a trajectory x(t) with constant acceleration (blue) and its T-transformation (red)]

This view of things is quite static and, hence, somewhat primitive I should say. However, we can make a number of remarks already. For example, we can see that the slope (of the tangent) of the red curve is negative. This slope is the velocity (v) of the particle: v = dx/dt. Hence, a T-transformation is said to negate the velocity variable (in classical physics that is), just like it negates the time variable. [The verb ‘to negate’ is used here in its mathematical sense: it means ‘to take the additive inverse of a number’ — but you’ll agree that’s too lengthy to be useful as an expression.]

Note that velocity (and mass) determines (linear and angular) momentum and, hence, a T-transformation will also negate p and l, i.e. the linear and angular momentum of a particle.

Such variables – i.e. variables that are negated by the T-transformation – are referred to as odd variables, as opposed to even variables, which are not impacted by the T-transformation: the position of the particle or object (x) is an example of an even variable, and the force acting on a particle (F) is not being negated either: it just remains what it is, i.e. an external force acting on some mass or some charge. The acceleration itself is another ‘even’ variable.
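For those who prefer to see this in numbers: the little sketch below (in Python, with assumed values for x(0), v(0) and a – they are not given in the text) takes the trajectory x(t) = x(0) + v(0)·t + (a/2)·t², constructs its T-transformed version x(–t), and checks numerically that the velocity flips sign under the transformation while the acceleration does not.

```python
# Minimal sketch: which quantities are 'odd' and which are 'even' under
# the T-transformation t -> -t? Assumed values for x0, v0 and a.

def x(t, x0=1.0, v0=3.0, a=2.0):
    return x0 + v0 * t + 0.5 * a * t**2

def x_rev(t):
    """The T-transformed trajectory x(-t)."""
    return x(-t)

def derivative(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

def second_derivative(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

t = 1.0
# Velocity is odd: the reversed trajectory's velocity at t equals minus
# the original velocity at -t.
print(derivative(x_rev, t), -derivative(x, -t))
# Acceleration is even: same value, same sign.
print(second_derivative(x_rev, t), second_derivative(x, -t))
```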

This all makes sense: why would the force or acceleration change? When we put a minus sign in front of the time variable, we are basically just changing the direction of an axis measuring an independent variable. In a way, the only thing that we are doing is introducing some non-standard way of measuring time, isn’t it? Instead of counting from 0 to T, we count from 0 to minus T.

Well… No. In this post, I want to discuss actual time reversal. Can we go back in time? Can we put a genie back into a bottle? Can we reverse all processes in Nature and, if not, why not?

Time reversal and time symmetry are two different things: doing a T-transformation is a mathematical operation; trying to reverse time is something real. Let’s take an example from kinematics to illustrate the matter.

Kinematics

Kinematics can be summed up in one equation, best known as Newton’s Second Law: F = ma = m(dv/dt) = d(mv)/dt.  In words: the time-rate-of-change of a quantity called momentum (mv) is proportional to the force on an object. In other words: the acceleration (a) of an object is proportional to the force (F), and the factor of proportionality is the mass of the object (m). Hence, the mass of an object is nothing but a measure of its inertia.

The numbering of laws (first, second, etcetera) – usually combining some name of a scientist – is often quite arbitrary but, in this case (Newton’s Laws), one can really learn something from listing and discussing them in the right order:

  1. Newton’s First Law is the principle of inertia: if there’s no (net) force acting on an object, it will just continue doing what it does – i.e. nothing at all or, else, keep moving in a straight line at constant velocity, in the direction of its momentum (i.e. the product of its mass and its velocity).
  2. Newton’s Second Law is the law of kinematics. In kinematics, we analyze the motion of an object without caring about the origin of the force causing the motion. So we just describe how some force impacts the motion of the object on which it is acting without asking any questions about the force itself. We’ve written this law above: F = ma.
  3. Finally, there is Newton’s law of universal gravitation (not to be confused with his Third Law, which is the action-reaction principle), which describes the origin, the nature and the strength of the gravitational force. That’s part of dynamics, i.e. the study of the forces themselves – as opposed to kinematics, which only looks at the motion caused by those forces.

With these definitions and clarifications, we are now well armed to tackle the subject of T-symmetry in kinematics (we’ll discuss dynamics later). Suppose some object – perhaps an elementary particle but it could also be a car or a rocket indeed – is moving through space with some constant acceleration a (so we can write a(t) = a). This means that v(t) – the velocity as a function of time – will not be constant: v(t) = at. [Note that we make abstraction of the direction here and, hence, our notation does not use any bold letters (which would denote vector quantities): v(t) and a(t) are just simple scalar quantities in this example.]

Of course, when we – i.e. you and me right here and right now – are talking time reversal, we obviously do it from some kind of vantage point. That vantage point will usually be the “now” (and quite often also the “here”), and so let’s use that as our reference frame indeed and we will refer to it as the zero time point: t = 0. So it’s not the origin of time: it’s just ‘now’–the start of our analysis.

Now, the idea of going back in time also implies the idea of looking forward – and vice versa. Let’s first do what we’re used to do and so that’s to look forward.

At some point in the future, let’s call it t = T, the velocity of our object will be equal to v(T) = v(0) + aT. Why the v(0)? Well… We defined the zero time point (t = 0) in a totally random way and, hence, our object is unlikely to stop for that. On the contrary: it is likely to already have some velocity and so that’s why we’re adding this v(0) here. As for the space coordinate, our object may also not be at the exact same spot as we are (we don’t want to be too close to a departing rocket, I would assume), so we cannot assume that x(0) = 0 either, and so we will also incorporate that term somehow. It’s not essential to the analysis though.

OK. Now we are ready to calculate the distance that our object will have traveled at point T. Indeed, you’ll remember that the distance traveled is an infinite sum of infinitesimally small products vΔt: the velocity at each point of time multiplied by an infinitesimally small interval of time. You’ll remember that we write such infinite sum as an integral:

s = ∫ v(t)·dt, with the integral running from t = 0 to t = T

[In case you wonder why we use the letter ‘s’ for distance traveled: it’s because the ‘d’ symbol is already used to denote a differential and, hence, ‘s’ is supposed to stand for ‘spatium’, which is the Latin word for distance or space. As for the integral sign, you know that’s an elongated S really, don’t you? So it stands for an infinite sum indeed. But let’s go back to the main story.]

We have a functional form for v(t), namely v(t) = v(0) + at, and so we can easily work out this integral to find s as a function of time. We get the following equation:

s = ∫ [v(0) + a·t]·dt = v(0)·T + (a/2)·T² (again integrating from t = 0 to t = T)

When we re-arrange this equation, we get the position of our object as a function of time:

x(T) = x(0) + v(0)·T + (a/2)·T²

Let us now reverse time by inserting –T everywhere:

x(–T) = x(0) – v(0)·T + (a/2)·T²

Does that still make sense? Yes, of course, because we get the same result when doing our integral:

x(–T) – x(0) = ∫ [v(0) + a·t]·dt = –v(0)·T + (a/2)·T² (with the integral now running from t = 0 to t = –T)
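A quick numerical cross-check of these formulas never hurts. The sketch below (with assumed values for x(0), v(0) and a, just for the sake of the illustration) verifies that the integral of v(t) from 0 to ±T indeed reproduces x(±T) – x(0).

```python
# Cross-check: the distance traveled, i.e. the integral of v(t) from 0 to T
# (or to -T), should equal x(T) - x(0) (or x(-T) - x(0)).
# Assumed values, not taken from the text:
x0, v0, a = 1.0, 3.0, 2.0

def v(t):
    return v0 + a * t

def x(t):
    return x0 + v0 * t + 0.5 * a * t**2

def integrate(f, t_start, t_end, n=100_000):
    """Plain trapezoid rule; works for t_end < t_start as well."""
    h = (t_end - t_start) / n
    total = 0.5 * (f(t_start) + f(t_end))
    total += sum(f(t_start + i * h) for i in range(1, n))
    return total * h

T = 2.0
print(integrate(v, 0, T),  x(T) - x0)    # both ~10.0
print(integrate(v, 0, -T), x(-T) - x0)   # both ~-2.0
```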

So that ‘makes sense’. However, I am not talking mathematical consistency when I am asking if it still ‘makes sense’. Let us interpret all of this by looking at what’s happening with the velocity. At t = 0, the velocity of the object is v(0), but T seconds ago, i.e. at point t = -T, the velocity of the object was v(-T) = v(0) – aT. This velocity is less than v(0) and, depending on the value of -T, it might actually be negative. Hence, when we’re looking back in time, we see the object decelerating (and we should immediately add that the deceleration is – just like the acceleration – a constant). In fact, it’s the very same constant a which determines when the velocity becomes zero and then, when going even further back in time, when it becomes negative.

Huh? Negative velocity? Here’s the difference with the movie: in that movie that we are playing backwards, our car, our rocket, or the glass falling from a table or a pedestal would come to rest at some point back in time. We can calculate that point from our velocity equation v(t) = v(0) + at. In the example below, our object started accelerating 2.5 seconds ago, at point t = –2.5. But, unlike what we would see happening in our backwards-playing movie, we see that object not only stopping but also reversing its direction, to go in the same direction as we saw it going when we’re watching the movie before we hit the ‘Play Backwards’ button. So, yes, the velocity of our object changes sign as it starts following the trajectory on the left side of the graph.

[Graph: the full mathematical trajectory, extended back beyond t = –2.5, where the velocity changes sign]
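The turning point itself follows from setting v(t) = v(0) + a·t equal to zero, i.e. t = –v(0)/a. The values below are assumed (they’re not given in the text): they’re just one combination that reproduces the t = –2.5 of the example.

```python
# Where does the velocity change sign when we look back in time?
# Solve v(0) + a*t = 0. Assumed values that reproduce the example's -2.5.
v0, a = 5.0, 2.0

t_turn = -v0 / a
print(t_turn)            # -2.5 seconds
print(v0 + a * (-4.0))   # -3.0: further back, the mathematical trajectory
                         # has the object moving the 'wrong' way
```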

What’s going on here? Well… Rest assured: it’s actually quite simple: because the car or that rocket in our movie are real-life objects which were actually at rest before t = –2.5, the left side of the graph above is – quite simply – not relevant: it’s just a mathematical thing. So it does not depict the real-life trajectory of an accelerating car or rocket. The real-life trajectory of that car or rocket is depicted below.

[Graph: the real-life trajectory: a horizontal line (at rest) up to t = –2.5, and the accelerating curve after that]

So we also have a ‘left side’ here: a horizontal line representing no movement at all. Our movie may or may not have included this status quo. If it did, you should note that we would not be able to tell whether it was playing forwards or backwards. In fact, we wouldn’t be able to tell whether the movie was playing at all: we might just as well have hit the ‘pause’ button and be staring at a frozen screenshot.

Does that make sense? Yes. There are no forces acting on this object here and, hence, there is no arrow of time.

Dynamics

The numerical example above is confusing because our mind is not only thinking about the trajectory as such but also about the force causing the particle—or the car or the rocket in the example above—to move in this or that direction. When it’s a rocket, we know it ignited its boosters 2.5 seconds ago (because that’s what we saw – in reality or in a movie of the event) and, hence, seeing that same rocket move backwards – both in time as well as in space – while its boosters operate at full thrust does not make sense to us. Likewise, an object escaping gravity with no other forces acting on it does not make sense either.

That being said, reversing the trajectory and, hence, actually reversing the effects of time, should not be a problem—from a purely theoretical point of view at least: we should just apply twice the force produced by the boosters to give that rocket the same acceleration in the reverse direction. That would obviously mean we would force it to crash back into the Earth. Because that would be rather complicated (we’d need twice as many boosters but mounted in the opposite direction), and because it would also be somewhat evil from a moral point of view, let us consider some less destructive examples.

Let’s take gravity, or electrostatic attraction or repulsion. These two forces also cause uniform acceleration or deceleration on objects. Indeed, one can describe the force field of a large mass (e.g. the Earth)—or, in electrostatics, some positive or negative charge in space— using field vectors. The field vectors for the electric field are denoted by E, and, in his famous Lectures on Physics, Feynman uses a C for the gravitational field. The forces on some other mass m and on some other charge q can then be written as F = mC and F = qE respectively. The similarity with the F = ma equation – Newton’s Second Law in other words – is obvious, except that F = mC and F = qE are an expression of the origin, the nature and the strength of the force:

  1. In the case of the electrostatic force (remember that likes repel and opposites attract), the magnitude of E is equal to E = qC/(4πε0r²). In this equation, ε0 is the electric constant, which we’ve encountered before, and r is the distance between the charge q and the charge qC causing the field.
  2. For the gravitational field we have something similar, except that there’s only attraction between masses, no repulsion. The magnitude of C will be equal to C = –G·mE/r², with mE the mass causing the gravitational field (e.g. the mass of the Earth) and G the universal gravitational constant. [Note that the minus sign makes the direction of the force come out alright taking the existing conventions: indeed, it’s repulsion that gets the positive sign – but that should be of no concern to us here.] A short numerical sketch of both formulas follows below.
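Here is that sketch: it just plugs standard textbook values (which I’m supplying – they’re not in the text above) into both formulas, giving the Earth’s gravitational field at its surface and the electric field of a single proton at a typical atomic distance.

```python
import math

# Assumed textbook constants (not given in the post):
G    = 6.674e-11   # m^3 kg^-1 s^-2, universal gravitational constant
m_E  = 5.972e24    # kg, mass of the Earth
r_E  = 6.371e6     # m, mean radius of the Earth
eps0 = 8.854e-12   # F/m, the electric constant
q_p  = 1.602e-19   # C, charge of a proton
r    = 5.29e-11    # m, the Bohr radius, as an example distance

# Gravitational field magnitude C = G*m_E/r^2 at the Earth's surface:
print(G * m_E / r_E**2)                    # ~9.8 m/s^2, i.e. the familiar g

# Electric field magnitude E = q_C/(4*pi*eps0*r^2) of a proton at r:
print(q_p / (4 * math.pi * eps0 * r**2))   # ~5e11 V/m
```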

So now we’ve explained the dynamics behind that x(t) = x(0) + v(0)·t + (a/2)·t² curve above, and it’s these dynamics that explain why looking back in time does not make sense—not in a mathematical way but in a philosophical way. Indeed, it’s the nature of the force that gives time (or the direction of motion, which is the very same ‘arrow of time’) one–and only one–logical direction.

OK… But so what is time reversibility then – or time symmetry as it’s referred to? Let me defer an answer to this question by first introducing another topic.

Even and odd functions

I already introduced the concept of even and odd variables above. It’s obviously linked to some symmetry/asymmetry. The x(t) curve above is symmetric. It is obvious that, if we would change our coordinate system to make x(0) equal to zero, and also choose the origin of time such that v(0) = 0, then we’d have a nice symmetry with respect to the vertical axis. The graph of the quadratic function below illustrates such symmetry.

[Graph: a quadratic function, i.e. an even function]

Functions with a graph such as the one above are called even functions. A (real-valued) function f(t) of a (real) variable t is defined as even if, for all t and –t in the domain of f, we find that f(t) = f(–t).

We also have odd functions, such as the one depicted below. An odd function is a function for which f(-t) = –f(t).

[Graph: an odd function]

The function below gives the velocity as a function of time, and it’s clear that this would be an odd function if we would choose the zero time point such that v(0) = 0. In that case, we’d have a line through the origin and the graph would show an odd function. So that’s why we refer to v as an odd variable under time reversal.

[Graph: the velocity as a function of time]
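In code, the two definitions are one-liners. The check below uses the trajectory with x(0) = 0 and v(0) = 0, so x(t) = (a/2)·t² and v(t) = a·t (with a = 2 assumed, as in the earlier example).

```python
# x(t) is even (x(-t) == x(t)), v(t) is odd (v(-t) == -v(t)).
a = 2.0
x = lambda t: 0.5 * a * t**2
v = lambda t: a * t

for t in (0.5, 1.0, 2.0):
    assert x(-t) == x(t)
    assert v(-t) == -v(t)
print("x(t) is even, v(t) is odd")
```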

A very particular and very interesting example of an even function is the cosine function – as illustrated below.

[Graph: the cosine function]

Now, we said that the left side of the graph of the trajectory of our car or our rocket (i.e. the side with a negative slope and, hence, negative velocity) did not make much sense, because – as we play our movie backwards – it would depict a car or a rocket accelerating in the absence of a force. But let’s look at another situation here: a cosine function like the one above could actually represent the trajectory of a mass oscillating on a spring, as illustrated below.

[Illustration: a mass oscillating on a spring]

In the case of a spring, the force causing the oscillation pulls back when the spring is stretched, and it pushes back when it’s compressed, so the mechanism is such that the direction of the force is being reversed continually. According to Hooke’s Law, this force is proportional to the amount of stretch. If x is the displacement of the mass m, and k that factor of proportionality, then the following equality must hold at all times:

F = ma = m·(d²x/dt²) = –kx ⇔ d²x/dt² = –(k/m)x
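If you want to convince yourself that the cosine does indeed satisfy this equation, here’s a quick numerical check (k, m and the amplitude are assumed values, chosen only for the illustration).

```python
import math

# Check that x(t) = A*cos(omega*t), with omega = sqrt(k/m), satisfies
# d2x/dt2 = -(k/m)*x. Assumed values:
k, m, A = 4.0, 1.0, 1.0
omega = math.sqrt(k / m)

def x(t):
    return A * math.cos(omega * t)

def second_derivative(f, t, h=1e-5):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

for t in (0.0, 0.3, 1.0, 2.5):
    print(round(second_derivative(x, t), 4), round(-(k / m) * x(t), 4))
    # the two columns agree at every t
```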

Is there also a logical arrow of time here? Look at the illustration below. If we follow the green arrow, we can readily imagine what’s happening: the spring gets stretched and, hence, the mass on the spring (at maximum speed as it passes the equilibrium position) encounters resistance: the spring pulls it back and, hence, it slows down and then reverses direction. In the reverse direction – i.e. the direction of the red arrow – we have the reverse logic: the spring gets compressed (x is negative), the mass slows down (as evidenced by the curvature of the graph), and – at some point – it also reverses its direction of movement. [I could note that the force equation above is actually a second-order linear differential equation, and that the cosine function is its solution, but that’s a rather pedantic and, hence, totally superfluous remark here.]

[Graph: the oscillation, with a green arrow indicating one direction of time and a red arrow indicating the reverse direction]

What’s important is that, in this case, the ‘arrow of time’ could point either way, and both make sense. In other words, when we would make a movie of this oscillating movement, we could play it backwards and it would still make sense. 

Huh? Yes. Just in case you would wonder whether this conclusion depends on our starting point, it doesn’t. Just look at the illustration below, in which I assume we are starting to watch that movie (which is being played backwards without us knowing it is being played backwards) of the oscillating spring when the mass is not in the equilibrium position. It makes perfect sense: the spring is stretched, and we see the mass accelerating to the equilibrium position, as it should.

[Graph: the same oscillation, now watched from a starting point (A) away from the equilibrium position]

What’s going on here? Why can we reverse the arrow of time in the case of the spring, and why can’t we do that in the case of that particle being attracted or repelled by another? Are there two realities here? No. There’s only one. I’ve been playing a trick on you. Just think about what is actually happening and then think about that so-called ‘time reversal’:

  1. At point A, the spring is still being stretched further, in reality that is, and so the mass is moving away from the equilibrium position. Hence, in reality, it will not move to point B but further away from the equilibrium position.
  2. However, we could imagine it moving from point A to B if we would reverse the direction of the force. Indeed, the force is equal to –kx and reversing its direction is equivalent to flipping our graph around the horizontal axis (i.e. the time axis), or to shifting the time axis left or right by an amount equal to π – a quick check of that identity follows below. [Note that the ‘time’ axis is actually represented by the phase, but that’s a minor technical detail and it does not change the analysis: we just measure time in radians here instead of seconds.]
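That ‘shift by π’ claim is just the trigonometric identity –cos(t) = cos(t ± π), which a two-line check confirms.

```python
import math

# Flipping the cosine around the time axis is the same as shifting it by pi.
for t in (0.0, 0.7, 1.5, 3.0):
    assert abs(-math.cos(t) - math.cos(t + math.pi)) < 1e-12
print("-cos(t) = cos(t + pi) for all t tested")
```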

It’s a visual trick. There is no ‘real’ symmetry. The flipped graph corresponds to another situation (i.e. some other spring that started oscillating a bit earlier or later than ours here). Hence, our conclusion that it is the force that gives time direction, still holds.

Hmm… Let’s think about this. What makes our ‘trick’ work is that the force is allowed to change direction. Well… If we go back to our previous example of an object falling towards the center of some gravitational field, or a charge being attracted by some other (opposite) charge, then you’ll note that we can make sense of the ‘left side’ of the graph if we would change the sign of the force.

Huh? Yes, I know. This is getting complicated. But think about it. The graph below might represent a charged particle being repelled by another (stationary) particle: that’s the green arrow. We can then go back in time (i.e. we reverse the green arrow) if we reverse the direction of the force from repulsion to attraction. Now, that would usually lead to a dramatic event—the end of the story to be precise. Indeed, once the two particles get together, they’re glued together and so we’d have to draw another horizontal line going in the minus t direction (i.e. to the left side of our time axis) representing the status quo. Indeed, if the two particles sit right on top of each other, or if they would literally fuse or annihilate each other (like a particle and an anti-particle), then there’s no force or anything left at all… except if we would alter the direction of the force once again, in which case the two particles would fly apart again (OK. OK. You’re right in noting that’s not true in the annihilation case – but that’s a minor detail).

[Graph: a charged particle being repelled by another (green arrow) and the time-reversed trajectory (red arrow)]

Is this story getting too complicated? It shouldn’t. The point to note is that reversibility – i.e. time reversal in the philosophical meaning of the word (not that mathematical business of inserting negative variables instead of positive ones) – is all about changing the direction of the force: going back in time implies that we reverse the effects of time, and reversing the effects of time, requires forces acting in the opposite direction.

Now, when it’s only kinetic energy that is involved, then it should be easy but when charges are involved, which is the case for all fundamental forces, then it’s not so easy. That’s when charge (C) and parity (P) symmetry come into the picture.

CP symmetry

Hooke’s ‘Law’ – i.e. the law describing the force on a mass on a stretched or compressed spring – is not a fundamental law: eventually the spring will stop. Yes. It will stop even when it’s in a horizontal position and with the mass moving on a frictionless surface, as assumed above: the forces between the atoms and/or molecules in the spring give the spring the elasticity which causes the mass to oscillate around some equilibrium position, but some of the energy of that continuous movement gets lost as heat (yes, an oscillating spring does actually get warmer!) and, hence, eventually the movement will peter out and stop.

Nevertheless, the lesson we learned above is a valuable one: when it comes to the fundamental forces, we can reverse the arrow of time and still make sense of it all if we also reverse the ‘charges’. The term ‘charges’ encompasses anything measuring a propensity to interact through one of the four fundamental forces here. That’s where CPT symmetry comes in: if we reverse time, we should also reverse the charges.

But how can we change the ‘sign’ of mass: mass is always positive, isn’t it? And what about the P-symmetry – this thing about left-handed and right-handed neutrinos?

Well… I don’t know. That’s the kind of stuff I am currently exploring in my quest. I’ll just note the following:

1. Gravity might be a so-called pseudo force – because it’s proportional to mass. I won’t go into the details of that – if only because I don’t master them as yet – but Einstein’s gut instinct that gravity is not a ‘real’ fundamental force (we just have to adjust our reference frame and work with curved spacetime) – and, hence, that ‘mass’ is not like the other force ‘charges’ – is something I want to further explore. [Apart from being a measure for inertia, you’ll remember that (rest) mass can also be looked at as equivalent to a very dense chunk of energy, as evidenced by Einstein’s energy-mass equivalence formula: E = mc².]

As for now, I can only note that the particles in an ‘anti-world’ would have the same mass. In that sense, anti-matter is not ‘anti’-matter: it just carries opposite electromagnetic, strong and weak charges. Hence, our C-world (so the world we get when applying a charge transformation) would have all ‘charges’ reversed, but mass would still be mass.

2. As for parity symmetry (i.e. left- and right-handedness, aka mirror symmetry), I note that it’s raised primarily in relation to the so-called weak force and, hence, it’s also a ‘charge’ of sorts—in my primitive view of the world at least. The illustration below shows what P symmetry is all about really and may or may not help you to appreciate the point.

[Illustration: muon decay and its mirror image: (a) the hypothetical, mirror-symmetric case; (b) the actual, asymmetric case]

OK. What is this? Let’s just go step by step here.

The ‘cylinder’ (both in (a), the upper part of the illustration, and in (b), the lower part) represents a muon—or a bunch of muons actually. A muon is an unstable particle in the lepton family. Think of it as a very heavy electron for all practical purposes: it’s about 200 times the mass of an electron indeed. Its lifetime is fairly short from our (human) point of view–only 2.2 microseconds on average–but that’s actually an eternity when compared to other unstable particles.
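That ‘about 200 times’ is easy to check from the standard particle masses (the numbers below are the usual values in MeV/c², supplied by me – they’re not in the text).

```python
# Muon-to-electron mass ratio, using standard (assumed) values in MeV/c^2.
m_muon = 105.66
m_electron = 0.511
print(m_muon / m_electron)   # ~207
```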

In any case, the point to note is that it usually decays into (i) two neutrinos (one muon-neutrino and one electron-antineutrino to be precise) and – importantly – (ii) one electron, so electric charge is preserved (indeed, neutrinos got the name they have because they carry no electric charge).

Now, we have left- and right-handed muons, and we can actually line them up in one of these two directions. I would need to check how that’s done, but muons do have a magnetic moment (just like electrons) and so I must assume it’s done in the same way as in Wu’s cobalt-60 experiment: through a uniform magnetic field. In other words, we know their spin directions in an experiment like this.

Now, if the weak force would respect mirror symmetry (but we already know it doesn’t), we would not be able to distinguish (i) the muon decay process in the ‘mirror world’ (i.e. the reflection of what’s going on in the (imaginary) mirror in the illustration above) from (ii) the decay process in ‘our’ (real) world. So that would be situation (a): the number of decay electrons being emitted in an upward direction would be the same (more or less) as the amount of decay electrons being emitted in a downward direction.

However, the actual laboratory experiments show that situation (b) is actually the case: most of the electrons are being emitted in only one direction (i.e. the upward direction in the illustration above) and, hence, the weak force does not respect mirror symmetry.

So what? Is that a problem?

For eminent physicists such as Feynman, it is. As he writes in his concluding Lecture on mechanics, radiation and heat (Vol. I, Chapter 52: Symmetry in Physical Laws): “It’s like seeing small hairs growing on the north pole of a magnet but not on its south pole.” [He means it allows us to distinguish the north and the south pole of a magnet in some absolute sense. Indeed, if we’re not able to tell right from left, we’re also not able to tell north from south – in any absolute sense that is. But so the experiment shows we actually can distinguish the two in some kind of absolute sense.]

I should also note that Wolfgang Pauli, one of the pioneers of quantum mechanics, said that it was “total nonsense” when he was informed about Wu’s experimental results, and that repeated experiments were needed to actually convince him that we cannot just create a mirror world out of ours. 

For me, it is not a problem. I like to think of left- and right-handedness as some charge itself, and of the combined CPT symmetry as the only symmetry that matters really. That should be evident from my rather intuitive introduction on time symmetry above.

Consider it and decide for yourself how logical or illogical it is. We could define what Feynman refers to as an axial vector: watching that muon ‘from below’, we see that its spin is clockwise, and let’s use that fact to define an axial vector pointing in the same direction as the thick black arrow (it’s the so-called ‘right-hand screw rule’ really), as shown below.

[Illustration: the axial vector defined by the muon’s spin (right-hand screw rule), and its reflection in the mirror]

Now, let’s suppose that mirror world actually exists, in some corner in the universe, and that a guy living in that ‘mirror world’ would use that very same ‘right-hand-screw rule’: his axial vector when doing this experiment would point in the opposite direction (see the thick black arrow in the mirror, which points in the opposite direction indeed). So what’s wrong with that?

Nothing – in my modest view at least. Left- and right-handedness can just be looked at as any other ‘charge’ – I think – and, hence, if we would be able to communicate with that guy in the ‘mirror world’, the two experiments would come out the same. So the other guy would also notice that the weak force does not respect mirror symmetry but so there’s nothing wrong with that: he and I should just get over it and continue to do business as usual, wouldn’t you agree?

After all, there could be a zillion reasons for the experiment giving the results it does: perhaps the ‘right-handed’ spin of the muon is sort of transferred to the electron as the muon decays, thereby giving it the same type of magnetic moment as the one that made the muon line up in the first place. Or – in a much wilder hypothesis which no serious physicist would accept – perhaps we actually do not yet understand everything of the weak decay process: perhaps we’ve got all these solar neutrinos (which all share the same spin direction) interfering in the process.

Whatever it is: Nature knows the difference between left and right, and I think there’s nothing wrong with that. Full stop.

But then what is ‘left’ and ‘right’ really? As the experiment pointed out, we can actually distinguish between the two in some kind of absolute sense. It’s not just a convention. As Feynman notes, we could decide to label ‘right’ as ‘left’, and ‘left’ as ‘right’ right here and right now – and impose the new convention everywhere – but then these physics experiments will always yield the same physical results, regardless of our conventions. So, while we’d put different stickers on the results, the laws of physics would continue to distinguish between left and right in the same absolute sense as Wu’s cobalt-60 decay experiment did back in 1956.

The really interesting thing in this rather lengthy discussion–in my humble opinion at least–is that imaginary ‘guy in the mirror world’. Could such mirror world exist? Why not? Let’s suppose it does really exist and that we can establish some conversation with that guy (or whatever other intelligent life form inhabiting that world).

We could then use these beta decay processes to make sure his ‘left’ and ‘right’ definitions are equal to our ‘left’ and ‘right’ definitions. Indeed, we would tell him that the muons can be left- or right-handed, and we would ask him to check his definition of ‘right-handed’ by asking him to repeat Wu’s experiment. And, then, when finally inviting him over and preparing to physically meet with him, we should tell him he should use his “right” hand to greet us. Yes. We should really do that.

Why? Well… As Feynman notes, he (or she or whatever) might actually be living in an anti-matter world, i.e. a world in which all charges are reversed, i.e. a world in which protons carry negative charge and electrons carry positive charge, and in which the quarks have opposite color charge. In that case, we would have been updating each other on all kinds of things in a zillion exchanges, and we would have been trying hard to assure each other that our worlds are not all that different (including that crucial experiment to make sure his left and right are the same as ours), but – if he would happen to live in an anti-matter world – then he would put out his left hand – not his right – when getting out of his spaceship. Touching it would not be wise. 🙂

[Let me be much more pedantic than Feynman is and just point out that his spaceship would obviously have been annihilated by ‘our’ matter long before he would have gotten to the meeting place. As soon as he’d get out of his ‘anti-matter’ world, we’d see a big flash of light and that would be it.]

Symmetries and conservation laws

A final remark should be made on the relation between all those symmetries and conservation laws. When everything is said and done, all that we’ve got is some nice graphs and then some axis or plane of symmetry (in two and three dimensions respectively). Is there anything more to it? There is.

There’s a “deep connection”, it seems, between all these symmetries and the various ‘laws of conservation’. In our examples of ‘time symmetry’, we basically illustrated the law of energy conservation:

  1. When describing a particle traveling through an electrostatic or gravitational field, we basically just made the case that potential energy is converted into kinetic energy, or vice versa.
  2. When describing an oscillating mass on a spring, we basically looked at the spring as a reservoir of energy – releasing and absorbing kinetic energy as the mass oscillates around its zero energy point – but, once again, all we described was a system in which the total amount of energy – kinetic and elastic – remained the same.

In fact, the whole discussion on CPT symmetry above has been quite simplistic and can be summarized as follows:

Energy is being conserved. Therefore, if you want to reverse time, you’ll need to reverse the forces as well. And reversing the forces implies a change of sign of the charges causing those forces.

In short, one should not be fascinated by T-symmetry alone. Combined CPT symmetry is much more intuitive as a concept and, hence, much more interesting. So, what’s left?

Quite a lot. I know you have many more questions at this point. At least I do:

  1. What does it mean in quantum mechanics? How does the Uncertainty Principle come into play?
  2. How does it work exactly for the strong force, or for the weak force? [I guess I’d need to find out more about neutrino physics here…]
  3. What about the other ‘conservation laws’ (such as the conservation of linear or angular momentum, for example)? How are they related to these ‘symmetries’?

Well… That’s complicated business it seems, and even Feynman doesn’t explore these topics in the above-mentioned final Lecture on (classical) mechanics. In any case, this post has become much too long already so I’ll just say goodbye for the moment. I promise I’ll get back to you on all of this.

Post scriptum:

If you have read my previous post (The Weird Force), you’ll wonder why – in the example of how a mirror world would relate to ours – I assume that the combined CP symmetry holds. Indeed, when discussing the ‘weird force’ (i.e. the weak force), I mentioned that it does not respect any of the symmetries, except for the combined CPT symmetry. So it does not respect (i) C symmetry, (ii) P symmetry and – importantly – it also does not respect the combined CP symmetry. This is a deep philosophical point which I’ll talk about in my next post. However, I needed this post as an ‘introduction’ to the next one.

The weird force

In my previous post (Loose Ends), I mentioned the weak force as the weird force. Indeed, unlike photons or gluons (i.e. the presumed carriers of the electromagnetic and strong force respectively), the weak force carriers (W bosons) have (1) mass and (2) electric charge:

  1. W bosons are very massive. The equivalent mass of a W+ or W– boson is some 86.3 atomic mass units (amu): that’s about the same as a rubidium or strontium atom. The mass of a Z boson is even larger: roughly equivalent to the mass of a molybdenum atom (98 amu). That is extremely heavy: just compare with iron or silver, which have a mass of about 56 amu and 108 amu respectively. Because they are so massive, W bosons cannot travel very far before disintegrating (they actually go (almost) nowhere), which explains why the weak force is very short-range only, and so that’s yet another fundamental difference as compared to the other fundamental forces. [A quick back-of-the-envelope check of these numbers, and of just how short that range is, follows right after this list.]
  2. The electric charge of W and Z bosons explains why we have a trio of weak force carriers rather than just one: W+, W– and Z0. Feynman calls them “the three W’s”.
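Here is that back-of-the-envelope check. The boson masses and conversion factors below are standard values which I am supplying (they’re not quoted in the text); the range estimate uses the usual rule of thumb that the range of a force is of the order of the reduced Compton wavelength ħ/(mc) of its carrier.

```python
# Assumed standard values (not from the post):
m_W_GeV = 80.4          # W boson mass, in GeV/c^2
m_Z_GeV = 91.2          # Z boson mass, in GeV/c^2
amu_GeV = 0.93149       # 1 atomic mass unit, in GeV/c^2
hbar_c  = 197.327       # hbar*c, in MeV*fm

print(m_W_GeV / amu_GeV)   # ~86.3 amu, as quoted above
print(m_Z_GeV / amu_GeV)   # ~97.9 amu, roughly a molybdenum atom

# Rough range of the weak force ~ hbar/(m_W*c), in femtometer (1e-15 m):
print(hbar_c / (m_W_GeV * 1000))   # ~0.0025 fm, i.e. about 2.5e-18 m --
                                   # a few hundred times smaller than a proton
```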

The electric charge of W and Z bosons is what it is: an electric charge – just like protons and electrons. Hence, one has to distinguish it from the weak charge as such: the weak charge (or, to be correct, I should say the weak isospin number) of a particle (such as a proton or a neutron for example) is related to the propensity of that particle to interact through the weak force — just like the electric charge is related to the propensity of a particle to interact through the electromagnetic force (think about Coulomb’s law for example: likes repel and opposites attract), and just like the so-called color charge is related to the propensity of quarks (and gluons) to interact with each other through the strong force.

In short, as compared to the electromagnetic force and the strong force, the weak force (or Fermi’s interaction as it’s often called) is indeed the odd one out: these W bosons seem to mix just about everything: mass, charge and whatever else. In his 1985 Lectures on Quantum Electrodynamics, Feynman writes the following about this:

“The observed coupling constant for W’s is much the same as that for the photon. Therefore, the possibility exists that the three W’s and the photon are all different aspects of the same thing. Stephen Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called ‘the weak interactions’ into one quantum theory, and they did it. But if you look at the results they get, you can see the glue—so to speak. It’s very clear that the photon and the three W’s are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly—you can still see the ‘seams’ in the theories; they have not yet been smoothed out so that the connection becomes more beautiful and, therefore, probably more correct.” (Feynman, 1985, p. 142)

Well… That says it all, I think. And from what I can see, the (tentative) confirmation of the existence of the Higgs field has not made these ‘seams’ any less visible. However, before criticizing eminent scientists such as Weinberg and Salam, we should obviously first have a closer look at those W bosons without any prejudice.

Alpha decay, potential wells and quantum tunneling

The weak force is usually explained as the force behind a process referred to as beta decay. However, because beta decay is just one form of radioactive decay, I need to say something about alpha decay too. [There is also gamma decay but that’s like a by-product of alpha and beta decay: when a nucleus emits an α or β particle (i.e. when we have alpha or beta decay), the nucleus will usually be left in an excited state, and so it can then move to a lower energy state by emitting a gamma ray photon (gamma radiation is very hard (i.e. very high-energy) radiation) – in the same way that an atomic electron can jump to a lower energy state by emitting a (soft) light ray photon. But so I won’t talk about gamma decay.]

Atomic decay, in general, is a loss of energy accompanying a transformation of the nucleus of the atom. Alpha decay occurs when the nucleus ejects an alpha particle: an α-particle consists of two protons and two neutrons bound together and, hence, it’s identical to a helium nucleus. Alpha particles are commonly emitted by all of the larger radioactive nuclei, such as uranium (which becomes thorium as a result of the decay process), or radium (which becomes radon gas). However, alpha decay is explained by a mechanism not involving the weak force: the electromagnetic force and the nuclear force (i.e. the strong force) will do. The reasoning is as follows: the alpha particle can be looked at as a stable but somewhat separate particle inside the nucleus. Because of their charge (both positive), the alpha particle inside of the nucleus and ‘the rest of the nucleus’ are subject to strong repulsive electromagnetic forces between them. However, these repulsive electromagnetic forces are not as strong as the strong force between the quarks that make up matter and, hence, that’s what keeps them together – most of the time that is.

Let me be fully complete here. The so-called nuclear force between composite particles such as protons and neutrons – or between clusters of protons and neutrons in this case – is actually the residual effect of the strong force. The strong force itself is between quarks – and between them only – and so that’s what binds them together in protons and neutrons (so that’s the next level of aggregation you might say). Now, the strong force is mostly neutralized within those protons and neutrons, but there is some residual force, and so that’s what keeps a nucleus together and what is referred to as the nuclear force.

There is a very helpful analogy here: the electromagnetic forces between neutral atoms (and/or molecules)—referred to as van der Waals forces (that’s what explains the liquid shape of water, among other things)— are also the residual of the (much stronger) electromagnetic forces that tie the electrons to the nucleus.

Now, that residual strong force – i.e. the nuclear force – diminishes in strength with distance but, within a certain distance, that residual force is strong enough to do what it does, and that’s to keep the nucleus together. This stable situation is usually depicted by what is referred to as a potential well:

[Illustration: a potential well]

The name is obvious: a well is a hole in the ground from which you can get water (or oil or gas or whatever). Now, the sea level might actually be lower than the bottom of a well, but the water would still stay in the well. In the illustration above, we are not depicting water levels but energy levels, but it’s equally obvious it would require some energy to kick a particle out of this well: if it were water, we’d require a pump to get it out but, of course, it would be happy to flow to the sea once it’s out. Indeed, once a charged particle is out (I am talking our alpha particle now), it will obviously stay out because of the repulsive electromagnetic forces coming into play (positive charges repel each other).

But so how can it escape the nuclear force and go up on the side of the well? [A potential pond or lake would have been a better term – but then that doesn’t sound quite right, does it? :-)]

Well, the energy may come from outside – that’s what’s referred to as induced radioactive decay (just Google it and you will find tons of articles on experiments involving laser-induced accelerated alpha decay) – or, and that’s much more intriguing, the Uncertainty Principle comes into play.

Huh? Yes. According to the Uncertainty Principle, the energy of our alpha particle inside of the nucleus wiggles around some mean value, but our alpha particle also has an amplitude to be at some higher energy level. That results not only in a theoretical probability for it to escape out of the well but also in something actually happening if we wait long enough: the amplitude (and, hence, the probability) is tiny, but it’s what explains the decay process – and what gives U-232 a half-life of 68.9 years, and also what gives the more common U-238 a much more comfortable 4.47 billion years as its half-life.
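For completeness: those half-lives translate into actual numbers through the decay law N(t) = N0·2^(–t/T½). The 500-year horizon in the sketch below is just an example.

```python
# Fraction of a sample left after t years, given its half-life.
def fraction_left(t_years, half_life_years):
    return 2 ** (-t_years / half_life_years)

print(fraction_left(500, 68.9))      # U-232: ~0.7% left after 500 years
print(fraction_left(500, 4.47e9))    # U-238: ~100% left after 500 years
```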

[…]

Now that we’re talking about wells and all that, we should also mention that this phenomenon of getting out of the well is referred to as quantum tunneling. You can easily see why: it’s like the particle dug its way out. However, it didn’t: instead of digging under the sidewall, it sort of ‘climbed over’ it. Think of it being stuck and trying and trying and trying – a zillion times – to escape, until it finally did. So now you understand this fancy word: quantum tunneling. However, this post is about the weak force and so let’s discuss beta decay now.

Beta decay and intermediate vector bosons

Beta decay also involves transmutation of nuclei, but not by the emission of an α-particle but by a β-particle. A beta particle is just a different name for an electron (β–) and/or its anti-matter counterpart: the positron (β+). [Physicists usually simplify stuff but in this case, they obviously didn’t: why don’t they just write e– and e+ here?]

An example of β decay is the decay of carbon-14 (C-14) into nitrogen-14 (N-14), and an example of β+ decay is the decay of magnesium-23 into sodium-23. C-14 and N-14 have the same mass but they are different atoms. The decay process is described by the equations below:

C-14 → N-14 + e– + ν̄e (β– decay)
Mg-23 → Na-23 + e+ + νe (β+ decay)

You’ll remember these formulas from your high-school days: beta decay does not change the mass number (carbon and nitrogen have the same mass: 14 units) but it does change the atomic (or proton) number: nitrogen has an extra proton. So one of the neutrons became a proton ! [The second equation shows the opposite: a proton became a neutron.] In order to do that, the carbon atom had to eject a negative charge: that’s the electron you see in the equation above.

In addition, there is also the ejection of an anti-neutrino (that’s what the bar above the νe symbol stands for: antimatter). You’ll wonder what an antineutrino could possibly be. Don’t worry about it: it’s not any spookier than the neutrino. Neutrinos and anti-neutrinos have no electric charge and so you cannot distinguish them on that account (electric charge). However, all antineutrinos have right-handed helicity (i.e. they come in only one of the two possible spin states), while the neutrinos are all left-handed. That’s why beta decay is said to not respect parity symmetry, aka mirror symmetry. Hence, in the case of beta decay, Nature does distinguish between the world and the mirror world ! I’ll come back to that but let me first lighten up the discussion somewhat with a graphical illustration of that neutron-proton transformation.

[Illustration: beta-minus decay: a neutron transforming into a proton, with emission of an electron and an electron antineutrino]
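To make the ‘electric charge is preserved’ statement explicit, here’s a trivial bookkeeping check for the C-14 decay above.

```python
# Charge (in units of e) and mass number must balance in
# C-14 -> N-14 + e- + antineutrino.
Z_C, A_C = 6, 14      # carbon-14 nucleus
Z_N, A_N = 7, 14      # nitrogen-14 nucleus
q_electron, q_antineutrino = -1, 0

assert Z_C == Z_N + q_electron + q_antineutrino   # 6 = 7 - 1 + 0
assert A_C == A_N                                 # mass number unchanged
print("charge and mass number balance")
```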

As for the magnesium-sodium transformation, we’d have something similar but with a positron instead of an electron (a positron is just an electron with a positive charge for all practical purposes) and a regular neutrino instead of an anti-neutrino. So, compared to the β– decay above, we’d just have the anti-matter counterparts of the two ejected particles. [Don’t be put off by the term ‘anti-matter’: anti-matter is really just like regular matter – except that the charges have opposite sign. For example, the anti-matter counterpart of a blue quark is an anti-blue quark, and the anti-matter counterpart of the neutrino has right-handed helicity – or spin – as opposed to the ‘left-handed’ ‘ordinary’ neutrinos.]

Now, you surely will have several serious questions. The most obvious question is what happens with the electron and the neutrino? Well… Those spooky neutrinos are gone before you know it and so don’t worry about them. As for the electron, the carbon had only six electrons but the nitrogen needs seven to be electrically neutral… So you might think the new atom will take care of it. Well… No. Sorry. Because of its kinetic energy, the electron is likely to just explore the world and crash into something else, and so we’re left with a positively charged nitrogen ion indeed. So I should have added a little + sign next to the N in the formula above. Of course, one cannot exclude the possibility that this ion will pick up the electron later – but don’t bet on it: the ion might have to absorb another electron, or not find any free electrons !

As for the positron (in a β+ decay), that will just grab the nearest electron around and auto-destruct—thereby generating two high-energy photons (so that’s a little light flash). The net result is that we do not have an ion but a neutral sodium atom. There is also a closely related process in which the nucleus does not emit a positron at all but, instead, captures one of the electrons on a shell around it (the K or L shell for example): that process is known as electron capture, and its ‘transformation equation’ can be written as p + e– → n + νe (with p and n denoting a proton and a neutron respectively).

The more important question is: where are the W and Z bosons in this story?

Ah ! Yes! Sorry I forgot about them. The Feynman diagram below shows how it really works—and why the name of intermediate vector bosons for these three strange ‘particles’ (W+, W–, and Z0) is so apt. These W bosons are just a short trace of ‘something’ indeed: their half-life is about 3×10−25 s, and so that’s the same order of magnitude (or minitude I should say) as the mean lifetime of other resonances observed in particle collisions.

[Feynman diagram: beta decay mediated by a W boson]

Indeed, you’ll notice that, in this so-called Feynman diagram, there’s no space axis. That’s because the distances involved are so tiny that we have to distort the scale—so we are not using equivalent time and distance units here, as Feynman diagrams should. That’s in line with a more prosaic description of what may be happening: W bosons mediate the weak force by seemingly absorbing an awful lot of momentum, spin, and whatever other quantum numbers describe the particles involved, to then eject an electron (or positron) and a neutrino (or an anti-neutrino).

Hmm… That’s not a standard description of a W boson as a force carrying particle, you’ll say. You’re right. This is more the description of a Z boson. What’s the Z boson again? Well… I haven’t explained it yet. It’s not involved in beta decay. There’s a process called elastic scattering of neutrinos. Elastic scattering means that some momentum is exchanged but neither the target (an electron or a nucleus) nor the incident particle (the neutrino) are affected as such (so there’s no break-up of the nucleus for example). In other words, things bounce back and/or get deflected but there’s no destruction and/or creation of particles, which is what you would have with inelastic collisions. Let’s examine what happens here.

W and Z bosons in neutrino scattering experiments

It’s easy to generate neutrino beams: remember their existence was confirmed in 1956 because nuclear reactors create a huge flux of them ! So it’s easy to send lots of high-energy neutrinos into a cloud or bubble chamber and see what happens. Cloud and bubble chambers are prehistoric devices which were built and used to detect electrically charged particles moving through it. I won’t go into too much detail but I can’t resist inserting a few historic pictures here.

The first two pictures below document the first experimental confirmation of the existence of positrons by Carl Anderson, back in 1932 (and, no, he’s not Danish but American), for which he got a Nobel Prize. The magnetic field which gives the positron some curvature—the trace of which can be seen in the image on the right—is generated by the coils around the chamber. Note the opening in the coils, which allows for taking a picture when the supersaturated vapor is suddenly being decompressed – and so the charged particle that goes through it leaves a trace of ionized atoms behind that act as ‘nucleation centers’ around which the vapor condenses, thereby forming tiny droplets. Quite incredible, isn’t it? One can only admire the perseverance of these early pioneers.

[Photographs: Carl Anderson’s cloud chamber, and the positron track he recorded with it in 1932]

The picture below is another historical first: it’s the first detection of a neutrino in a bubble chamber. It’s fun to analyze what happens here: we have a mu-meson – aka a muon – coming out of the collision here (that’s just a heavier version of the electron) and then a pion – which should (also) be electrically charged because the muon carries electric charge… But I will let you figure this one out. I need to move on with the main story. 🙂

[Photograph: the first detection of a neutrino in a bubble chamber (annotated)]

The point to note is that these spooky neutrinos collide with other matter particles. In the image above, it’s a proton, but so when you’re shooting neutrino beams through a bubble chamber, a few of these neutrinos can also knock electrons out of orbit, and so that electron will seemingly appear out of nowhere in the image and move some distance with some kinetic energy (which can all be measured because magnetic fields around it will give the electron some curvature indeed, and so we can calculate its momentum and all that).

Of course, they will tend to move in the same direction – more or less at least – as the neutrinos that knocked them loose. So it’s like the Compton scattering which we discussed earlier (from which we could calculate the so-called classical radius of the electron – or its size if you will)—but with one key difference: the electrons get knocked loose not by photons, but by neutrinos.

But… How can they do that? Photons carry the electromagnetic field so the interaction between them and the electrons is electromagnetic too. But neutrinos? Last time I checked, they were matter particles, not bosons. And they carry no charge. So what makes them scatter electrons?

You’ll say that’s a stupid question: it’s the neutrino, dummy ! Yes, but how? Well, you’ll say, they collide—don’t they? Yes. But we are not talking tiny billiard balls here: if particles scatter, one of the fundamental forces of Nature must be involved, and usually it’s the electromagnetic force: it’s the electron density around nuclei indeed that explains why atoms will push each other away if they meet each other and, as explained above, it’s also the electromagnetic force that explains Compton scattering. So billiard balls bounce back because of the electromagnetic force too and…

OK-OK-OK. I got it ! So here it must be the strong force or something. Well… No. Neutrinos are not made of quarks. You’ll immediately ask what they are made of – but the answer is simple: they are what they are – one of the four types of matter particles (per generation) in the Standard Model – and so they are not made of anything else. Capito?

OK-OK-OK. I got it ! It must be gravity, no? Perhaps these neutrinos don’t really hit the electron: perhaps they skim near it and sort of drag it along as they pass? No. It’s not gravity either. It can’t be. We have no exact measurement of the mass of a neutrino but it’s damn close to zero – and, hence, way too small to exert any such influence on an electron. It’s just not consistent with those traces.

OK-OK-OK. I got it ! It’s that weak force, isn’t it? YES ! The Feynman diagrams below show the mechanism involved. As far as terminology goes (remember Feynman’s complaints about the up, down, strange, charm, beauty and truth quarks?), I think this is even worse. The interaction is described as a current, and when the neutral Z boson is involved, it’s called a neutral current – as opposed to…  Well… Charged currents. Neutral and charged currents? That sounds like sweet and sour candy, doesn’t it? But isn’t candy supposed to be sweet? Well… No. Sour candy is pretty common too. And so neutral currents are pretty common too.

[Feynman diagrams: neutrino-electron scattering via Z exchange (a neutral current) and via W exchange (a charged current), and beta decay with an incoming neutrino]

You obviously don’t believe a word of what I am saying and you’ll wonder what the difference is between these charged and neutral currents. The end result is the same in the first two pictures: an electron and a neutrino interact, and they exchange momentum. So why is one current neutral and the other charged? In fact, when you ask that question, you are actually wondering whether we need that neutral Z boson. W bosons should be enough, no?

No. The first and second picture are “the same but different”—and you know what that means in physics: it means it’s not the same. It’s different. Full stop. In the second picture, there is electron absorption (only for a very brief moment obviously, but so that’s what it is, and you don’t have that in the first diagram) and then electron emission, and there’s also neutrino absorption and emission. […] I can sense your skepticism – and I actually share it – but that’s what I understand of it !

[…] So what’s the third picture? Well… That’s actually beta decay: a neutron becomes a proton, and there’s emission of an electron and… Hey ! Wait a minute ! This is interesting: this is not what we wrote above: we have an incoming neutrino instead of an outgoing anti-neutrino here. So what’s this?

Well… I got this illustration from a blog on physics (Galileo’s Pendulum – The Flavor of Neutrinos) which, in turn, mentions Physics Today as its source. The incoming neutrino has nothing to do with the usual representation of an anti-matter particle as a particle traveling backwards in time. It’s something different, and it triggers a very interesting question: could beta decay possibly be ‘triggered’ by neutrinos? Who knows?

I googled it, and there seems to be some evidence supporting such a thesis. However, this ‘evidence’ is flimsy (the only real ‘clue’ is that the activity of the Sun, as measured by the intensity of solar flares, seems to have some (tiny) impact on the rate of decay of radioactive elements on Earth) and, hence, most ‘serious’ scientists seem to reject that possibility. I wonder why: it would make the ‘weird force’ somewhat less weird in my view. So… What to say? Well… Nothing much at this moment. Let me move on and examine the question a bit more in detail in a Post Scriptum.

The odd one out

You may wonder if neutrino-electron interactions always involve the weak force. The answer to that question is simple: Yes ! Because they do not carry any electric charge, and because they are not quarks, neutrinos only interact through the weak force (leaving aside gravity, which – as noted – is negligible here). However, as evidenced by all the stuff I wrote on beta decay, you cannot turn this statement on its head: the weak force is relevant not only for neutrinos but for electrons and quarks as well ! That gives us the following connection between forces and matter:

forces and matter

[Specialists reading this post may say they’ve not seen this diagram before. That might be true. I made it myself – for a change – but I am sure it’s around somewhere.]

It is a weird asymmetry: almost massless particles (neutrinos) interact with other particles through massive bosons, and these massive ‘things’ are supposed to be ‘bosons’, i.e. force-carrying particles ! These physicists must be joking, right? These bosons can hardly carry themselves – as evidenced by the fact that they peter out just like all of those other ‘resonances’ !

Hmm… Not sure what to say. It’s true that their honorific title – ‘intermediate vector bosons’ – seems to be quite apt: they are very intermediate indeed: they only appear as a short-lived stage in between the initial and final state of the system. Again, it leads one to think that these W bosons may just reflect some kind of energy blob caused by some neutrino – or anti-neutrino – crashing into another matter particle (a quark or an electron). Whatever it is, this weak force is surely the odd one out.

Odd one out

In my previous post, I mentioned other asymmetries as well. Let’s revisit them.

Time irreversibility

In Nature, uranium is usually found as uranium-238. Indeed, that’s the most abundant isotope of uranium: about 99.3% of all uranium is U-238. There’s also some uranium-235 out there: some 0.7%. And there are also trace amounts of U-234. And that’s it really. So where is the U-232 we introduced above when talking about alpha decay? Well… We said it has a half-life of only 68.9 years, so it’s rather normal that U-232 cannot be found in Nature. What? Yes: 68.9 years is nothing compared to the half-life of U-238 (4.47 billion years) or U-235 (704 million years), and so it’s all gone. In fact, the tiny proportion of U-235 left on this Earth is part of what allows us to date the Earth. The math and physics involved resemble the math and physics involved in carbon-dating, but carbon-dating is used for organic materials only, because the carbon-14 that’s used also has a fairly short half-life: 5,730 years—so that’s almost a hundred times longer than U-232’s half-life but… Well… Not like millions or billions of years. [You’ll immediately ask why this C-14 is still around if it’s got such a short half-life. The answer to that is easy: C-14 is continually being produced in the atmosphere and, hence, unlike U-232, it doesn’t just disappear.]
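Just to make the arithmetic of these half-lives a bit more tangible, here’s a minimal Python sketch (my own, not from any textbook) of the decay law N(t) = N0·(1/2)^(t/T½), using the half-lives quoted above and a rough 4.5-billion-year age for the Earth:

```python
# Minimal sketch of the decay law N(t) = N0 * (1/2)**(t / half_life),
# using the half-lives quoted above (in years).
half_lives = {
    "U-238": 4.47e9,
    "U-235": 7.04e8,
    "U-232": 68.9,
    "C-14": 5730.0,
}

def remaining_fraction(half_life, elapsed_years):
    """Fraction of the original nuclei left after 'elapsed_years'."""
    return 0.5 ** (elapsed_years / half_life)

age_of_earth = 4.5e9  # years (rough figure)
for isotope, t_half in half_lives.items():
    print(f"{isotope}: {remaining_fraction(t_half, age_of_earth):.3g} left after {age_of_earth:.1e} years")
```

Run it and you’ll see that roughly half of the original U-238 is still around, while U-232 and C-14 are – for all practical purposes – completely gone. Which is why the C-14 we do find has to be produced continually in the atmosphere.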

Hmm… Interesting. Radioactive decay suggests time irreversibility. Indeed, it’s wonderful and amazing – but sad at the same time:

  1. There’s so much diversity – a truly incredible range of chemical elements making life what it is.
  2. But all these chemical elements were produced through a process of nuclear fusion in stars (stellar nucleosynthesis), were then blasted into space by supernovae, and then coagulated into planets like ours.
  3. However, all of the heavier atoms will decay back into some lighter element because of radioactive decay, as shown in the graph below.
  4. So we are doomed !

Overview of decay modes

In fact, some of the GUT theorists think that there is no such thing as ‘stable nuclides’ (that’s the black line in the graph above): they claim that all atomic species will decay because – according to their line of reasoning – the proton itself is NOT stable.

WHAT? Yeah ! That’s what Feynman complained about too: he obviously doesn’t like these GUT theorists either. Of course, there is an expensive experiment trying to prove spontaneous proton decay: the so-called Super-K detector under Mount Kamioka in Japan. It’s basically a huge tank of ultra-pure water with a lot of machinery around it… Just google it. It’s fascinating. If, one day, it were able to prove that there’s proton decay, our Standard Model would be in very serious trouble – because it doesn’t cater for unstable protons. That being said, I am happy that has not happened so far – because it would mean our world would really be doomed.

What do I mean by that? We’re all doomed, aren’t we? If only because of the Second Law of Thermodynamics. Huh? Yes. That ‘law’ just expresses a universal principle: all kinetic and potential energy observable in Nature will, in the end, dissipate: differences in temperature, pressure, and chemical potential will even out. Entropy increases. Time is NOT reversible: it points in the direction of increasing entropy – till all is the same once again. Sorry?

Don’t worry about it. When everything is said and done, we humans – or life in general – are an amazing negation of the Second Law of Thermodynamics: temperature, pressure, chemical potential and what have you – it’s all super-organized and super-focused in our body ! But it’s temporary indeed – and we actually don’t negate the Second Law of Thermodynamics: we create order by creating disorder elsewhere. In any case, I don’t want to dwell on this point. Time reversibility in physics usually refers to something else: time reversibility would mean that all basic laws of physics (and with ‘basic’, I am excluding this higher-level Second Law of Thermodynamics) are time-reversible: if we’d put in minus t (–t) instead of t, all formulas would still make sense, wouldn’t they? So we could – theoretically – reverse our clocks and stopwatches and go back in time.

Can we do that?

Well… We can reverse a lot. For example, U-232 decays into a lot of other stuff BUT we can also produce U-232 from scratch once again—from thorium, to be precise. In fact, that’s how we got it in the first place: as mentioned above, any natural U-232 that might have been produced in those stellar nuclear fusion reactors is long gone. But that means alpha decay is, in some sense, reversible: we’re producing reasonably stable stuff – U-232 lasts for dozens of years – that probably existed a long time ago but then decayed, and now we’re reversing the arrow of time using our nuclear science and technology.

Now, you may object that you don’t see Nature spontaneously assemble the nuclear technology we’re using to produce U-232 – except if Nature would go for that Big Crunch some predict, so it can repeat the Big Bang once again (that’s the oscillating Universe scenario) – and you’re obviously right in that assessment. That being said, from some kind of weird existential-philosophical point of view, it’s kind of nice to know that – in theory at least – there is time reversibility indeed (or T symmetry, as it’s called by scientists).

[Voice booming from the sky] STOP DREAMING ! TIME REVERSIBILITY DOESN’T EXIST !

What? That’s right. For beta decay, we don’t have T symmetry. The weak force breaks all kinds of symmetries, and time symmetry is only one of them. I talked about these in my previous post (Loose Ends) – so please have a look at that, and let me just repeat the basics:

  1. Parity (P) symmetry or mirror symmetry revolves around the notion that Nature should not distinguish between right- and left-handedness, so everything that works in our world should also work in the mirror world. Now, the weak force does not respect P symmetry: β– decay produces a right-handed antineutrino, and Nature only ever seems to produce right-handed antineutrinos (and left-handed neutrinos). So, yes, beta decay might be time-reversible as such, but it only works with the ‘correct’ handedness – and that’s exactly the wrong handedness in the ‘mirror world’. Full stop. Our world is different from the mirror world because the weak force knows the difference between left and right: some stuff only works with left-handed stuff (and some other stuff only works with right-handed stuff). In short, the weak force doesn’t work the same in the mirror world: there, β– decay would have to produce left-handed antineutrinos – and Nature doesn’t seem to make those. Not impossible to imagine, but a bit of a problem, you’ll agree.
  2. Charge conjugation or charge (C) symmetry revolves around the notion that a world in which we reverse all (electric) charge signs would work just the same. Now, the weak force does not respect C symmetry either. I’ll let you go through the reasoning for that, but it’s the same really: just reversing all signs would not make the weak force ‘work’ in that world – we’d have to ‘keep’ some of the signs, notably those of our W bosons !
  3. Initially, it was thought that the weak force respected the combined CP symmetry (and, therefore, that the principle of P and C symmetry could be replaced by a combined CP symmetry principle), but two experimenters – Val Fitch and James Cronin – got a Nobel Prize when they proved that this was not the case. To be precise, the spontaneous decay of neutral kaons (a type of decay mediated by the weak force) does not respect CP symmetry. Now, that was the death blow to time reversibility (T symmetry). Why? Can’t we just make a film of those experiments not respecting P, C or CP symmetry, and then just press the ‘reverse’ button? We could, but one can show that relativistic invariance – Einstein’s relativity theory – implies a combined CPT symmetry. Hence, if CP is a broken symmetry, then T symmetry is also broken. So we could play that film, but the laws of physics would not make sense ! In other words, the weak force does not respect T symmetry either ! [A toy bookkeeping sketch of what these C, P and T operations do is shown right below this list.]
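Here’s the toy bookkeeping sketch I promised – in Python, and purely illustrative: I just label a particle by its charge, its handedness and the sign of its time direction, and let C, P and T flip the obvious labels. It proves nothing about the physics, of course – it only shows what the combined CPT operation does to the bookkeeping.

```python
# Toy bookkeeping sketch (not real physics!): a 'particle' is just a tuple of
# labels, and C, P, T flip the obvious ones.
from collections import namedtuple

State = namedtuple("State", ["charge", "handedness", "time_direction"])

def C(s):
    # Charge conjugation: particle <-> antiparticle
    return s._replace(charge=-s.charge)

def P(s):
    # Parity: a mirror reflection flips handedness
    return s._replace(handedness="right" if s.handedness == "left" else "left")

def T(s):
    # Time reversal: flip the arrow of time
    return s._replace(time_direction=-s.time_direction)

electron = State(charge=-1, handedness="left", time_direction=+1)
print(C(P(T(electron))))
# -> State(charge=1, handedness='right', time_direction=-1), i.e. the labels of
#    a right-handed positron 'going backwards in time'
```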

To summarize this rather lengthy philosophical digression: a full CPT sequence of operations would work. So we could – in sequence – (1) change all particles to antiparticles (C), (2) reflect the system in a mirror (P), and (3) change the sign of time (T), and we’d have a ‘working’ anti-world that would be just as real as ours. HOWEVER, we do not live in a mirror world. We live in OUR world – and so left-handed is left-handed, and right-handed is right-handed, and positive is positive and negative is negative, and so THERE IS NO TIME REVERSIBILITY: the weak force does not respect T symmetry.

Do you understand now why I call the weak force the weird force? Penrose devotes a whole chapter to time reversibility in his Road to Reality, but he does not focus on the weak force. I wonder why. All that rambling on the Second Law of Thermodynamics is great – but one should relate that ‘principle’ to the fundamental forces and, most notably, to the weak force.

Post scriptum 1:

In one of my previous posts, I complained about not finding any good image of the Higgs particle. The problem is that these super-duper particle accelerators don’t use bubble chambers anymore: the scales involved have become incredibly small, and so all we have is electronic data, it seems, which is then re-assembled into some kind of digital image – but, when everything is said and done, these images are only simulations. Not the real thing. I guess I am just an old grumpy guy – a 45-year old economist: what do you expect? – but I’ll admit that those black-and-white pictures above make my heart race a bit more than those colorful simulations. But so I found a good simulation. It’s the image at the top of Wikipedia’s Physics beyond the Standard Model article (I should have looked there in the first place, I guess). So here it is: the “simulated Large Hadron Collider CMS particle detector data depicting a Higgs boson (produced by colliding protons) decaying into hadron jets and electrons.”

CMS_Higgs-event (1)

So that’s what gives mass to our massive W bosons. The Higgs particle is a massive particle itself: an estimated 125–126 GeV/c2, so that’s about 1.5 times the mass of the W bosons. I tried to look into decay widths and all that, but it’s all quite confusing. In short, I have no doubt that the Higgs theory is correct – the data is all we have and, when everything is said and done, we have an honorable Nobel Prize Committee thinking the evidence is good enough (which – in light of their rather conservative approach (which I fully subscribe to: don’t get me wrong !) – usually means that it’s more than good enough !) – but I can’t help thinking this is a theory which has been designed to match experiment.

Wikipedia writes the following about the Higgs field:

“The Higgs field consists of four components, two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarization components of the massive W+, W– and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realized as) the massive Higgs boson.”

Hmm… So we assign some extra degrees of freedom to the W and Z bosons (sorry for the jargon: I am talking about these ‘longitudinal third-polarization components’ here), and to those bosons only, and then we find that the Higgs field gives mass to these bosons only? I might be mistaken – I truly hope so (I’ll find out when I am somewhat stronger in quantum-mechanical math) – but, as for now, it all smells somewhat fishy to me. It’s all consistent, yes – and I am even more skeptical about GUT stuff ! – but it does look somewhat artificial.

But then I guess this rather negative appreciation of the mathematical beauty (or lack of it) of the Standard Model is really what is driving all these GUT theories – and so I shouldn’t be so skeptical about them ! 🙂

Oh… And as I’ve inserted some images of collisions already, let me insert some more. The ones below document the discovery of quarks. They come out of the above-mentioned coffee table book of Lederman and Schramm (1989). The accompanying texts speak for themselves.

Quark - 1

Quark - 2

Quark - 3

 

Post scriptum 2:

I checked the source of that third diagram showing how an incoming neutrino could possibly cause a neutron to become a proton. It comes out of the August 2001 issue of Physics Today indeed, and it describes a very particular type of beta decay. This is the original illustration:

inverse beta decay

The article (and the illustration above) describes how solar neutrinos traveling through heavy water – i.e. water in which the hydrogen has been replaced by deuterium – can interact with the deuterium nucleus, which is referred to as the deuteron, and which we’ll represent by the symbol d in the process descriptions below. The nucleus of deuterium – which is an isotope of hydrogen – consists of one proton and one neutron, as opposed to the much more common protium isotope of hydrogen, which has just one proton in its nucleus. Deuterium occurs naturally (0.0156% of all hydrogen atoms in the Earth’s oceans are deuterium), but it can also be produced industrially – for use in heavy-water nuclear reactors, for example. In any case, the point is that the deuteron can respond to solar neutrinos by breaking up in one of two ways:

  1. Quasi-elastically: νe + d → νe + p + n. So, in this case, the deuteron just breaks up into its two components: one proton and one neutron. That happens fairly easily because the binding energy holding the proton and the neutron together in the deuteron is pretty small (about 2.2 MeV).
  2. Alternatively, the solar neutrino can turn the deuteron’s neutron into a second proton, and that’s what’s depicted in the third diagram above: νe + d → e– + p + p. So what really happens is νe + n → e– + p. [A little bookkeeping check of charge and lepton number for both reactions follows right below this list.]
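And here’s the little bookkeeping check I promised: a Python snippet that only verifies that electric charge and lepton number balance in both reactions (it knows nothing about the dynamics, so don’t read more into it than that).

```python
# Sanity check: electric charge and lepton number bookkeeping for the two
# deuteron break-up reactions quoted above. (Pure bookkeeping, no dynamics.)
particles = {
    "nu_e": {"charge": 0, "lepton": +1},
    "e-":   {"charge": -1, "lepton": +1},
    "p":    {"charge": +1, "lepton": 0},
    "n":    {"charge": 0, "lepton": 0},
    "d":    {"charge": +1, "lepton": 0},  # deuteron = one proton + one neutron
}

def totals(names):
    return (sum(particles[x]["charge"] for x in names),
            sum(particles[x]["lepton"] for x in names))

reactions = {
    "quasi-elastic":      (["nu_e", "d"], ["nu_e", "p", "n"]),
    "inverse beta decay": (["nu_e", "d"], ["e-", "p", "p"]),
}
for name, (before, after) in reactions.items():
    print(name, "conserved?", totals(before) == totals(after))
```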

The author of this article – which basically presents how a new neutrino detector, the Sudbury Neutrino Observatory, is supposed to work – refers to the second process as inverse beta decay, but that seems to be a rather generic and imprecise term. The conclusion is that the weak force seems to have myriad ways of expressing itself. However, the connection between neutrinos and the weak force seems to need further exploring. As for myself, I’d like to know why the hypothesis that any form of beta decay – or, for that matter, any other expression of the weak force – is actually triggered by these tiny neutrinos crashing into (other) matter particles would not be reasonable.

In such a scenario, the W bosons would be reduced to a (very) temporary messy ‘blob’ of energy, combining kinetic and electromagnetic energy, as well as the strong binding energy between quarks if protons and neutrons are involved. Could this ‘odd one out’ be nothing but a pseudo-force? I am no doubt being very simplistic here – but it’s an interesting possibility, isn’t it? In order to firmly deny it, I’ll need to learn a lot more about neutrinos, no doubt – and about how the results of all these collisions in particle accelerators are actually analyzed and interpreted.

Loose ends…

It looks like I am getting ready for my next plunge into Roger Penrose’s Road to Reality. I still need to learn more about those Hamiltonian operators and all that, but I can sort of ‘see’ what they are supposed to do now. However, before I venture off on another series of posts on math instead of physics, I thought I’d briefly present what Feynman identified as ‘loose ends’ in his 1985 Lectures on Quantum Electrodynamics – a few years before his untimely death – and then see if any of those ‘loose ends’ appears less loose today, i.e. some thirty years later.

The three-forces model and coupling constants

All three forces in the Standard Model (the electromagnetic force, the weak force and the strong force) are mediated by force carrying particles: bosons. [Let me talk about the Higgs field later and – of course – I leave out the gravitational force, for which we do not have a quantum field theory.]

Indeed, the electromagnetic force is mediated by the photon; the strong force is mediated by gluons; and the weak force is mediated by W and/or Z bosons. The mechanism is more or less the same for all. There is a so-called coupling (or a junction) between a matter particle (i.e. a fermion) and a force-carrying particle (i.e. the boson), and the amplitude for this coupling to happen is given by a number that is related to a so-called coupling constant.

Let’s give an example straight away – and let’s do it for the electromagnetic force, which is the only force we have been talking about so far. The illustration below shows three possible ways for two electrons moving in spacetime to exchange a photon. This involves two couplings: one emission, and one absorption. The amplitude for an emission or an absorption is the same: it’s –j. So the amplitude here will be (–j)(–j) = j². Note that the two electrons repel each other as they exchange a photon, which is how the electromagnetic force between them shows up from a quantum-mechanical point of view !

Photon exchange

We will have a number like this for all three forces. Feynman writes the coupling constant for the electromagnetic force as j and the coupling constant for the strong force (i.e. the amplitude for a gluon to be emitted or absorbed by a quark) as g. [As for the weak force, he is rather short on that and actually doesn’t bother to introduce a symbol for it. I’ll come back to that later.]

The coupling constant is a dimensionless number and one can interpret it as the unit of ‘charge’ for the electromagnetic and the strong force respectively. So the ‘charge’ q of a particle should be read as q times the coupling constant. Of course, we can argue about that unit. The elementary charge for electromagnetism was or is – historically – the charge of the proton (q = +1), but now the proton is no longer elementary: it consists of quarks with charge –1/3 and +2/3 (for the d and u quark respectively), and a proton consists of two u quarks and one d quark, so you can write it as uud. So what’s j then? Feynman doesn’t give its precise value but uses an approximate value of –0.1. It is an amplitude, so it should be interpreted as a complex number to be added or multiplied with other complex numbers representing amplitudes – so –0.1 is “a shrink to about one-tenth, and half a turn.” [In these 1985 Lectures on QED, which he wrote for a lay audience, he calls amplitudes ‘arrows’, to be combined with other ‘arrows.’ In complex notation, –0.1 = 0.1·e^(iπ) = 0.1(cos π + i·sin π).]
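If you want to see that ‘shrink and half a turn’ in action, amplitudes are just complex numbers, so a few lines of Python will do (the –0.1 value is Feynman’s rough approximation, of course, not the precise coupling):

```python
import cmath

# Feynman's 'arrow' for one photon-electron coupling: a rough amplitude of -0.1,
# i.e. a shrink to one-tenth combined with half a turn (a phase of pi).
j = 0.1 * cmath.exp(1j * cmath.pi)   # = -0.1 (up to rounding)
print(j)                             # ~ (-0.1+0j)

# Two couplings (one emission, one absorption) multiply the arrows:
print(j * j)                         # ~ 0.01, i.e. j**2
print(abs(j), cmath.phase(j))        # shrink factor 0.1, phase ~ pi (half a turn)
```

The point of the little exercise: when you combine two couplings, the arrows get multiplied, so the two half-turns cancel out and the result is a small positive number – which is why the j² above is positive.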

Let me give a precise number. The coupling constant for the electromagnetic force is the so-called fine-structure constant, and it’s usually denoted by the alpha symbol (α). There is a remarkably easy formula for α, which becomes even easier if we fiddle with units to simplify the matter even more. Let me paraphrase Wikipedia on α here, because I have no better way of summarizing it (the summary is also nice as it shows how changing units – replacing the SI units by so-called natural units – can simplify equations):

1. There are three equivalent definitions of α in terms of other fundamental physical constants:

\alpha = \frac{k_\mathrm{e} e^2}{\hbar c} = \frac{1}{(4 \pi \varepsilon_0)} \frac{e^2}{\hbar c} = \frac{e^2 c \mu_0}{2 h}
where e is the elementary charge (so that’s the electric charge of the proton); ħ = h/2π is the reduced Planck constant; c is the speed of light (in vacuum); ε0 is the electric constant (i.e. the so-called permittivity of free space); µ0 is the magnetic constant (i.e. the so-called permeability of free space); and ke is the Coulomb constant.

2. In the old centimeter-gram-second variant of the metric system (cgs), the unit of electric charge is chosen such that the Coulomb constant (or the permittivity factor) equals 1. Then the expression of the fine-structure constant just becomes:

\alpha = \frac{e^2}{\hbar c}

3. When using so-called natural units, we equate ε0, c and ħ to 1. [That does not mean they are the same thing: they just become the unit of measurement for whatever is measured in them. :-)] The value of the fine-structure constant then becomes:

 \alpha = \frac{e^2}{4 \pi}.

Of course, then it just becomes a matter of choosing a value for e. Indeed, we still haven’t answered the question as to what we should choose as ‘elementary’: 1 or 1/3? If we take 1, then α is just a bit smaller than 0.08 (around 0.0795775 to be somewhat more precise). If we take 1/3 (the value for a quark), then we get a much smaller value: about 0.008842 (I won’t bother too much about the rest of the decimals here). Feynman’s (very) rough approximation of –0.1 obviously uses the historic proton charge, so e = +1.

The coupling constant for the strong force is much bigger. If we use the SI units (i.e. one of the three formulas for α under point 1 above), then we get an α for the electromagnetic force equal to some 7.297×10–3 – a value that is usually quoted as its reciprocal: 1/α ≈ 137. In this scheme of things, the coupling constant for the strong force is of the order of 1, so that’s roughly 137 times bigger.
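For what it’s worth, you can check the 1/137 figure yourself from the SI values of the constants (the rounded values below are the standard CODATA ones):

```python
from math import pi

# Fine-structure constant from SI constants: alpha = e^2 / (4*pi*eps0*hbar*c)
e    = 1.602176634e-19     # elementary charge (C)
eps0 = 8.8541878128e-12    # electric constant (F/m)
hbar = 1.054571817e-34     # reduced Planck constant (J*s)
c    = 299792458.0         # speed of light (m/s)

alpha = e**2 / (4 * pi * eps0 * hbar * c)
print(alpha)        # ~ 7.297e-3
print(1 / alpha)    # ~ 137.036
```

The 0.0795775 and 0.008842 values quoted above follow in the same way from the natural-units formula α = e²/4π with e = 1 and e = 1/3 respectively.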

Coupling constants, interactions, and Feynman diagrams

So how does it work? The Wikipedia article on coupling constants makes an extremely useful distinction between the kinetic part and the proper interaction part of an ‘interaction’. Indeed, before we just blindly associate degrees of freedom with particles, it’s probably useful to look not only at how photon absorption and/or emission works, but also at how a process as common as photon scattering works (so we’re talking Compton scattering here – discovered in 1923, and it earned Compton a Nobel Prize !).

The illustration below separates the kinetic and interaction part properly: the photon and the electron are both deflected (i.e. the magnitude and/or direction of their momentum (p) changes) – that’s the kinetic part – but, in addition, the frequency of the photon (and, hence, its energy – cf. E = hν) is also affected – so that’s the interaction part I’d say.

Compton scattering
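For the record, the standard formula for the photon’s wavelength shift in Compton scattering is the following (θ is the scattering angle, and h/mec is the so-called Compton wavelength of the electron):

\lambda' - \lambda = \frac{h}{m_\mathrm{e} c}\left(1 - \cos\theta\right)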

With an absorption or an emission, the situation is different, but it also involves frequencies (and, hence, energy levels), as shown below: an electron absorbing a higher-energy photon will jump two or more levels up as it absorbs the energy (moving to a so-called excited state), and when it falls back and re-emits the energy, the energy – and, hence, the frequency – of the emitted photon will match the energy difference between the levels involved.

Absorption-1

This business of frequencies and energy levels may not be so obvious when looking at those Feynman diagrams, but I should add that these Feynman diagrams are not just sketchy drawings: the time and space axes are precisely defined (time and distance are measured in equivalent units), and so the lines do reflect the particles’ direction of travel (photons, electrons, or whatever particle is depicted) and, hence, convey precious information about both the direction as well as the magnitude of their momentum. That being said, a Feynman diagram does not care about a photon’s frequency and, hence, its energy (its velocity will always be c, and it has no mass, so we can’t get any information from its trajectory).

Let’s look at these Feynman diagrams now, and the underlying force model, which I refer to as the boson exchange model.

The boson exchange model

The quantum field model – for all forces – is a boson exchange model. In this model, electrons, for example, are kept in orbit through the continuous exchange of (virtual) photons between the proton and the electron, as shown below.

Electron-proton

Now, I should say a few words about these ‘virtual’ photons. The most important thing is that you should look at them as being ‘real’. They may be derided as being only temporary disturbances of the electromagnetic field but they are very real force carriers in the quantum field theory of electromagnetism. They may carry very low energy as compared to ‘real’ photons, but they do conserve energy and momentum – in quite a strange way obviously: while it is easy to imagine a photon pushing an electron away, it is a bit more difficult to imagine it pulling it closer, which is what it does here. Nevertheless, that’s how forces are being mediated by virtual particles in quantum mechanics: we have matter particles carrying charge but neutral bosons taking care of the exchange between those charges.

In fact, note how Feynman actually cares about the possibility of one of those ‘virtual’ photons briefly disintegrating into an electron-positron pair, which underscores the ‘reality’ of the photons mediating the electromagnetic force between a proton and an electron, thereby keeping them close together. There is probably no better illustration to explain the difference between quantum field theory and the classical view of forces, such as the classical view of gravity: there are no gravitons doing for gravity what photons are doing for electromagnetic attraction (or repulsion).

Pandora’s Box

I cannot resist a small digression here. The ‘Box of Pandora’ to which Feynman refers in the caption of the illustration above is the problem of calculating the coupling constants. Indeed, j is the coupling constant for an ‘ideal’ electron to couple with some kind of ‘ideal’ photon, but how do we calculate that when we know that all possible paths in spacetime have to be considered and that we have all of this ‘virtual’ mess going on? Indeed, in experiments, we can only observe probabilities for real electrons to couple with real photons.

In the ‘Chapter 4’ to which the caption makes a reference, he briefly explains the mathematical procedure, which he invented and for which he got a Nobel Prize. He calls it a ‘shell game’. It’s basically an application of ‘perturbation theory’, which I haven’t studied yet. However, he does so with skepticism about its mathematical consistency – skepticism which I mentioned and explored somewhat in previous posts, so I won’t repeat that here. Here, I’ll just note that the issue of ‘mathematical consistency’ is much more of an issue for the strong force, because the coupling constant is so big.

Indeed, terms with j², j³, j⁴, etcetera (i.e. the terms involved in adding amplitudes for all possible paths and all possible ways in which an event can happen) quickly become very small as the exponent increases, but terms with g², g³, g⁴, etcetera do not become negligibly small. In fact, they don’t become irrelevant at all. Indeed, if we wrote α for the electromagnetic force as 7.297×10–3, then the α for the strong force is one, and so none of these terms becomes vanishingly small. I won’t dwell on this, but just quote Wikipedia’s very succinct appraisal of the situation: “If α is much less than 1 [in a quantum field theory with a dimensionless coupling constant α], then the theory is said to be weakly coupled. In this case it is well described by an expansion in powers of α called perturbation theory. [However] If the coupling constant is of order one or larger, the theory is said to be strongly coupled. An example of the latter [the only example as far as I am aware: we don’t have like a dozen different forces out there !] is the hadronic theory of strong interactions, which is why it is called strong in the first place. [Hadrons is just a difficult word for particles composed of quarks – so don’t worry about it: you understand what is being said here.] In such a case non-perturbative methods have to be used to investigate the theory.”
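To see why the size of the coupling constant matters so much here, a quick sketch suffices: successive powers of the electromagnetic coupling die out almost immediately, while successive powers of an order-one strong coupling don’t die out at all. [The numbers below are just the rough values quoted above.]

```python
# Successive powers of the coupling constants: higher-order terms in the
# perturbation expansion scale (roughly) like these powers.
alpha_em     = 7.297e-3   # electromagnetic coupling (~1/137)
alpha_strong = 1.0        # strong coupling (order one)

for n in range(1, 6):
    print(f"order {n}: EM ~ {alpha_em**n:.2e}, strong ~ {alpha_strong**n:.2e}")
```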

Hmm… If Feynman thought his technique for doing calculations with such a small (‘weak’) coupling constant was fishy, then his skepticism about whether or not physicists actually know what they are doing when calculating stuff with the strong coupling constant is probably justified. But let’s come back to that later. With all that we know here, we’re ready to present a picture of the ‘first-generation world’.

The first-generation world

The first generation is our world, excluding all that goes on in those particle accelerators, where they discovered so-called second- and third-generation matter – but I’ll come back to that. Our world consists of only four matter particles, collectively referred to as (first-generation) fermions: two quarks (a u and a d type), the electron, and the neutrino. This is what is shown below.

first-generation matter

Indeed, u and d quarks make up protons and neutrons (a proton consists of two u quarks and one d quark, and a neutron must be neutral, so it’s two d quarks and one u quark), and then there are electrons circling around them, and so that’s our atoms. And from atoms, we make molecules, and then you know the rest of the story. Genesis !

Oh… But why do we need the neutrino? [Damn – you’re smart ! You see everything, don’t you? :-)] Well… There’s something referred to as beta decay: this allows a neutron to become a proton (and vice versa). Beta decay explains why carbon-14 will spontaneously decay into nitrogen-14. Indeed, carbon-12 is the (very) stable isotope, while carbon-14 has a half-life of 5,730 ± 40 years ‘only’ and, hence, measuring how much carbon-14 is left in some organic substance allows us to date it (that’s what (radio)carbon-dating is about). Now, a beta particle can refer to an electron or a positron, so we can have β– decay (e.g. the above-mentioned carbon-14 decay) or β+ decay (e.g. magnesium-23 into sodium-23). If we have β– decay, an electron will be flying out: a neutron turns into a proton, so a negative charge has to be carried away. If it’s β+ decay, then emitting a positron will do the job. [I forgot to mention that each of the particles above also has an anti-matter counterpart – but don’t think I tried to hide anything else: the fermion picture above is pretty complete.] That being said, Wolfgang Pauli, one of those geniuses who invented quantum theory, noted, in 1930 already, that some momentum and energy was missing, and so he predicted the emission of these mysterious neutrinos as well (see the two reactions written out below). Guess what? These things are very spooky (relatively high-energy neutrinos produced by stars – our Sun in the first place – are going through your and my body, right now and right here, at a rate of some hundred trillion per second) but, because they are so hard to detect, the first actual trace of their existence was found in 1956 only. [Neutrino detection is fairly standard business now, however.] But back to quarks now.
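The two reactions I promised, at the level of the individual nucleons, are usually written as follows (the antineutrino in the first line carries away the ‘missing’ momentum and energy Pauli was worried about; note that β+ decay only happens inside a nucleus – a free proton doesn’t decay):

n \rightarrow p + e^- + \bar{\nu}_e \quad (\beta^- \text{ decay})

p \rightarrow n + e^+ + \nu_e \quad (\beta^+ \text{ decay})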

Quarks are held together by gluons – as you probably know. Quarks come in flavors (u and d), but gluons come in ‘colors’. It’s a bit of a stupid name but the analogy works great. Quarks exchange gluons all of the time and so that’s what ‘glues’ them so strongly together. Indeed, the so-called ‘mass’ that gets converted into energy when a nuclear bomb explodes is not the mass of quarks (their mass is only about 2.4 and 4.8 MeV/c2 respectively). Nuclear power is binding energy that gets converted into heat and radiation and kinetic energy and whatever else a nuclear explosion unleashes. That binding energy is reflected in the difference between the mass of a proton (or a neutron) – around 938 MeV/c2 – and the mass figure you get when you add two u‘s and one d, which is then 9.6 MeV/c2 only. This ratio – a factor of about one hundred – illustrates once again the strength of the strong force: some 99% of the ‘mass’ of a proton or a neutron is due to the strong force.
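Using the (approximate) numbers quoted above, the arithmetic is simple enough:

```python
# Rough arithmetic with the numbers quoted above (all in MeV/c^2):
m_u, m_d  = 2.4, 4.8     # approximate u and d quark masses
m_proton  = 938.0        # proton mass

quark_mass_sum   = 2 * m_u + m_d          # uud -> 9.6 MeV/c^2
binding_fraction = 1 - quark_mass_sum / m_proton
print(quark_mass_sum)                      # 9.6
print(f"{binding_fraction:.0%}")           # ~99% of the proton's mass
```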

But I am digressing too much, and I haven’t even started to talk about the bosons associated with the weak force. Well… I won’t just now. I’ll just move on to the second- and third-generation world.

Second- and third-generation matter

When physicists started to look for those quarks in their particle accelerators, Nature had already confused them by producing lots of other particles in these accelerators: in the 1960s, there were more than four hundred of them. Yes. Too many. But they couldn’t get them back in the box. 🙂

Now, all these ‘other particles’ are unstable but they survive long enough – a muon, for example, disintegrates after 2.2 millionths of a second (on average) – to deserve the ‘particle’ title, as opposed to a ‘resonance’, whose lifetime can be as short as a billionth of a trillionth of a second. And so, yes, the physicists had to explain them too. So the guys who devised the quark-gluon model (the model is usually associated with Murray Gell-Mann but – as usual with great ideas – some others worked hard on it as well) had already included heavier versions of their quarks to explain (some of) these other particles. And so we do not only have heavier quarks, but also a heavier version of the electron (that’s the muon I mentioned) as well as a heavier version of the neutrino (the so-called muon neutrino). The two new ‘flavors’ of quarks were called s and c. [Feynman hates these names but let me give them: u stands for up, d for down, s for strange and c for charm. Why? Well… According to Feynman: “For no reason whatsoever.”]

Traces of the second-generation s and c quarks were found in experiments in 1968 and 1974 respectively (it took six years to boost the particle accelerators sufficiently), and the third-generation b quark (for beauty or bottom – whatever) popped up in Fermilab‘s particle accelerator in 1977. To be fully complete, it then took until 1995 to detect the super-heavy t quark – which stands for truth.  [Of all the quarks, this name is probably the nicest: “If beauty, then truth” – as Lederman and Schramm write in their 1989 history of all of this.]

What’s next? Will there be a fourth or even a fifth generation? Back in 1985, Feynman didn’t exclude it (and actually seemed to expect it), but current assessments are more prosaic. Indeed, Wikipedia writes that, “according to the results of the statistical analysis by researchers from CERN and the Humboldt University of Berlin, the existence of further fermions can be excluded with a probability of 99.99999% (5.3 sigma).” If you want to know why… Well… Read the rest of the Wikipedia article. It’s got to do with the Higgs particle.

So the complete model of reality is the one I already inserted in a previous post and, if you find it complicated, remember that the first generation of matter is the one that matters and that, among the bosons, it’s the photons and gluons that matter most. If you focus on these only, it’s not complicated at all – and surely a huge improvement over those 400+ particles no one understood in the 1960s.

Standard_Model_of_Elementary_Particles

As for the interactions, quarks stick together – and rather firmly so – by interchanging gluons. They thereby ‘change color’ (which is the same as saying there is some exchange of ‘charge’). I copy Feynman’s original illustration hereunder (not because there’s no better illustration: the stuff you can find on Wikipedia has actual colors !) but just because it reflects the other illustrations above (and, perhaps, because I also want to make sure – with this black-and-white thing – that you don’t think there’s something like ‘real’ color inside of a nucleus).

quark gluon exchange

So what are the loose ends then? The problem of ‘mathematical consistency’ associated with the techniques used to calculate (or estimate) these coupling constants – which Feynman identified as a key defect in 1985 – is a form of skepticism about the Standard Model that is not widely shared by others. The real loose ends are more about the other forces. So let’s now talk about these.

The weak force as the weird force: about symmetry breaking

I included the weak force in the title of one of the sub-sections above (“The three-forces model”) and then talked about the other two forces only. The W+, W– and Z bosons – usually referred to, as a group, as the W bosons, or the ‘intermediate vector bosons’ – are an odd bunch. First, note that they are the only ones that not only have a (rest) mass (and not just a little bit: they’re almost 100 times heavier than a proton or neutron – or a hydrogen atom !) but, on top of that, also have electric charge (except for the Z boson). They are really the odd ones out. Feynman does not doubt their existence (a CERN team produced them in 1983, and Carlo Rubbia and Simon van der Meer got a Nobel Prize for it, so little room for doubts here !), but it is obvious he finds the weak force interaction model rather weird.

He’s not the only one: in a wonderful publication designed to make the case for more powerful particle accelerators (probably successful, because the Large Hadron Collider came through – and discovered credible traces of the Higgs field, which is involved in the story that is about to follow), Leon Lederman and David Schramm look at the asymmetry involved in having massive W bosons and massless photons and gluons as just one of the many asymmetries associated with the weak force. Let me develop this point.

We like symmetries. They are aesthetic. But I am talking about something else here: in classical physics, characterized by strict causality and determinism, we can – in theory – reverse the arrow of time. In practice, we can’t – because of entropy – but, in theory, so-called reversible machines are not a problem. However, in quantum mechanics we cannot reverse time for reasons that have nothing to do with thermodynamics. In fact, there are several types of symmetries in physics:

  1. Parity (P) symmetry revolves around the notion that Nature should not distinguish between right- and left-handedness, so everything that works in our world, should also work in the mirror world. Now, the weak force does not respect P symmetry. That was shown by experiments on the decay of pions, muons and radioactive cobalt-60 in 1956 and 1957 already.
  2. Charge conjugation or charge (C) symmetry revolves around the notion that a world in which we reverse all (electric) charge signs (so protons would have minus one as charge, and electrons have plus one) would also just work the same. The same 1957 experiments showed that the weak force does also not respect C symmetry.
  3. Initially, smart theorists noted that the combined operation of CP was respected by these 1957 experiments (hence, the principle of P and C symmetry could be substituted by a combined CP symmetry principle) but, then, in 1964, Val Fitch and James Cronin proved that the spontaneous decay of neutral kaons (don’t worry if you don’t know what particle this is: you can look it up) into pairs of pions did not respect CP symmetry. In other words, it was – again – the weak force not respecting symmetry. [Fitch and Cronin got a Nobel Prize for this, so you can imagine it did mean something !]
  4. We mentioned time reversal (T) symmetry: how is that being broken? In theory, we can imagine a film being made of those events not respecting P, C or CP symmetry and then just pressing the ‘reverse’ button, can’t we? Well… I must admit I do not master the details of what I am going to write now, but let me just quote Lederman (another Nobel Prize physicist) and Schramm (an astrophysicist): “Years before this, [Wolfgang] Pauli [Remember him from his neutrino prediction?] had pointed out that a sequence of operations like CPT could be imagined and studied; that is, in sequence, change all particles to antiparticles, reflect the system in a mirror, and change the sign of time. Pauli’s theorem was that all nature respected the CPT operation and, in fact, that this was closely connected to the relativistic invariance of Einstein’s equations. There is a consensus that CPT invariance cannot be broken – at least not at energy scales below 10^19 GeV [i.e. the Planck scale]. However, if CPT is a valid symmetry, then, when Fitch and Cronin showed that CP is a broken symmetry, they also showed that T symmetry must be similarly broken.” (Lederman and Schramm, 1989, From Quarks to the Cosmos, p. 122-123)

So the weak force doesn’t care about symmetries. Not at all. That being said, there is an obvious difference between the asymmetries mentioned above and the asymmetry involved in W bosons having mass while the other bosons have none. That’s true. Especially because we now have that Higgs field to explain why W bosons have mass – and not only W bosons but also the matter particles (i.e. the three generations of leptons and quarks discussed above). The diagram below shows what interacts with what.

2000px-Elementary_particle_interactions.svg

But so the Higgs field does not interact with photons and gluons. Why? Well… I am not sure. Let me copy the Wikipedia explanation: “The Higgs field consists of four components, two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarization components of the massive W+, W– and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realized as) the massive Higgs boson.”

Huh? […] This ‘answer’ probably doesn’t answer your question. What I understand from the explanation above is that the Higgs field only interacts with W bosons because its (theoretical) structure is such that it only interacts with W bosons. Now, you’ll remember Feynman’s oft-quoted criticism of string theory: “I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say, ‘Well, it might be true.’” Is the Higgs theory such a cooked-up explanation? No. That kind of criticism would not apply here, in light of the fact that – some 50 years after the theory – there is (some) experimental confirmation at least !

But you’ll admit it does all look ‘somewhat ugly.’ However, while that’s a ‘loose end’ of the Standard Model, it’s not a fundamental defect as such. The argument is more about aesthetics, but then different people have different views on aesthetics – especially when it comes to mathematical attractiveness or unattractiveness.

So… No real loose end here I’d say.

Gravity

The other ‘loose end’ that Feynman mentions in his 1985 summary is obviously still very relevant today (much more than his worries about the weak force I’d say). It is the lack of a quantum theory of gravity. There is none. Of course, the obvious question is: why would we need one? We’ve got Einstein’s theory, don’t we? What’s wrong with it?

The short answer to the last question is: nothing’s wrong with it – on the contrary ! It’s just that it is – well… – classical physics. No uncertainty. As such, the formalism of quantum field theory cannot be applied to gravity. That’s it. What’s Feynman’s take on this? [Sorry I refer to him all the time, but I made it clear in the introduction of this post that I would be discussing ‘his’ loose ends indeed.] Well… He makes two points – a practical one and a theoretical one:

1. “Because the gravitation force is so much weaker than any of the other interactions, it is impossible at the present time to make any experiment that is sufficiently delicate to measure any effect that requires the precision of a quantum theory to explain it.”

Feynman is surely right about gravity being ‘so much weaker’. Indeed, you should note that, at a scale of 10–13 cm (that’s the femtometer scale – i.e. 10–15 m – which is indeed the relevant scale at the sub-atomic level), the coupling constants compare as follows: if the coupling constant of the strong force is 1, the coupling constant of the electromagnetic force is approximately 1/137, so that’s a factor of about 10–2. The strength of the weak force, as measured by its coupling constant, is smaller by a factor of about 10–13 (so that’s 1/10,000,000,000,000). Incredibly small, but we do have a quantum field theory for the weak force ! However, the coupling constant for the gravitational force involves a factor of 10–38. Let’s face it: that is unimaginably small.
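Putting the rough numbers from the paragraph above side by side (order-of-magnitude comparisons only, at that femtometer scale, with the strong force normalized to 1):

```python
# Order-of-magnitude comparison of the coupling strengths quoted above
# (strong force normalized to 1, at the femtometer scale).
relative_strength = {
    "strong":          1.0,
    "electromagnetic": 1.0 / 137,   # ~ 10**-2
    "weak":            1e-13,
    "gravity":         1e-38,
}
for force, strength in relative_strength.items():
    print(f"{force:>15}: {strength:.0e}")
```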

However, Feynman wrote this in 1985 (i.e. thirty years ago) and scientists wouldn’t be scientists if they would not at least try to set up some kind of experiment. So there it is: LIGO. Let me quote Wikipedia on it:

LIGO, which stands for the Laser Interferometer Gravitational-Wave Observatory, is a large-scale physics experiment aiming to directly detect gravitational waves. […] At the cost of $365 million (in 2002 USD), it is the largest and most ambitious project ever funded by the NSF. Observations at LIGO began in 2002 and ended in 2010; no unambiguous detections of gravitational waves have been reported. The original detectors were disassembled and are currently being replaced by improved versions known as “Advanced LIGO”.

So, let’s see what comes out of that. I won’t put my money on it just yet. 🙂 Let’s go to the theoretical problem now.

2. “Even though there is no way to test them, there are, nevertheless, quantum theories of gravity that involve ‘gravitons’ (which would appear under a new category of polarizations, called spin “2”) and other fundamental particles (some with spin 3/2). The best of these theories is not able to include the particles that we do find, and invents a lot of particles that we don’t find. [In addition] The quantum theories of gravity also have infinities in the terms with couplings [Feynman does not refer to a coupling constant but to a factor n appearing in the so-called propagator for an electron – don’t worry about it: just note it’s a problem with one of those constants actually being larger than one !], but the “dippy process” that is successful in getting rid of the infinities in quantum electrodynamics doesn’t get rid of them in gravitation. So not only have we no experiments with which to check a quantum theory of gravitation, we also have no reasonable theory.”

Phew ! After reading that, you wouldn’t apply for a job at that LIGO facility, would you? That being said, the fact that there is a LIGO experiment would seem to undermine Feynman’s practical argument. But then is his theoretical criticism still relevant today? I am not an expert, but it would seem to be the case according to Wikipedia’s update on it:

“Although a quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, difficulties arise when one attempts to apply the usual prescriptions of quantum field theory. From a technical point of view, the problem is that the theory one gets in this way is not renormalizable and therefore cannot be used to make meaningful physical predictions. As a result, theorists have taken up more radical approaches to the problem of quantum gravity, the most popular approaches being string theory and loop quantum gravity.”

Hmm… String theory and loop quantum gravity? That’s the stuff that Penrose is exploring. However, I’d suspect that for these (string theory and loop quantum gravity), Feynman’s criticism probably still rings true – to some extent at least: 

“I don’t like that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation–a fix-up to say, “Well, it might be true.” For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s all possible mathematically, but why not seven? When they write their equation, the equation should decide how many of these things get wrapped up, not the desire to agree with experiment. In other words, there’s no reason whatsoever in superstring theory that it isn’t eight out of the ten dimensions that get wrapped up and that the result is only two dimensions, which would be completely in disagreement with experience. So the fact that it might disagree with experience is very tenuous, it doesn’t produce anything; it has to be excused most of the time. It doesn’t look right.”

What to say by way of conclusion? Not sure. I think my personal “research agenda” is reasonably simple: I just want to try to understand all of the above somewhat better and then, perhaps, I might be able to understand some of what Roger Penrose is writing. 🙂