Rutherford’s idea of an electron

Pre-scriptum (dated 27 June 2020): Two illustrations in this post were deleted by the dark force. We will not substitute them. The reference is given and it will help you to look them up yourself. In fact, we think it will greatly advance your understanding if you do so. Mr. Gottlieb may actually have done us a favor by trying to pester us.

Electrons, atoms, elementary particles and wave equations

The New Zealander Ernest Rutherford came to be known as the father of nuclear physics. He was the first to provide a reliable estimate of the order of magnitude of the size of the nucleus. To be precise, in the 1921 paper which we will discuss here, he came up with an estimate of about 15 fm for massive nuclei, which is the current estimate for the size of a uranium nucleus. His experiments also helped to significantly enhance the Bohr model of an atom, culminating – just before WW I started – in the Bohr-Rutherford model of an atom (E. Rutherford, Phil. Mag. 27, 488).

The Bohr-Rutherford model of an atom explained the (gross structure of the) hydrogen spectrum perfectly well, but it could not explain its finer structure—read: the orbital sub-shells which, as we all know now (but not very well then), result from the different states of angular momentum of an electron and the associated magnetic moment.

The issue is probably best illustrated by the two diagrams below, which I copied from Feynman’s Lectures. As you can see, the idea of subshells is not very relevant when looking at the gross structure of the hydrogen spectrum because the energy levels of all subshells are (very nearly) the same. However, the Bohr model of an atom—which is nothing but an exceedingly simple application of the E = h·f equation (see p. 4-6 of my paper on classical quantum physics)—cannot explain the splitting of lines for a lithium atom, which is shown in the diagram on the right. Nor can it explain the splitting of spectral lines when we apply a stronger or weaker magnetic field while exciting the atoms so as to induce emission of electromagnetic radiation.

Schrödinger’s wave equation solves that problem—which is why Feynman and other modern physicists claim this equation is “the most dramatic success in the history of the quantum mechanics” or, more modestly, a “key result in quantum mechanics” at least!

Such dramatic statements are exaggerated. First, an even finer analysis of the emission spectrum (of hydrogen or whatever other atom) reveals that Schrödinger’s wave equation is also incomplete: the hyperfine splitting, the Zeeman splitting (anomalous or not) or the (in)famous Lamb shift are to be explained not only in terms of the magnetic moment of the electron but also in terms of the magnetic moment of the nucleus and its constituents (protons and neutrons)—or of the coupling between those magnetic moments (we may refer to our paper on the Lamb shift here). This cannot be captured in a wave equation: second-order differential equations are – quite simply – not sophisticated enough to capture the complexity of the atomic system here.

Also, as we pointed out previously, the current convention in regard to the use of the imaginary unit (i) in the wavefunction does not capture the spin direction and, therefore, makes abstraction of the direction of the magnetic moment too! The wavefunction therefore models theoretical spin-zero particles, which do not exist. In short, we cannot hope to represent anything real with wave equations and wavefunctions.

More importantly, I would dare to ask this: what use is an ‘explanation’ in terms of a wave equation if we cannot explain what the wave equation actually represents? As Feynman famously writes: “Where did we get it from? Nowhere. It’s not possible to derive it from anything you know. It came out of the mind of Schrödinger, invented in his struggle to find an understanding of the experimental observations of the real world.” Our best guess is that it, somehow, models (the local diffusion of) energy or mass densities as well as non-spherical orbital geometries. We explored such interpretations in our very first paper(s) on quantum mechanics, but the truth is this: we do not think wave equations are suitable mathematical tools to describe simple or complex systems that have some internal structure—atoms (think of Schrödinger’s wave equation here), electrons (think of Dirac’s wave equation), or protons (which is what some others tried to do, but I will let you do some googling here yourself).

We need to get back to the matter at hand here, which is Rutherford’s idea of an electron back in 1921. What can we say about it?

Rutherford’s contributions to the 1921 Solvay Conference

From what you know, and from what I write above, you will understand that Rutherford’s research focus was not on electrons: his prime interest was in explaining the atomic structure and in solving the mysteries of nuclear radiation—most notably the emission of alpha- and beta-particles as well as highly energetic gamma-rays by unstable or radioactive nuclei. In short, the nature of the electron was not his prime interest. However, this intellectual giant was, of course, very much interested in any experiment or theory that might contribute to his thinking, and that explains why, in his contribution to the 1921 Solvay Conference—which materialized as an update of his seminal 1914 paper on The Structure of the Atom—he devotes considerable attention to Arthur Compton’s work on the scattering of light from electrons which, at the time (1921), had not even been published yet (Compton’s seminal article on Compton scattering was only published in 1923).

It is also very interesting that, in the very same 1921 paper—whose 30 pages make it several times longer than his 1914 article and the later revisions of it (see, for example, the 1920 version, which actually has wider circulation on the Internet)—Rutherford also offers some short reflections on the magnetic properties of electrons while referring to Parson’s ring current model which, in French, he refers to as “l’électron annulaire de Parson.” Again, it is rather odd that we should have to translate Rutherford’s 1921 remarks back into English—as we are sure the original paper must have been translated from English into French rather than the other way around.

However, it is what it is, and so here we do what we have to do: we give you a free translation of Rutherford’s remarks during the 1921 Solvay Conference on the state of research regarding the electron at that time. The reader should note these remarks are buried in a larger piece on the emission of β particles by radioactive nuclei which, as it turns out, are nothing but high-energy electrons (or their anti-matter counterparts—positrons). In fact, we should—before we proceed—draw attention to the fact that the physicists at the time had no clear notion of the concepts of protons and neutrons.

This is, indeed, another remarkable historical contribution of the 1921 Solvay Conference because, as far as I know, this is the first time Rutherford talks about the neutron hypothesis. It is quite remarkable that he does not advance the neutron hypothesis to explain the atomic mass of atoms combining what we now think of as protons and neutrons (Rutherford regularly talks of a mix of ‘positive and negative electrons’ in the nucleus—neither the term proton nor the term neutron was in use at the time) but as part of a possible explanation of nuclear fusion reactions in stars or stellar nebulae. It is, indeed, his response to a question from the French physicist Jean Baptiste Perrin during the discussion of his paper on the possibility of nuclear synthesis in stars or nebulae. Perrin—independently from the American chemist William Draper Harkins—had proposed the possibility of hydrogen fusion just a couple of years before (1919):

“We can, in fact, think of enormous energies being released from hydrogen nuclei merging to form helium—much larger energies than what can come from the Kelvin-Helmholtz mechanism. I have been thinking that the hydrogen in the nebulae might come from particles which we may refer to as ‘neutrons’: these would consist of a positive nucleus with an electron at an exceedingly small distance (“un noyau positif avec un électron à toute petite distance”). These would mediate the assembly of the nuclei of more massive elements. It is, otherwise, difficult to understand how the positively charged particles could come together against the repulsive force that pushes them apart—unless we would envisage they are driven by enormous velocities.”

We may add that, just to make sure they got this right, Rutherford was immediately requested to elaborate his point by the Danish physicist Martin Knudsen: “What’s the difference between a hydrogen atom and this neutron?”—which Rutherford simply answered as follows: “In a neutron, the electron would be very much closer to the nucleus.” In light of the fact that it was only in 1932 that James Chadwick would experimentally prove the existence of the neutron, we are, once again, deeply impressed by the foresight of Rutherford and the other pioneers here: the predictive power of their theories and ideas is, effectively, truly amazing by any standard—including today’s. I should, perhaps, also add that I fully subscribe to Rutherford’s intuition that a neutron should be a composite particle consisting of a proton and an electron—but that’s a different discussion altogether.

We must come back to the topic of this post, which we will do now. Before we proceed, however, we should highlight one other contextual piece of information here: at the time, very little was known about the nature of α and β particles. We now know that beta-particles are electrons, and that alpha-particles combine two protons and two neutrons. That was not known in the 1920s, however: Rutherford and his associates could basically only see positive or negative particles coming out of these radioactive processes. This further underscores how much knowledge they were able to gain from rather limited sets of data.

Rutherford’s idea of an electron in 1921

So here is the translation of some crucial text. Needless to say, the italics, boldface and additions between [brackets] are not Rutherford’s but mine, of course.

“We may think the same laws should apply in regard to the scattering [“diffusion”] of α and β particles. [Note: Rutherford noted, earlier in his paper, that, based on the scattering patterns and other evidence, the force around the nucleus must respect the inverse square law, even very close to the nucleus.] However, we see marked differences. Anyone who has carefully studied the trajectories [photographs from the Wilson cloud chamber] of beta-particles will note the trajectories show a regular curvature. Such curved trajectories are even more obvious when they are illuminated by X-rays. Indeed, A.H. Compton noted that these trajectories seem to end in a converging helical path turning right or left. To explain this, Compton assumes the electron acts like a magnetic dipole whose axis is more or less fixed, and that the curvature of its path is caused by the magnetic field [from the (paramagnetic) materials that are used].

Further examination would be needed to make sure this curvature is not some coincidence, but the general impression is that the hypothesis may be quite right. We also see similar curvature and helicity with α particles in the last millimeters of their trajectories. [Note: α-particles are, obviously, also charged particles but we think Rutherford’s remark in regard to α particles also following a curved or helical path must be exaggerated: the order of magnitude of the magnetic moment of protons and neutrons is much smaller and, in any case, they tend to cancel each other out. Also, because of the rather enormous mass of α particles (read: helium nuclei) as compared to electrons, the effect would probably not be visible in a Wilson cloud chamber.]

The idea that an electron has magnetic properties is still sketchy and we would need new and more conclusive experiments before accepting it as a scientific fact. However, it would surely be natural to assume its magnetic properties would result from a rotation of the electron. Parson’s ring electron model [“électron annulaire“] was specifically imagined to incorporate such magnetic polarity [“polarité magnétique“].

A very interesting question here would be to wonder whether such rotation would be some intrinsic property of the electron or if it would just result from the rotation of the electron in its atomic orbital around the nucleus. Indeed, James Jeans usefully reminded me that any asymmetry in an electron should result in it rotating around its own axis at the same frequency as its orbital rotation. [Note: The reader can easily imagine this: think of an asymmetric object going around in a circle and returning to its original position. In order to return to the same orientation, it must rotate around its own axis one time too!]

We should also wonder if an electron might acquire some rotational motion from being accelerated in an electric field and if such rotation, once acquired, would persist when decelerating in an(other) electric field or when passing through matter. If so, some of the properties of electrons would, to some extent, depend on their past.”

Each and every sentence in these very brief remarks is wonderfully consistent with modern-day modelling of electron behavior—non-mainstream modeling, we should add, but that qualifier is superfluous because mainstream physicists stubbornly continue to pretend electrons have no internal structure, nor any physical dimension. In light of the numerous experimental measurements of the effective charge radius as well as of the dimensions of the physical space in which photons effectively interfere with electrons, such mainstream assumptions seem completely ridiculous. However, such is the sad state of physics today.

Thinking backward and forward

We think that it is pretty obvious that Rutherford and others would have been able to adapt their model of an atom to better incorporate the magnetic properties not only of electrons but also of the nucleus and its constituents (protons and neutrons). Unfortunately, scientists at the time seem to have been swept away by the charisma of Bohr, Heisenberg and others, as well as by the mathematical brilliance of the likes of Sommerfeld, Dirac, and Pauli.

The road that was taken then has not led us very far. We concur with Oliver Consa’s scathing but essentially correct appraisal of the current sorry state of physics:

“QED should be the quantized version of Maxwell’s laws, but it is not that at all. QED is a simple addition to quantum mechanics that attempts to justify two experimental discrepancies in the Dirac equation: the Lamb shift and the anomalous magnetic moment of the electron. The reality is that QED is a bunch of fudge factors, numerology, ignored infinities, hocus-pocus, manipulated calculations, illegitimate mathematics, incomprehensible theories, hidden data, biased experiments, miscalculations, suspicious coincidences, lies, arbitrary substitutions of infinite values and budgets of 600 million dollars to continue the game. Maybe it is time to consider alternative proposals. Winter is coming.”

I would suggest we just go back to where we went wrong: it may be warmer there, and thinking backward as well as forward must, in any case, be a much more powerful problem-solving technique than relying only on expert guesses as to what linear differential equation(s) might give us some S-matrix linking all likely or possible initial and final states of some system or process. 🙂

Post scriptum: The sad state of physics is, of course, not limited to quantum electrodynamics only. We were briefly in touch with the PRad experimenters who put an end to the rather ridiculous ‘proton radius puzzle’ by re-confirming the previously established 0.83-0.84 range for the effective charge radius of a proton: we sent them our own classical back-of-the-envelope calculation of the Compton scattering radius of a proton based on the ring current model (see p. 15-16 of our paper on classical physics), which is in agreement with these measurements and courteously asked what alternative theories they were suggesting. Their spokesman replied equally courteously:

“There is no any theoretical prediction in QCD. Lattice [theorists] are trying to come up [with something] but that will take another decade before any reasonable  number [may come] from them.”

This e-mail exchange goes back to early February 2020. There has been no news since. One wonders if there is actually any real interest in solving puzzles. The physicist who wrote the above may have been nominated for a Nobel Prize in Physics—I surely hope so because, in contrast to some others, he and his team surely deserve one—but I think it is rather incongruous to finally firmly establish the size of a proton while, at the same time, admitting that protons should not have any size at all according to mainstream theory—and we are talking about the respected QCD sector of the equally respected Standard Model here!

We understand, of course! As Freddy Mercury famously sang: The Show Must Go On.

A realist interpretation of quantum physics

Feyerabend was a rather famous philosopher. He was of the opinion that ‘anything goes’. We disagree. Let me know your views on my latest paper. 🙂 Also check out this one: https://www.academia.edu/40226046/Neutrinos_as_the_photons_of_the_strong_force.

An introduction to virtual particles

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. In addition, I note the dark force has amused himself by removing some material. So no use to read this. Read my recent papers instead. 🙂

Original post:

We are going to venture beyond quantum mechanics as it is usually understood – covering electromagnetic interactions only. Indeed, all of my posts so far – a bit less than 200, I think 🙂 – were centered around electromagnetic interactions, with the model of the hydrogen atom as our most precious gem, so to speak.

In this post, we’ll be talking the strong force – perhaps not for the first time but surely for the first time at this level of detail. It’s an entirely different world – as I mentioned in one of my very first posts in this blog. Let me quote what I wrote there:

“The math describing the ‘reality’ of electrons and photons (i.e. quantum mechanics and quantum electrodynamics), as complicated as it is, becomes even more complicated – and, important to note, also much less accurate – when it is used to try to describe the behavior of quarks. Quantum chromodynamics (QCD) is a different world. […] Of course, that should not surprise us, because we’re talking very different orders of magnitude here: femtometers (10–15 m), in the case of electrons, as opposed to attometers (10–18 m) or even zeptometers (10–21 m) when we’re talking quarks.”

In fact, the femtometer scale is used to measure the radius of protons as well as electrons and, hence, is much smaller than the atomic scale, which is measured in nanometers (1 nm = 10−9 m). The so-called Bohr radius, for example, which is a measure of the size of an atom, is measured in nanometers indeed, so that’s a scale that is a million times larger than the femtometer scale. This gap in scale effectively separates entirely different worlds—probably as large a gap as the one between our macroscopic world and the strange reality of quantum mechanics. What happens at the femtometer scale, really?

The honest answer is: we don’t know, but we do have models to describe what happens. Moreover, for want of better models, physicists sort of believe these models are credible. To be precise, we assume there’s a force down there which we refer to as the strong force. In addition, there’s also a weak force. Now, you probably know these forces are modeled as interactions involving an exchange of virtual particles. This may be related to what Aitchison and Hey refer to as the physicist’s “distaste for action-at-a-distance.” To put it simply: if one particle – through some force – influences some other particle, then something must be going on between the two of them.

Of course, now you’ll say that something is effectively going on: there’s the electromagnetic field, right? Yes. But what’s the field? You’ll say: waves. But then you know electromagnetic waves also have a particle aspect. So we’re stuck with this weird theoretical framework: the conceptual distinction between particles and forces, or between particle and field, is not so clear. So that’s what the more advanced theories we’ll be looking at – like quantum field theory – try to bring together.

Note that we’ve been using a lot of confusing and/or ambiguous terms here: according to at least one leading physicist, for example, virtual particles should not be thought of as particles! But we’re putting the cart before the horse here. Let’s go step by step. To better understand the ‘mechanics’ of how the strong and weak interactions are being modeled in physics, most textbooks – including Aitchison and Hey, which we’ll follow here – start by explaining the original ideas as developed by the Japanese physicist Hideki Yukawa, who received a Nobel Prize for his work in 1949.

So what is it all about? As said, the ideas – or the model as such, so to speak – are more important than Yukawa’s original application, which was to model the force between a proton and a neutron. Indeed, we now explain such force as a force between quarks, and the force carrier is the gluon, which carries the so-called color charge. To be precise, the force between protons and neutrons – i.e. the so-called nuclear force – is now considered to be a rather minor residual force: it’s just what’s left of the actual strong force that binds quarks together. The Wikipedia article on the nuclear force has some good text and a really nice animation on this. But… Well… Again, note that we are only interested in the model right now. So what does that look like?

First, we’ve got the equivalent of the electric charge: the nucleon is supposed to have some ‘strong’ charge, which we’ll write as gs. Now you know the formulas for the potential energy – because of the gravitational force – between two masses, or the potential energy between two charges – because of the electrostatic force. Let me jot them down once again:

  1. U(r) = –G·M·m/r
  2. U(r) = (1/4πε0)·q1·q2/r

The two formulas are exactly the same. They both assume U = 0 for r → ∞. Therefore, U(r) is always negative. [Just think of q1 and q2 as opposite charges, so the minus sign is not explicit – but it is also there!] We know what the U(r) curve looks like: some work (force times distance) is needed to move the two charges some distance away from each other – from point 1 to point 2, for example. [The distance r is x here – but you got that, right?]
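As a quick numerical illustration of these two potential energy formulas, here is a small Python sketch. The choice of example (pulling an electron away from a proton, starting from the Bohr radius) is mine, not part of the argument above:

```python
G = 6.674e-11         # gravitational constant, m^3/(kg*s^2)
k = 8.9875517873e9    # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
e = 1.602176634e-19   # elementary charge, C

def U_grav(M, m, r):
    return -G * M * m / r     # formula 1: zero at infinity, negative for finite r

def U_coulomb(q1, q2, r):
    return k * q1 * q2 / r    # formula 2: negative for opposite charges

# Work needed to pull an electron (charge -e) away from a proton (charge +e),
# from the Bohr radius to twice the Bohr radius:
r1 = 0.529e-10            # Bohr radius, m
r2 = 2 * r1
W = U_coulomb(e, -e, r2) - U_coulomb(e, -e, r1)
print(W / e)  # in eV: about 13.6, i.e. the hydrogen ionization scale
```

The work comes out positive, as it should: we move against the attractive force.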

Now, physics textbooks – or other articles you might find, like on Wikipedia – will sometimes mention that the strong force is non-linear, but that’s very confusing because… Well… The electromagnetic force – or the gravitational force – isn’t linear either: their strength is inversely proportional to the square of the distance and – as you can see from the formulas for the potential energy – that 1/r factor isn’t linear either. So that isn’t very helpful. In order to further the discussion, I should now write down Yukawa’s hypothetical formula for the potential energy between a neutron and a proton, which we’ll refer to, logically, as the n-p potential:

U(r) = –(gs2/4π)·e–r/a/r

The −gs2 factor is, obviously, the equivalent of the q1·q2 product: think of the proton and the neutron having equal but opposite ‘strong’ charges. The 1/4π factor reminds us of the Coulomb constant: k = 1/4πε0. Note this constant ensures the physical dimensions of both sides of the equation make sense: the dimension of k is N·m2/C2, so U(r) is – as we’d expect – expressed in newton·meter, or joule. We’ll leave the question of the units for gs open – for the time being, that is. [As for the 1/4π factor, I am not sure why Yukawa put it there. My best guess is that he wanted to remind us some constant should be there to ensure the units come out alright.]

So, when everything is said and done, the big new thing is the e–r/a/r factor, which replaces the usual 1/r dependence on distance. Needless to say, e is Euler’s number here – not the electric charge. A plot of the function shows what the e–r/a factor does to the classical 1/r function for a = 1 and a = 0.1 respectively: smaller values for a make the curve approach zero more rapidly. For a = 1, for example, e–r/a/r is equal to 0.368 for r = 1, and remains significant for values of r greater than 1 too. In contrast, for a = 0.1, e–r/a/r is already down to about 0.046 for r = 0.4 (e–4/0.4), and it rapidly goes to zero for all values beyond that.

Aitchison and Hey call a, therefore, a range parameter: it effectively defines the range in which the n-p potential has a significant value: outside of the range, its value is, for all practical purposes, (close to) zero. Experimentally, this range was established as being more or less equal to a ≈ 2 fm. Needless to say, while this range factor may do its job, it’s obvious Yukawa’s formula for the n-p potential comes across as being somewhat random: what’s the theory behind it? There’s none, really. It makes one think of the logistic function: the logistic function fits many statistical patterns, but it is (usually) not obvious why.
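The effect of the range parameter is easy to see numerically. The sketch below (my own illustration, in Python, with the gs2/4π constant set aside) compares Yukawa’s e–r/a/r with the classical 1/r; their ratio is just e–r/a, so the reach of the potential scales directly with a:

```python
import numpy as np

r = np.linspace(0.05, 5.0, 1000)   # distance, in the same units as a

def yukawa_over_coulomb(r, a):
    """Ratio of e^(-r/a)/r to 1/r, i.e. just the suppression factor e^(-r/a)."""
    return np.exp(-r / a)

for a in (1.0, 0.1):
    ratio = yukawa_over_coulomb(r, a)
    # Distance beyond which the potential drops below 1% of the Coulomb form:
    reach = r[ratio > 0.01].max()
    print(f"a = {a}: factor at r = 1 is {np.exp(-1 / a):.3g}, 1% reach is r = {reach:.2f}")
```

For a = 1, the factor at r = 1 is e–1 ≈ 0.368, as noted above, and the 1% reach works out to about 4.6·a in both cases.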

Next in Yukawa’s argument is the establishment of an equivalent, for the nuclear force, of the Poisson equation in electrostatics: using the E = –∇Φ formula, we can re-write Maxwell’s ∇•E = ρ/ε0 equation (aka Gauss’ Law) as ∇•E = –∇•∇Φ = –∇2Φ ⇔ ∇2Φ = –ρ/ε0 indeed. The divergence operator – the ∇• operator – gives us the volume density of the flux of E out of an infinitesimal volume around a given point. [You may want to check one of my posts on this. The formula becomes somewhat more obvious if we re-write it as ∇•E·dV = –(ρ·dV)/ε0: ∇•E·dV is then, quite simply, the flux of E out of the infinitesimally small volume dV, and the right-hand side of the equation says this is given by the product of the charge inside (ρ·dV) and 1/ε0, which accounts for the permittivity of the medium (which is the vacuum in this case).] Of course, you will also remember the ∇Φ notation: ∇Φ is just the gradient (or vector derivative) of the (scalar) potential Φ, i.e. the electric (or electrostatic) potential in a space around that infinitesimally small volume with charge density ρ. So… Well… The Poisson equation is probably not as obvious as it seems at first (again, check my post on it for more detail) and, yes, that ∇• operator – the divergence operator – is a pretty impressive mathematical beast. However, I must assume you master this topic and move on. So… Well… I must now give you the equivalent of Poisson’s equation for the nuclear force. It’s written like this:

(∇2 − 1/a2)U(r) = gs2·δ(r)

What the heck? Relax. To derive this equation, we’d need to take a pretty complicated détour, which we won’t do. [See Appendix G of Aitchison and Hey if you’d want the details.] Let me just point out the basics:

1. The Laplace operator (∇2) is replaced by one that’s nearly the same: ∇2 − 1/a2. And it operates on the same concept: a potential, which is a (scalar) function of the position r. Hence, U(r) is just the equivalent of Φ.

2. The right-hand side of the equation involves Dirac’s delta function. Now that’s a weird mathematical beast. Its definition seems to defy what I refer to as the ‘continuum assumption’ in math. I wrote a few things about it in one of my posts on Schrödinger’s equation – and I could give you its formula – but that won’t help you very much. It’s just a weird thing. As Aitchison and Hey write, you should just think of the whole expression as a finite range analogue of Poisson’s equation in electrostatics. So it’s only for extremely small r that the whole equation makes sense. Outside of the range defined by our range parameter a, the whole equation just reduces to 0 = 0 – for all practical purposes, at least.
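One way to convince yourself that the equation indeed reduces to 0 = 0 away from the origin is to check, numerically, that the Yukawa-shaped potential satisfies (∇2 − 1/a2)U = 0 everywhere except at r = 0, where the delta-function source lives. A small sketch (the choice a = 1 and the overall constant set to one are mine):

```python
import numpy as np

a = 1.0                            # range parameter (arbitrary units)
r = np.linspace(0.5, 5.0, 2001)    # grid that stays away from the origin
h = r[1] - r[0]

U = -np.exp(-r / a) / r            # Yukawa form, overall constant set to one

# Radial Laplacian of a spherically symmetric function: (1/r) * d^2(r*U)/dr^2
lap = np.gradient(np.gradient(r * U, h), h) / r

# (del^2 - 1/a^2) U should vanish away from the origin:
residual = lap - U / a**2
print(np.max(np.abs(residual[2:-2])))  # tiny: finite-difference error only
```

The residual is of the order of the finite-difference error, which is what the equation predicts: the source term only matters at r = 0.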

Now, of course, you know that the neutron and the proton are not supposed to just sit there. They’re also engaged in some sort of intricate dance which – for the electron case – is described by some wavefunction, which we derive as a solution from Schrödinger’s equation. So U(r) is going to vary not only in space but also in time and we should, therefore, write it as U(r, t). Now, we will, of course, assume it’s going to vary in space and time as some wave and we may, therefore, suggest some wave equation for it. To appreciate this point, you should review some of the posts I did on waves. More in particular, you may want to review the post I did on traveling fields, in which I showed you the following: if we see an equation like:

∂2ψ/∂t2 = c2·∂2ψ/∂x2

then the function ψ(x, t) must have the following general functional form:

ψ(x, t) = f(x − c·t) + g(x + c·t)

Any function ψ like that will work – so it will be a solution to the differential equation – and we’ll refer to it as a wavefunction. Now, the equation (and the function) is for a wave traveling in one dimension only (x) but the same post shows we can easily generalize to waves traveling in three dimensions. In addition, we may generalize the analysis to include complex-valued functions as well. Now, you may still be shocked by Yukawa’s field equation for U(r, t) but, hopefully, somewhat less so after the above reminder of what wave equations generally look like:

(∇2 − 1/a2 − (1/c2)·∂2/∂t2)U(r, t) = 0

As said, you can look up the nitty-gritty in Aitchison and Hey (or in its appendices) but, up to this point, you should be able to sort of appreciate what’s going on without getting lost in it all. Yukawa’s next step – and all that follows – is much more baffling. We’d think U, the nuclear potential, is just some scalar-valued wave, right? It varies in space and in time, but… Well… That’s what classical waves, like water or sound waves, for example, do too. So far, so good. However, Yukawa’s next step is to associate a de Broglie-type wavefunction with it.
Hence, Yukawa imposes solutions of the type:

U(r, t) ∝ e−i·(E·t − p·r)/ħ

What? Yes. It’s a big thing to swallow, and it doesn’t help that most physicists refer to U as a force field. A force and the potential that results from it are two different things. To put it simply: the force on an object is not the same as the work you need to move it from here to there. Force and potential are related but different concepts. Having said that, it sort of makes sense now, doesn’t it? If potential is energy, and if it behaves like some wave, then we must be able to associate it with a de Broglie-type particle. This U-quantum, as it is referred to, comes in two varieties, which are associated with the ongoing absorption-emission process that is supposed to take place inside of the nucleus:

p + U− → n and n + U+ → p


It’s easy to see that the U− and U+ particles are just each other’s anti-particle. When thinking about this, I can’t help remembering Feynman, when he enigmatically wrote – somewhere in his Strange Theory of Light and Matter – that an anti-particle might just be the same particle traveling back in time. In fact, the exchange here is supposed to happen within a time window that is so short that it allows for a brief violation of the energy conservation principle.

Let’s be more precise and try to find the properties of that mysterious U-quantum. You’ll need to refresh what you know about operators to understand how substituting Yukawa’s de Broglie wavefunction in the complicated-looking differential equation (the wave equation) gives us the following relation between the energy and the momentum of our new particle:

E2 = p2·c2 + ħ2·c2/a2

Now, it doesn’t take too many gimmicks to compare this against the relativistically correct energy-momentum relation:

E2 = p2·c2 + m2·c4

Combining both gives us the associated (rest) mass of the U-quantum:

mU = ħ/(a·c)

For a ≈ 2 fm, mU is about 100 MeV. Of course, it’s always good to check the dimensions and calculate stuff yourself. Note the physical dimension of ħ/(a·c) is N·s2/m = kg (just think of the F = m·a formula). Also note that N·s2/m = kg = (N·m)·s2/m2 = J/(m2/s2), so that’s the [E]/[c2] dimension. The calculation – and interpretation – is somewhat tricky though: if you do it, you’ll find that:

ħ/(a·c) ≈ (1.0545718×10−34 N·m·s)/[(2×10−15 m)·(2.99792458×108 m/s)] ≈ 0.176×10−27 kg

Now, most physics handbooks continue that terrible habit of writing particle masses in eV, rather than using the correct eV/c2 unit. So when they write that mU is about 100 MeV, they actually mean to say that it’s 100 MeV/c2. In addition, the eV is not an SI unit. Hence, to get that number, we should first write 0.176×10−27 kg as some value expressed in J/c2, and then convert the joule (J) into electronvolt (eV). Let’s do that. First, note that c2 ≈ 9×1016 m2/s2, so 0.176×10−27 kg ≈ 1.584×10−11 J/c2. Now we do the conversion from joule to electronvolt. We get: (1.584×10−11 J/c2)·(6.24215×1018 eV/J) ≈ 9.9×107 eV/c2 = 99 MeV/c2. Bingo! So that was Yukawa’s prediction for the nuclear force quantum.
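The whole calculation can be checked in a few lines. This is just the arithmetic above, nothing new:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
a = 2e-15                # range parameter, m (about 2 fm)
eV = 1.602176634e-19     # J per electronvolt

m = hbar / (a * c)              # rest mass of the U-quantum, in kg
m_MeV = m * c**2 / eV / 1e6     # same mass, expressed in MeV/c^2
print(m, m_MeV)  # about 0.176e-27 kg, i.e. roughly 99 MeV/c^2
```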

Of course, Yukawa was wrong but, as mentioned above, his ideas are now generally accepted. First note that the mass of the U-quantum is quite considerable: 100 MeV/c2 is a bit more than 10% of the individual proton or neutron mass (about 938–939 MeV/c2). While the binding energy causes the mass of an atom to be less than the mass of its constituent parts (protons, neutrons and electrons), it’s quite remarkable that the deuterium atom – a hydrogen atom with an extra neutron – has an excess mass of about 13.1 MeV/c2, and a binding energy with an equivalent mass of only 2.2 MeV/c2. So… Well… There’s something there.
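The 2.2 MeV/c2 binding energy is easy to verify from the measured masses. The values below are the standard ones, in MeV/c2, rounded:

```python
m_p = 938.272   # proton mass
m_n = 939.565   # neutron mass
m_d = 1875.613  # deuteron mass

binding = m_p + m_n - m_d   # mass defect of the deuteron
print(binding)  # about 2.22 MeV/c^2, matching the 2.2 MeV/c^2 quoted above
```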

As said, this post only wanted to introduce some basic ideas. The current model of nuclear physics is nicely represented by the animation in the Wikipedia article on the nuclear force. The U-quantum appears as the pion there – and it does not really turn the proton into a neutron and vice versa: those particles are assumed to be stable. In contrast, it is the quarks that change color by exchanging gluons between each other. And we now look at the exchange particle – which we refer to as the pion – between the proton and the neutron as a composite particle in its own right: a quark and an anti-quark. So… Yes… All weird. QCD is just a different world. We’ll explore it more in the coming days and/or weeks. 🙂 An alternative – and simpler – way of representing this exchange of a virtual particle (a neutral pion in this case) is obtained by drawing a so-called Feynman diagram. OK. That’s it for today. More tomorrow. 🙂

Some content on this page was disabled on June 20, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://en.support.wordpress.com/copyright-and-the-dmca/