The mystery of the elementary charge

As part of my ‘debunking quantum-mechanical myths’ drive, I re-wrote Feynman’s introductory lecture on quantum mechanics. Of course, it has got nothing to do with Feynman’s original lecture (titled Quantum Behavior): I just poked some fun at Feynman’s preface and that is basically it in terms of this iconic reference. Hence, Mr. Gottlieb should not make too much of a fuss—although I hope he will, of course, because it would draw more attention to the paper. It was a fun exercise because it encouraged me to join an interesting discussion on ResearchGate (I copied the topic and some of the back-and-forth below) which, in turn, made me think some more about what I wrote about the form factor in the explanation of the electron, muon and proton. Let me copy the relevant paragraph:

When we talked about the radius of a proton, we promised you we would talk some more about the form factor. The idea is very simple: an angular momentum (L) can always be written as the product of a moment of inertia (I) and an angular frequency (ω). We also know that the moment of inertia for a rotating point mass or a hoop is equal to I = m·r², while it is equal to I = m·r²/4 for a solid disk. So you might think this explains the 1/4 factor: a proton is just an anti-muon but in disk version, right? It is like a muon because of the strong force inside, but it is even smaller because it packs its charge differently, right?

Maybe. Maybe not. We think probably not. Maybe you will have more luck when playing with the formulas, but we could not demonstrate this. First, we must note, once again, that the radii of the muon (about 1.87 fm) and the proton (0.83–0.84 fm) are both smaller than the radius of the pointlike charge inside of an electron (α·ħ/(me·c) ≈ 2.818 fm). Hence, we should start by suggesting how we would pack the elementary charge into a muon first!

Second, we noted that the proton mass is 8.88 times that of the muon, while its radius is only 2.22 times smaller – so, yes, that 1/4 ratio once more (8.88/2.22 ≈ 4) – but these numbers are still weird: even if we would manage to, somehow, make abstraction of this form factor by accounting for the different angular momentum of a muon and a proton, we would probably still be left with a mass difference we cannot explain in terms of a unique force geometry.
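For those who want to check these numbers, here is a quick back-of-the-envelope script – a sketch only, using standard particle-data values: the proton radius is simply put in by hand as the measured charge radius (about 0.84 fm), the muon ‘radius’ is taken to be its reduced Compton wavelength ħ/(mμ·c), as in the paper, and the variable names are mine.

```python
# Quick re-check of the radii and ratios quoted above (a sketch, not part of the paper).
# The proton radius is the *measured* charge radius (put in by hand); the muon
# 'radius' is its reduced Compton wavelength; the electron value is alpha*hbar/(m_e*c),
# i.e. the classical electron radius quoted in the text (≈ 2.818 fm).
hbar_c = 197.3269804        # MeV·fm
alpha  = 1 / 137.035999084  # fine-structure constant
m_e, m_mu, m_p = 0.51099895, 105.6583755, 938.2720813  # masses in MeV/c^2
r_p = 0.84                  # fm, measured proton charge radius (assumed value)

r_e_classical = alpha * hbar_c / m_e   # ≈ 2.818 fm
r_mu          = hbar_c / m_mu          # ≈ 1.87 fm

print(f"classical electron radius : {r_e_classical:.3f} fm")
print(f"muon radius (hbar/m_mu*c) : {r_mu:.3f} fm")
print(f"mass ratio  m_p/m_mu      : {m_p / m_mu:.2f}")            # ≈ 8.88
print(f"radius ratio r_mu/r_p     : {r_mu / r_p:.2f}")            # ≈ 2.22
print(f"quotient (the 1/4 factor) : {(m_p/m_mu)/(r_mu/r_p):.2f}")  # ≈ 4
```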

Perhaps we should introduce other hypotheses: a muon is, after all, unstable, and so there may be another factor there: excited states of electrons are unstable too and involve an n = 2 or some other number in Planck’s E = n·h·f equation, so perhaps we can play with that too.

Our answer to such musings is: yes, you can. But please do let us know if you have more luck than we did when playing with these formulas: it is the key to the mystery of the strong force, and we did not find it—so we hope you do!

So… Well… This is really as far as a realist interpretation of quantum mechanics will take you. One can solve most so-called mysteries in quantum mechanics (interference of electrons, tunneling and what have you) with plain old classical equations (applying Planck’s relation to electromagnetic theory, basically) but here we are stuck: the elementary charge itself is a most mysterious thing. When packing it into an electron, a muon or a proton, Nature gives it a very different shape and size.

The shape or form factor is related to the angular momentum, while the size has got to do with scale: the scale of a muon and a proton is very different from that of an electron—smaller even than the pointlike Zitterbewegung charge which we used to explain the electron. So that’s where we are. It’s like we’ve got two quanta—rather than one only: Planck’s quantum of action, and the elementary charge. Indeed, Planck’s quantum of action may also be said to express itself very differently in space or in time (h = E·T versus h = p·λ). Perhaps there is room for additional simplification, but I doubt it. Something inside of me says that, when everything is said and done, I will just have to accept that electrons are electrons, and protons are protons, and a muon is a weird unstable thing in-between—and all other weird unstable things in-between are non-equilibrium states which one cannot explain with easy math.

Would that be good enough? For you? I cannot speak for you. Is it a good enough explanation for me? I am not sure. I have not made my mind up yet. I am taking a bit of a break from physics for the time being, but the question will surely continue to linger in the back of my mind. We’ll keep you updated on progress! Thanks for staying tuned! JL

PS: I realize the above might sound a bit like crackpot theory but that is just because it is very dense and very light writing at the same time. If you read the paper in full, you should be able to make sense of it. 🙂 You should also check the formulas for the moments of inertia: the I = m·r²/4 formula for a solid disk depends on your choice of the axis of symmetry (it holds for rotation about a diameter; for rotation about the central axis, it is m·r²/2).

ResearchGate

Peter Jackson

Dear Peter – Thanks so much for checking the paper and your frank comments. That is very much appreciated. I know I have gone totally overboard in dismissing much of post-WW II developments in quantum physics – most notably the idea of force-carrying particles (bosons – including Higgs, W/Z bosons and gluons). My fundamental intuition here is that field theories should be fine for modeling interactions (I’ll quote Dirac’s 1958 comments on that at the very end of my reply here) and, yes, we should not be limiting the idea of a field to EM fields only. So I surely do not want to give the impression I think classical 19th/early 20th century physics – Planck’s relation, electromagnetic theory and relativity – can explain everything.

Having said that, the current state of physics does resemble the state of scholastic philosophy before it was swept away by rationalism: I feel there has been a multiplication of ill-defined concepts that did not add much additional explanation of what might be the case (the latter expression is Wittgenstein’s definition of reality). So, yes, I feel we need some reincarnation of William of Occam to apply his Razor and kick ass. Fortunately, it looks like there are many people trying to do exactly that now – a return to basics – so that’s good: I feel like I can almost hear the tectonic plates moving. 🙂

My last paper is a half-serious rewrite of Feynman’s first Lecture on Quantum Mechanics. Its intention is merely provocative: I want to highlight what part of the ‘mystery’ in quantum physics is truly mysterious and what is humbug or – as Feynman would call it – Cargo Cult Science. The section on the ‘form factor’ (what is the ‘geometry’ of the strong force?) in that paper is the shortest and most naive paragraph in that text but it actually does highlight the one and only question that keeps me awake: what is that form factor, what different geometry do we need to explain a proton (or a muon) as opposed to, say, an electron? I know I have to dig into the kind of stuff that you are highlighting – and into Alex Burinskii’s Dirac–Kerr–Newman models (which also integrate gravity) – to find elements that may, one day, explain why a muon is not an electron, and why a proton is not a positron.

Indeed, I think the electron and photon models are just fine: classical EM and Planck’s relation are all that’s needed, and so I actually don’t want to waste any more time on the QED sector. But a decent muon and proton model will, obviously, require ‘something else’ besides Planck’s relation, the electric charge and electromagnetic theory. The question here is: what is that ‘something else’, exactly?

Even if we find another charge or another field theory to explain the proton, then we’re just at the beginning of explaining the QCD sector. Indeed, the proton and muon are stable (fairly stable, I should say, in the case of the muon – which I want to investigate because of the question of matter generations). In contrast, transient particles and resonances do not respect Planck’s relation – that’s why they are unstable – and so we are talking non-equilibrium states and so that’s an entirely different ballgame. In short, I think Dirac’s final words in the very last (fourth) edition of his ‘Principles of Quantum Mechanics’ still ring very true today. They were written in 1958, so Dirac was aware of the work of Gell-Mann and Nishijima (the contours of quark-gluon theory) and, clearly, did not think much of it (I understand he also had conversations with Feynman on this):

“Quantum mechanics may be defined as the application of equations of motion to particles. […] The domain of applicability of the theory is mainly the treatment of electrons and other charged particles interacting with the electromagnetic field – a domain which includes most of low-energy physics and chemistry.

Now there are other kinds of interactions, which are revealed in high-energy physics and are important for the description of atomic nuclei. These interactions are not at present sufficiently well understood to be incorporated into a system of equations of motion. Theories of them have been set up and much developed and useful results obtained from them. But in the absence of equations of motion these theories cannot be presented as a logical development of the principles set up in this book. We are effectively in the pre-Bohr era with regard to these other interactions. It is to be hoped that with increasing knowledge a way will eventually be found for adapting the high-energy theories into a scheme based on equations of motion, and so unifying them with those of low-energy physics.”

Again, many thanks for reacting and, yes, I will study the references you gave – even if I am a bit skeptical of Wolfram’s new project. Cheers – JL

The wavefunction in a medium: amplitudes as signals

We finally did what we wanted to do for a while already: we produced a paper on the meaning of the wavefunction and wave equations in the context of an atomic lattice (think of a conductor or a semiconductor here). Unsurprisingly, we came to the following conclusions:

1. The concept of the matter-wave traveling through the vacuum, an atomic lattice or any medium can be equated to the concept of an electric or electromagnetic signal traveling through the same medium.

2. There is no need to model the matter-wave as a wave packet: a single wave – with a precise frequency and a precise wavelength – will do.

3. If we do want to model the matter-wave as a wave packet rather than a single wave with a precisely defined frequency and wavelength, then the uncertainty in such wave packet reflects our own limited knowledge about the momentum and/or the velocity of the particle that we think we are representing. The uncertainty is, therefore, not inherent to Nature, but to our limited knowledge about the initial conditions or, what amounts to the same, what happened to the particle(s) in the past.

4. The fact that such wave packets usually dissipate very rapidly reflects the fact that even our limited knowledge about initial conditions tends to become equally rapidly irrelevant. Indeed, as Feynman puts it, “the tiniest irregularities tend to get magnified very quickly” at the micro-scale. (A quick numerical illustration of how fast a wave packet spreads follows below.)
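To put a rough number on that last point: the spreading of a free-particle Gaussian wave packet is a standard textbook result – σ(t) = σ0·√(1 + (ħ·t/2mσ0²)²) – and the little script below (an illustration only, not taken from the paper, with a purely hypothetical initial width of 1 nm) shows how quickly an electron packet loses whatever sharpness we gave it.

```python
# Illustration only (not from the paper): how fast a free-electron Gaussian
# wave packet spreads under the textbook dispersion relation w = hbar*k^2/(2m).
# sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2)
import math

hbar   = 1.054571817e-34   # J·s
m_e    = 9.1093837015e-31  # kg
sigma0 = 1e-9              # hypothetical initial width: 1 nm

def width(t):
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2.0 * m_e * sigma0**2))**2)

for t in (1e-15, 1e-14, 1e-13, 1e-12):            # seconds
    print(f"t = {t:.0e} s  ->  width ≈ {width(t)/1e-9:.2f} nm")
# A packet localized to 1 nm roughly doubles its width within ~30 femtoseconds.
```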

In short, as Hendrik Antoon Lorentz noted a few months before his demise, there is, effectively, no reason whatsoever “to elevate indeterminism to a philosophical principle.” Quantum mechanics is just what it should be: common-sense physics.

The paper confirms intuitions we had highlighted in previous papers already, but uses the formalism of quantum mechanics itself to demonstrate this.

PS: We put the paper on academia.edu and ResearchGate as well, but Phil Gibbs’ site has easy access (no log-in or membership required). Long live Phil Gibbs!

The metaphysics of physics

I just produced a first draft of the Metaphysics page of my new physics site. It does not only deal with the fundamental concepts we have been developing but – as importantly, if not more so – it also offers some thoughts on all of the unanswered questions which, when trying to do science and be logical, are at least as important as the questions we do consider to be solved. Click the link or the tab. Enjoy! 🙂 As usual, feedback is more than welcome!

The ultimate electron model

A rather eminent professor in physics – who has contributed significantly to solving the so-called ‘proton radius puzzle’ – advised me not to think of the anomalous magnetic moment of the electron as an anomaly. It led to a breakthrough in my thinking about what an electron might actually be. The fine-structure constant should be part and parcel of the model, indeed. Check out my latest paper! I’d be grateful for comments!

I know the title of this post sounds really arrogant. It is what it is. Whatever brain I have has been thinking about these issues consciously and unconsciously for many years now. It looks good to me. When everything is said and done, the function of our mind is to make sense. What’s sense-making? I’d define sense-making as creating consistency between (1) the structure of our ideas and theories (which I’ll conveniently define as ‘mathematical’ here) and (2) what we think of as the structure of reality (which I’ll define as ‘physical’).

I started this blog reading Penrose (see the About page of this blog). And then I just put his books aside and started reading Feynman. I think I should start re-reading Penrose. His ‘mind-physics-math’ triangle makes a lot more sense to me now.

JL

PS: I agree the title of my post is excruciatingly arrogant but – believe me – I could have chosen an even more arrogant title. Why? Because I think my electron model actually explains mass. And it does so in a much more straightforward manner than Higgs, or Brout–Englert–Higgs, or Englert–Brout–Higgs–Guralnik–Hagen–Kibble, Anderson–Higgs, Anderson–Higgs–Kibble, Higgs–Kibble, or ABEGHHK’t (for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble, and ‘t Hooft) do. [I am just trying to attribute the theory here using the Wikipedia article on it.] 

Taking the magic out of God’s number

Note: I have published a paper that is very coherent and fully explains this so-called God-given number. There is nothing magical about it. It is just a scaling constant. Check it out: The Meaning of the Fine-Structure Constant. No ambiguity. No hocus-pocus.

Jean Louis Van Belle, 23 December 2018

Original post:

I think the post scriptum to my previous post is interesting enough to separate it out as a piece of its own, so let me do that here. You’ll remember that we were trying to find some kind of a model for the electron, picturing it like a tiny little ball of charge, and then we just applied the classical energy formulas to it to see what comes out of it. The key formula is the integral that gives us the energy that goes into assembling a charge. It was the following thing:

U = (1/2)·∫∫ [ρ(1)·ρ(2)/(4πε0·r12)]·dV1·dV2

This is a double integral which we simplified in two stages, so we’re looking at an integral within an integral really, but we can substitute the integral over the ρ(2)·dV2 product by the formula we got for the potential, so we write that as Φ(1), and so the integral above becomes:

U = (1/2)·∫ ρ(1)·Φ(1)·dV1

Now, this integral integrates the ρ(1)·Φ(1)·dV1 product over all of space, so that’s over all points in space, and so we just dropped the index and wrote the whole thing as the integral of ρ·Φ·dV over all of space:

U = (1/2)·∫ ρ·Φ·dV

We then established that this integral was mathematically equivalent to the following equation:

U = (ε0/2)·∫ E·E·dV

So this integral is actually quite simple: it just integrates E·E = E² over all of space. The illustration below shows E as a function of the distance r for a sphere of radius R filled uniformly with charge.

[Figure: the field E of a sphere of radius R filled uniformly with charge, as a function of the distance r]
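For reference – this is just the standard electrostatics result, nothing new – the field of a sphere of radius R filled uniformly with a total charge qe is:

$$E(r) = \frac{q_e}{4\pi\varepsilon_0}\cdot\begin{cases}\, r/R^3 & \text{for } r \le R \\[1ex] \, 1/r^2 & \text{for } r \ge R\end{cases}$$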

So the field (E) goes as r for r ≤ R and as 1/r² for r ≥ R. So, for r ≥ R, the integrand will have a (1/r²)² = 1/r⁴ factor in it. Now, you know that the integral of some function is the surface under the graph of that function. Look at the 1/r⁴ function below: it blows up between 1 and 0. That’s where the problem is: there needs to be some kind of cut-off, because that integral will effectively blow up when the radius of our little sphere of charge gets ‘too small’. So that makes it clear why it doesn’t make sense to use this formula to try to calculate the energy of a point charge. It just doesn’t make sense to do that.

[Graph: the 1/r⁴ function, blowing up as r goes to zero]

In fact, the need for a ‘cut-off factor’ so as to ensure our energy function doesn’t ‘blow up’ is not because of the exponent in the 1/r⁴ expression: the need is also there for any 1/r relation, as illustrated below. All 1/rⁿ functions have the same pivot point, as you can see from the simple illustration below. So, yes, we cannot go all the way to zero from there when integrating: we have to stop somewhere.

[Graph: 1/rⁿ functions for various exponents n]

So what’s the ‘cut-off point’? What’s ‘too small’ a radius? Let’s look at the formula we got for our electron as a shell of charge (so the assumption here is that the charge is uniformly distributed on the surface of a sphere with radius a):

Uelect = (1/2)·e²/a

So we’ve got an even simpler formula here: it’s just a 1/r relation (a is the r in this formula), not 1/r⁴. Why is that? Well… It’s just the way the math turns out: we’re integrating over volumes and so that involves an r³ factor and so it all simplifies to 1/r, and so that gives us this simple inversely proportional relationship between U and r, i.e. a, in this case. 🙂 I copied the detail of Feynman’s calculation in my previous post, so you can double-check it. It’s quite wonderful, really. Look at it again: we have a very simple inversely proportional relationship between the radius of our electron and its energy as a sphere of charge. We could write it as:

Uelect = α/a, with α = e²/2
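In case you want to see where that e²/2 comes from without going back to the previous post, here is the field-energy integral written out for a shell of charge – a quick sketch only, assuming the field is qe/4πε0r² outside the shell and zero inside:

$$U_\text{elect} = \frac{\varepsilon_0}{2}\int_a^{\infty}\!\left(\frac{q_e}{4\pi\varepsilon_0 r^2}\right)^{\!2} 4\pi r^2\,dr = \frac{q_e^2}{8\pi\varepsilon_0}\int_a^{\infty}\!\frac{dr}{r^2} = \frac{q_e^2}{8\pi\varepsilon_0\,a} = \frac{e^2}{2a},\qquad e^2 \equiv \frac{q_e^2}{4\pi\varepsilon_0}$$

Note that the lower limit a is exactly the ‘cut-off point’ we are talking about: let a go to zero and the integral blows up.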

Still… We need the ‘cut-off point’. Also note that, as I pointed out, we don’t necessarily need to assume that the charge in our little ball of charge (i.e. our electron) sits on the surface only: if we’d assume it’s a uniformly charged sphere, we’d just get another constant of proportionality: our 1/2 factor would become a 3/5 factor, so we’d write: Uelect = (3/5)·e²/a. But we’re not interested in finding the right model here. We know the Uelect = (3/5)·e²/a formula gives us a value for a that differs by a 2/5 factor from the classical electron radius. That’s not so bad and so let’s go along with it. 🙂

We’re going to look at the simple structure of this relation, and all of its implications. The simple equation above says that the energy of our electron is (a) proportional to the square of its charge and (b) inversely proportional to its radius. Now, that is a very remarkable result. In fact, we’ve seen something like this before, and we were astonished. We saw it when we were discussing the wonderful properties of that magical number, the fine-structure constant, which we also denoted by α. However, because we used α already, I’ll denote the fine-structure constant as αe here, so you don’t get confused. You’ll remember that the fine-structure constant is a God-like number indeed: it links all of the fundamental properties of the electron, i.e. its charge, its radius, its distance to the nucleus (i.e. the Bohr radius), its velocity, its mass (and, hence, its energy), and its de Broglie wavelength. Whatever: all of these physical quantities are related through the fine-structure constant.

In my various posts on this topic, I’ve repeatedly said that, but I never showed why it’s true, and so it was a very magical number indeed. I am going to take some of the magic out now. Not too much but… Well… You can judge for yourself how much of the magic remains after I am done here. 🙂

So, at this stage of the argument, α can be anything, and αe cannot, of course. It’s just that magical number out there, which relates everything to everything: it’s the God-given number we don’t understand, or didn’t understand, I should say. Past tense. Indeed, we’re going to get some understanding here because we know that one of the many expressions involving αe was the following one:

me = αe/re

This says that the mass of the electron is equal to the ratio of the fine-structure constant and the electron radius. [Note that we express everything in natural units here, so that’s Planck units. For the detail of the conversion, please see the relevant section on that in one of my posts on this and other stuff.] In fact, the U = (3/5)·e²/a and me = αe/re relations look exactly the same, because one of the other equations involving the fine-structure constant was: αe = eP². So we’ve got the square of the charge here as well! Indeed, as I’ll explain in a moment, the difference between the two formulas is just a matter of units.

Now, mass is equivalent to energy, of course: it’s just a matter of units, so we can equate me with Ee (this amounts to expressing the energy of the electron in a kg unit—bit weird, but OK) and so we get:

Ee = αe/re

So there we have it: the fine-structure constant αe is Nature’s ‘cut-off’ factor, so to speak. Why? Only God knows. 🙂 But it’s now (fairly) easy to see why all the relations involving αe are what they are. As I mentioned already, we also know that αe is the square of the electron charge expressed in Planck units, so we have:

αe = eP² and, therefore, Ee = eP²/re

Now, you can check for yourself: it’s just a matter of re-expressing everything in standard SI units, and relating eP² to e², and it should all work: you should get the Eelect = (2/3)·e²/a expression. So… Well… At least this takes some of the magic out of the fine-structure constant. It’s still a wonderful thing, but so you see that the fundamental relationship between (a) the energy (and, hence, the mass), (b) the radius and (c) the charge of an electron is not something God-given. What’s God-given are Maxwell’s equations, and so the Ee = αe/re = eP²/re relation is just one of the many wonderful things that you can get out of them.

So we found God’s ‘cut-off factor’. 🙂 It’s equal to αe ≈ 0.0073 = 7.3×10⁻³. So 7.3 thousandths of… What? Well… Nothing. It’s just a pure ratio between the energy and the radius of an electron (if both are expressed in Planck units, of course). And so it determines the electron charge (again, expressed in Planck units). Indeed, we write:

eP = √αe

Really? Yes. Just work out the numbers:

eP = √αe ≈ √0.0073, so, multiplying by the Planck charge, e ≈ √0.0073·(1.9×10⁻¹⁸ C) ≈ 1.6×10⁻¹⁹ C

Just re-check it with all the known decimals: you’ll see it’s bang on. Let’s look at the Ee = me = αe/re relation once again. What’s the meaning of it? Let’s first calculate the values of re and me, i.e. the electron radius and the electron mass expressed in Planck units. The former is the classical electron radius divided by the Planck length, and the latter is the electron mass divided by the Planck mass, so we get the following:

re ≈ (2.81794×10⁻¹⁵ m)/(1.6162×10⁻³⁵ m) = 1.7435×10²⁰

me ≈ (9.1×10⁻³¹ kg)/(2.17651×10⁻⁸ kg) = 4.18×10⁻²³

αe = (4.18×10⁻²³)·(1.7435×10²⁰) ≈ 0.0073
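If you want to re-do this arithmetic with more decimals, the little script below does it – just a check, using standard CODATA-style values for the Planck charge, the Planck length and the Planck mass (the variable names are mine):

```python
# Re-checking the Planck-unit arithmetic above with more decimals.
import math

alpha       = 7.2973525693e-3    # fine-structure constant
q_P         = 1.875545956e-18    # Planck charge, C
r_classical = 2.8179403262e-15   # classical electron radius, m
l_P         = 1.616255e-35       # Planck length, m
m_electron  = 9.1093837015e-31   # electron mass, kg
m_P         = 2.176434e-8        # Planck mass, kg

e_coulomb = math.sqrt(alpha) * q_P     # ≈ 1.602e-19 C
r_e = r_classical / l_P                # ≈ 1.74e20  (in Planck units)
m_e = m_electron / m_P                 # ≈ 4.19e-23 (in Planck units)

print(f"e = sqrt(alpha)*(Planck charge) = {e_coulomb:.4e} C")
print(f"r_e (Planck units)              = {r_e:.4e}")
print(f"m_e (Planck units)              = {m_e:.4e}")
print(f"m_e * r_e                       = {m_e*r_e:.5f} vs alpha = {alpha:.5f}")
```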

It works like a charm, but what does it mean? Well… It’s just a ratio between two physical quantities, and the scale you use to measure those quantities matters very much. We’ve explained that the Planck mass is a rather large unit at the atomic scale and, therefore, it’s perhaps not quite appropriate to use it here. In fact, out of the many interesting expressions for αe, I should highlight the following one:

αe = e²/(ħ·c) ≈ (1.60217662×10⁻¹⁹ C)²/(4πε0·(1.054572×10⁻³⁴ N·m·s)·(2.998×10⁸ m/s)) ≈ 0.0073 once more 🙂

Note that the e² in this formula is actually equal to qe²/4πε0, which is what I am using. I know that’s confusing, but it is what it is. As for the units, it’s a bit tedious to write it all out, but you’ll get there. Note that ε0 ≈ 8.8542×10⁻¹² C²/(N·m²) so… Well… All the units do cancel out, and we get a dimensionless number indeed, which is what αe is.
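Written out in full, the dimensional bookkeeping – just making explicit what I said above – looks like this:

$$\alpha_e = \frac{q_e^2}{4\pi\varepsilon_0\,\hbar c}\;\Rightarrow\;[\alpha_e] = \frac{\text{C}^2}{\dfrac{\text{C}^2}{\text{N·m}^2}\cdot(\text{N·m·s})\cdot\dfrac{\text{m}}{\text{s}}} = \frac{\text{C}^2\cdot\text{N·m}^2}{\text{C}^2\cdot\text{N·m}^2} = 1$$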

The point is: this expression links αe to the de Broglie relation (p = h/λ), with λ the wavelength that’s associated with the electron. Of course, because of the Uncertainty Principle, we know we’re talking about some wavelength range really, so we should write the de Broglie relation as Δp = h·Δ(1/λ). Now, that, in turn, allows us to try to work out the Bohr radius, which is the other ‘dimension’ we associate with an electron. Of course, now you’ll say: why would you do that? Why would you bring in the de Broglie relation here?

Well… We’re talking energy, and so we have the Planck-Einstein relation first: the energy of some particle can always be written as the product of h and some frequency f: E = h·f. The only thing that the de Broglie relation adds is the Uncertainty Principle indeed: the frequency will be some frequency range, associated with some momentum range, and so that’s what the Uncertainty Principle really says. I can’t dwell too much on that here, because otherwise this post would become a book. 🙂 For more detail, you can check out one of my many posts on the Uncertainty Principle. In fact, the one I am referring to here has Feynman’s calculation of the Bohr radius, so I warmly recommend you check it out. The thrust of the argument is as follows:

  1. If we assume that (a) an electron takes up some space – which I’ll denote by r 🙂 – and (b) that it has some momentum p because of its mass m and its velocity v, then the ΔxΔp = ħ relation (i.e. the Uncertainty Principle in its roughest form) suggests that the order of magnitude of r and p should be related in the very same way. Hence, let’s just boldly write r ≈ ħ/p and see what we can do with that.
  2. We know that the kinetic energy of our electron equals mv²/2, which we can write as p²/2m so we get rid of the velocity factor. Well… Substituting our p ≈ ħ/r conjecture, we get K.E. = ħ²/2mr². So that’s a formula for the kinetic energy. Next is potential.
  3. The formula for the potential energy is U = q1q2/4πε0r12. Now, we’re actually talking about the size of an atom here, so one charge is the proton (+e) and the other is the electron (–e), so the potential energy is U = P.E. = –e²/4πε0r, with r the ‘distance’ between the proton and the electron—so that’s the Bohr radius we’re looking for!
  4. We can now write the total energy (which I’ll denote by E, but don’t confuse it with the electric field vector!) as E = K.E. + P.E. = ħ²/2mr² – e²/4πε0r. Now, the electron (whatever it is) is, obviously, in some kind of equilibrium state. Why is that obvious? Well… Otherwise our hydrogen atom wouldn’t or couldn’t exist. 🙂 Hence, it’s in some kind of energy ‘well’ indeed, at the bottom. Such an equilibrium point ‘at the bottom’ is characterized by its derivative (with respect to whatever variable) being equal to zero. Now, the only ‘variable’ here is r (all the other symbols are physical constants), so we have to solve for dE/dr = 0. Writing it all out yields: dE/dr = –ħ²/mr³ + e²/4πε0r² = 0 ⇔ r = 4πε0ħ²/me²
  5. We can now put the values in: r = 4πε0ħ²/me² = [(1/(9×10⁹)) C²/(N·m²)·(1.055×10⁻³⁴ J·s)²]/[(9.1×10⁻³¹ kg)·(1.6×10⁻¹⁹ C)²] ≈ 53×10⁻¹² m = 53 picometer (pm)

Done. We’re right on the spot. The Bohr radius is, effectively, about 53 trillionths of a meter indeed!
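Here is the same calculation as a three-line script, in case you want to check it with more precise values – it is nothing more than step 5 above:

```python
# Step 5 again, with more decimals: r = 4*pi*eps0*hbar^2 / (m_e * q_e^2)
import math

eps0, hbar = 8.8541878128e-12, 1.054571817e-34   # C^2/(N·m^2), J·s
m_e, q_e   = 9.1093837015e-31, 1.602176634e-19   # kg, C

r = 4 * math.pi * eps0 * hbar**2 / (m_e * q_e**2)
print(f"r ≈ {r*1e12:.1f} pm")   # ≈ 52.9 pm, i.e. the Bohr radius
```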

Phew!

Yes… I know… Relax. We’re almost done. You should now be able to figure out why the classical electron radius and the Bohr radius can also be related to each other through the fine-structure constant. We write:

me = α/re = α/(α²·r) = 1/(α·r)

So we get that α/re = 1/(α·r) and, therefore, we get re/r = α², which explains why α is also equal to the so-called junction number, or the coupling constant, for an electron-photon coupling (see my post on the quantum-mechanical aspects of the photon-electron interaction). It gives a physical meaning to the probability (which, as you know, is the absolute square of the probability amplitude) in terms of the chance of a photon actually ‘hitting’ the electron as it goes through the atom. Indeed, the ratio of the Thomson scattering cross-section and the Bohr size of the atom should be of the same order as re/r, and so that’s α².
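Again, that is easy to verify numerically – a two-line check using the standard values for the classical electron radius and the Bohr radius:

```python
# Checking that r_e / r (classical electron radius over Bohr radius) is alpha^2.
alpha, r_e, r_bohr = 7.2973525693e-3, 2.8179403262e-15, 5.29177210903e-11  # -, m, m
print(f"r_e/r   = {r_e / r_bohr:.6e}")   # ≈ 5.3251e-05
print(f"alpha^2 = {alpha**2:.6e}")       # ≈ 5.3251e-05
```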

[Note: To be fully correct and complete, I should add that the coupling constant itself is not α² but √α = eP. Why do we have this square root? You’re right: the fact that the probability is the absolute square of the amplitude explains one square root (√(α²) = α), but not two. The thing is: the photon-electron interaction consists of two things. First, the electron sort of ‘absorbs’ the photon, and then it emits another one, that has the same or a different frequency depending on whether or not the ‘collision’ was elastic. So if we denote the coupling constant as j, then the whole interaction will have a probability amplitude equal to j². In fact, the value which Feynman uses in his wonderful popular presentation of quantum mechanics (The Strange Theory of Light and Matter) is −√α ≈ −0.085. I am not quite sure why the minus sign is there. It must be something with the angles involved (the emitted photon will not be following the trajectory of the incoming photon) or, else, with the special arithmetic involved in boson-fermion interactions (we add amplitudes when bosons are involved, but subtract amplitudes when it’s fermions interacting). I’ll probably find out once I am through Feynman’s third volume of Lectures, which focuses on quantum mechanics only.]

Finally, the last bit of unexplained ‘magic’ in the fine-structure constant is that the fine-structure constant (which I’ve started to write as α again, instead of αe) also gives us the (classical) relative speed of an electron, so that’s its speed as it orbits around the nucleus (according to the classical theory, that is), so we write

α = v/c = β

I should go through the motions here – I’ll probably do so in the coming days – but you can see we must be able to get it out somehow from all that we wrote above. See how powerful our Uelect ∼ e²/a relation really is? It links the electron’s charge, its radius and its energy, and it’s all we need to get all the rest out of it: its mass, its momentum, its speed and – through the Uncertainty Principle – the Bohr radius, which is the size of the atom.
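For the impatient: one rough way to ‘go through the motions’ – a sketch only, combining the r ≈ ħ/p guess from the Bohr-radius argument above with the r = 4πε0ħ²/me² result – goes like this:

$$v = \frac{p}{m} \approx \frac{\hbar}{m\,r} = \frac{\hbar}{m}\cdot\frac{m\,e^2}{4\pi\varepsilon_0\hbar^2} = \frac{e^2}{4\pi\varepsilon_0\hbar} = \alpha\,c \;\;\Rightarrow\;\; \beta = \frac{v}{c} = \alpha \approx 0.0073$$

Numerically, that is about 2.2×10⁶ m/s – the electron velocity in the ground state of the Bohr model.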

We’ve come a long way. This is truly a milestone. We’ve taken the magic out of God’s number—to some extent at least. 🙂

You’ll have one last question, of course: if proportionality constants are all about the scale in which we measure the physical quantities on either side of an equation, is there some way the fine-structure constant would come out differently? That’s the same as asking: what if we’d measure energy in units that are equivalent to the energy of an electron, and the radius of our electron just as… Well… What if we’d equate our unit of distance with the radius of the electron, so we’d write re = 1? What would happen to α? Well… I’ll let you figure that one out yourself. I am tired and so I should go to bed now. 🙂

[…] OK. OK. Let me tell you. It’s not that simple here. All those relationships involving α, in one form or the other, are very deep. They relate a lot of stuff to a lot of stuff, and we can appreciate that only when doing a dimensional analysis. A dimensional analysis of the Ee = αe/re = eP²/re relation yields [eP²/re] = C²/m on the right-hand side and [Ee] = J = N·m on the left-hand side. How can we reconcile both? The coulomb is an SI base unit, so we can’t ‘translate’ it into something with N and m. [To be fully correct, for some reason, the ampère (i.e. coulomb per second) was chosen as an SI base unit, but they’re interchangeable in regard to their place in the international system of units: they can’t be reduced.] So we’ve got a problem. Yes. That’s where we sort of ‘smuggled’ the 4πε0 factor in when doing our calculations above. That ε0 constant is, obviously, not ‘as fundamental’ as c or α (just think of the c⁻² = ε0μ0 relationship to understand what I mean here) but, still, it was necessary to make the dimensions come out alright: we need the reciprocal dimension of ε0, i.e. (N·m²)/C², to make the dimensional analysis work. We get: (C²/m)·(N·m²)/C² = N·m = J, i.e. joule, so that’s the unit in which we measure energy or – using the E = mc² equivalence – mass, which is the aspect of energy emphasizing its inertia.

So the answer is: no. Changing units won’t change alpha. So all that’s left is to play with it now. Let’s try to do that. Let me first plot that Ee = me = αe/re = 0.00729735256/re:

[Graph: Ee = αe/re as a function of re, together with the diagonal]

Unsurprisingly, we find the pivot point of this curve is at the intersection of the diagonal and the curve itself, so that’s at the (0.00729735256, 0.00729735256) point, where slopes are ±1, i.e. plus or minus unity. What does this show? Nothing much. What? I can hear you: I should be excited because… Well… Yes! Think of it. If you would have to choose a cut-off point, you’d choose this one, wouldn’t you? 🙂 Sure, you’re right. How exciting! Let me show you. Look at it! It proves that God thinks in terms of logarithms. He has chosen α such that ln(E) = ln(α/r) = ln α – ln r = 0, so ln α = ln r and, therefore, α = r. 🙂

Huh? Excuse me?

I am sorry. […] Well… I am not, of course… 🙂 I just wanted to illustrate the kind of exercise some people are tempted to do. It’s no use. The fine-structure constant is what it is: it sort of summarizes an awful lot of formulas. It basically shows what Maxwell’s equations imply in terms of the structure of an atom defined as a negative charge orbiting around some positive charge. It shows we can calculate everything as a function of something else, and that’s what the fine-structure constant tells us: it relates everything to everything. However, when everything is said and done, the fine-structure constant shows us two things:

  1. Maxwell’s equations are complete: we can construct a complete model of the electron and the atom, which includes: the electron’s energy and mass, its velocity, its own radius, and the radius of the atom. [I might have forgotten one of the dimensions here, but you’ll add it. :-)]
  2. God doesn’t want our equations to blow up. Our equations are all correct but, in reality, there’s a cut-off factor that ensures we don’t go to the limit with them.

So the fine-structure constant anchors our world, so to speak. In other words: of all the worlds that are possible, we live in this one.

[…] It’s pretty good as far as I am concerned. Isn’t it amazing that our mind is able to just grasp things like that? I know my approach here is pretty intuitive, and with ‘intuitive’, I mean ‘not scientific’ here. 🙂 Frankly, I don’t like the talk about physicists “looking into God’s mind.” I don’t think that’s what they’re trying to do. I think they’re just trying to understand the fundamental unity behind it all. And that’s religion enough for me. 🙂

So… What’s the conclusion? Nothing much. We’ve sort of concluded our description of the classical world… Well… Of its ‘electromagnetic sector’ at least. 🙂 That sector can be summarized in Maxwell’s equations, which describe an infinite world of possible worlds. However, God fixed three constants: c, h and α. So we live in a world that’s defined by this Trinity of fundamental physical constants. Why is it not two, or four?

My gut instinct tells me it’s because we live in three dimensions, and so there are three degrees of freedom, really. But what about time? Time is the fourth dimension, isn’t it? Yes. But time is symmetric in the ‘electromagnetic’ sector: we can reverse the arrow of time in our equations and everything still works. The arrow of time involves other theories: statistics (physicists refer to it as ‘statistical mechanics‘) and the ‘weak force’ sector, which I discussed when talking about symmetries in physics. So… Well… We’re not done. God gave us plenty of other stuff to try to understand. 🙂