Physical humbug

A good thing and a bad thing today:

1. The good thing is: I expanded my paper which deals with more advanced questions on this realist interpretation of QM (based on the mass-without-mass models of elementary particles that I have been pursuing). I think I see everything clearly now: Maxwell’s equations only make sense once the concepts of charge densities (expressed in coulomb per unit volume or area: C/m³ or C/m²) and currents (expressed in C/s) start making sense, which is only above the threshold of Planck’s quantum of action and within the quantization limits set by the Planck-Einstein relation. So, yes, we can, finally, confidently write this:

Quantum Mechanics = All of Physics = Maxwell’s equations + Planck-Einstein relation

2. The bad thing: I had an annoying discussion on ResearchGate on the consistency of quantum physics with one of those people who still seem to doubt both special and general relativity theory.

To get my frustration out, I copy the exchange below – it might be informative if you are ever confronted with weirdos on some scientific forum too! It starts with a rather nonsensical remark on the reality of infinities, and an equally nonsensical question on how we get quantization out of classical equations (Maxwell’s equations and the Gauss and Stokes theorems), to which the answer has to be: we do not, of course! For that, you need to combine them with the Planck-Einstein relation!

Start of the conversation: Jean Louis Van Belle, I found Maxwell quite consistent with, for instance Stokes aether model. Can you explain how he ‘threw it out‘. It was a firm paradigm until Einstein removed it’s power to ‘change‘ light speed, yet said “space without aether is unthinkable.” (Leiden ’21). He then mostly re-instated it in his ’52 paper correcting 1905 interpretations in bounded ‘spaces in motion within spaces) completed in the DFM. ‘QM’ then emerges.

My answer: Dear Peter – As you seem to believe zero-dimensional objects can have properties and, therefore, exist, and also seem to believe infinity is real (not just a mathematical idealization), then we’re finished talking, because – for example – no sensible interpretation of the Planck-Einstein relation is possible in such circumstances. Also, all of physics revolves around conjugate variables, and these combine in products or product sums that have very small but finite values (think of the typical canonical commutation relations, for example): products of infinity and zero are undefined – in mathematics too, by the way! I attach a ‘typically Feynman’ explanation of one of these commutation relations, which covers the topic rather well. I could also refer to Dirac’s definition of the Dirac delta function (real probability functions do not collapse into an infinite probability density), or to his comments on the infinities appearing in the perturbation theory he himself had developed, and from which he then distanced himself precisely because it generated infinities, which could not be ‘real’ according to him. I’ve got the feeling you’re stuck in 19th-century classical physics. Perhaps you missed one or two other points from Einstein as well (apart from the references you give). To relate this discussion to the original question of this thread, I’d say: physicists who mistake mathematical idealizations for reality obviously do not understand quantum mechanics. Cheers – JL

PS: We may, of course, in our private lives believe that God ‘exists’ and that he is infinite and whatever, but that’s personal conviction or opinion: it is not science – nothing empirical that has been verified and can be verified again at any time. Oh – and to answer your specific question on Maxwell’s equations and vector algebra (the Gauss and Stokes theorems): they do not incorporate the Planck-Einstein relation. That’s all. Planck-Einstein (quantization of reality) + Maxwell (classical EM) = quantum physics.

Immediate reply: Jean Louis Van Belle , I don’t invoke either zero dimensional objects, infinity or God! Neither the Planck length or Wolfram’s brilliant 10⁻⁹³ is ‘zero’. Fermion pair scale is the smallest ‘Condensed Matter‘ but I suggest we must think beyond that to the condensate & ‘vacuum energy’ scales to advance understanding. More 22nd than 19th century! Einstein is easy to ‘cherry pick’ but his search for SR’s ‘physical’ state bore fruit in 1952!

[This Peter actually did refer to infinities and zeroes in math as being more than mathematical idealizations, but then edited out these specific stupidities.]

My answer: Dear Peter – I really cannot understand why you want to disprove SRT. SRT (or, at least, the absoluteness of lightspeed) comes out of Maxwell’s equations. Einstein just provided a rather heuristic argument to ‘prove’ it. Maxwell’s equations are the more ‘real thing’ – so to speak. And then GRT just comes from combining SRT and Mach’s principle. What problem are you trying to solve? I understand that, somehow, QM does NOT come across as ‘consistent’ to you (so I do not suffer from that: all equations look good to me – I just have my own ‘interpretation’ of it, but I do not question their validity). You seem to suspect something is wrong with quantum physics somewhere, but I don’t see exactly where.

Also, can you explain in a few words what you find brilliant about Wolfram’s number? I find the f/m = c²/h = 1.35639248965213×10⁵⁰ number brilliant, because it gives us a frequency per unit mass which is valid for all kinds of mass (electron, proton, or whatever combination of charged and neutral matter you may think of), but it comes straight out of E = mc² and E = hf, so it is not some new ‘God-given’ number or something ‘very special’: it is just a combination of two fundamental constants of Nature that we already know. I also find the fine-structure constant (and the electric/magnetic constants) ‘brilliant numbers’ but, again, I don’t think they are something mysterious. So what is Wolfram’s number about? What kind of ratio or combination of functions, or unexplained explanation, or new undiscovered simplification of existing mainstream explanations does it bring? Is it a new proportionality constant – some elasticity of spacetime, perhaps? A combination of Planck-scale units? Does it connect G and the electric constant? An update of (the inverse of) Eddington’s estimate of the number of protons in the Universe based on the latest measurements of the cosmological constant? Boltzmann’s constant and Avogadro’s number (or, in light of the negative exponent, their inverses) – through the golden ratio or a whole new ‘holographic’ theory? New numbers are usually easy to explain in terms of existing theory – or in terms of what they propose to change to existing theory, no?

Perhaps an easy start is to give us a physical dimension for Wolfram’s number. My 1.35639248965213×10⁵⁰ number is the (exact) number of oscillations per kg, for example – not oscillations of ‘aether’ or something, but of charge in motion. Except for scaling or coupling constants such as the fine-structure constant, all numbers in physics have a physical dimension: even if it is only a scalar (a plain number), it is a number describing x units of something, or a density (x per m³ or m², per J, per kg, per coulomb, per ampere, etcetera – whatever SI unit or combination of SI units you want to choose).
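For what it is worth, that number is easy to check – a two-line sketch, with the usual CODATA values for c and h assumed (they are not taken from any of the papers mentioned here):

```python
# Quick check of the f/m = c²/h figure quoted above (CODATA values assumed).
c = 299792458.0           # speed of light [m/s]
h = 6.62607015e-34        # Planck's quantum of action [J·s]

f_per_kg = c**2 / h       # oscillations per second, per kg of mass
print(f"{f_per_kg:.6e}")  # ≈ 1.356392e+50 Hz per kg, matching the 1.35639...×10⁵⁰ figure above
```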

On a very different note, I think that invoking some statement or a late paper of Einstein in an attempt to add ‘authority’ to some kind of disproof of SRT invokes the wrong kind of authority. 🙂 If you would say Heisenberg or Bohr or Dirac or Feynman or Oppenheimer started doubting SRT near the end of their lives, I’d look up and say: what? Now, no. Einstein had the intellectual honesty to speak up, and speak up rather loudly (cf. him persuading the US President to build the bomb).

As for the compatibility between SRT and GRT and quantum mechanics, the relativistically invariant argument of the wavefunction shows no such incompatibility is there (see Annex II and III of The Zitterbewegung hypothesis and the scattering matrix). Cheers – JL

[…]

Personal conclusion: I think I’ll just stay away from ResearchGate discussions for a while. They are not always good for one’s peace of mind. :-/

All of physics

This five-pager has it: all you ever wanted to know about the Universe. Electron mass and proton mass are seen as input to the model. To the most famous failed experiment in all of classical physics – the 1887 Michelson-Morley experiment, which disproved aether theories and established the absoluteness of lightspeed – we should add the Kamioka Nucleon Decay Experiment, which firmly established that protons do not decay. All the rest is history. 🙂

Post scriptum (26 April): I added another five-pager on fundamental concepts on ResearchGate, which may or may not help to truly understand what might be the case (I am paraphrasing Wittgenstein’s definition of reality here). It is on potentials, and it explains why thinking in terms of neat 1/r or 1/r² functions is not all that helpful: reality is fuzzier than that. Even a simple electrostatic potential may be not very simple. The fuzzy concept of near and far fields remains useful.

I am actually very happy with the paper, because it sort of ‘completes’ my thinking on elementary particles in terms of ring currents. It feels like the first time I truly understand the complementarity/uncertainty principle – and the first time I invoke it to make an argument.

The nuclear force and gauge

I just wrapped up a discussion with some mainstream physicists, producing what I think of as a final paper on the nuclear force. I was struggling with the apparent non-conservative nature of the nuclear potential, but now I have the solution: it is just like an electric dipole field – not spherically symmetric. Nice and elegant.

I can’t help copying the last exchange with one of the researchers. He works at SLAC and seems to believe hydrinos might really exist. It is funny, and then it is not. :-/

Me: “Dear X – That is why I am an amateur physicist and don’t care about publication. I do not believe in quarks and gluons. 😊 Do not worry: it does not prevent me from being happy. JL”

X: “Dear Jean Louis – The whole physics establishment believes that the neutron is composed of three quarks, gluons and a sea of quark-antiquark pairs. How does that fit into your picture? Best regards, X”

Me: “I see the neutron as a tight system between positive and negative electric charge – combining electromagnetic and nuclear force. The ‘proton + electron’ idea is vague. The idea of an elementary particle is confusing in discussions and must be defined clearly: stable, not-reducible, etcetera. Neutrons decay (outside of the nucleus), so they are reducible. I do not agree with Heisenberg on many fronts (especially not his ‘turnaround’ on the essence of the Uncertainty Principle) so I don’t care about who said what – except Schroedinger, who fell out with both Dirac and Heisenberg, I feel. His reason to not show up at the Nobel Prize occasion in 1933 (where Heisenberg received the prize of the year before, and Dirac/Schroedinger the prize of the year itself) was not only practical, I think – but that’s Hineininterpretierung which doesn’t matter in questions like this. JL”

X: “Dear Jean Louis – I want to make doubly sure. Do I understand you correctly that you are saying that the neutron is really a tight system of a proton and an electron? If that is so, it is interesting that Heisenberg, inventor of the uncertainty principle, believed the same thing until 1935 (I have it from Pais’ book). Then the idea died because Pauli’s argument won: the neutron spin 1/2 follows Fermi-Dirac statistics, and this decided that the neutron is indeed an elementary particle. This would be a very hard sell if you now, after so many years, agree with Heisenberg. By the way, I say in my Phys. Lett. B paper, which uses a k₁/r + k₂/r² potential, that the radius of the small hydrogen is about 5.671 Fermi. But this is very sensitive to what potential one is using. Best regards, X.”

The physics of the wave equation

The rather high-brow discussions on deep electron orbitals and hydrinos with a separate set of interlocutors inspired me to write a paper at the K-12 level on wave equations. Too bad Schroedinger does not seem to have left any notes on how he arrived at his wave equation (which I believe to be correct in every way – relativistically correct, too – unlike Dirac’s or others’).

The notes must be somewhere in some unexplored archive. If there are Holy Grails to be found in the history of physics, then these notes are surely one of them. There is a book about a mysterious woman, who might have inspired Schrödinger, but I have not read it, yet: it is on my to-read list. I will prioritize it (read: order it right now). 🙂

Oh – as for the math and physics of the wave equation, you should also check the Annex to the paper: I think the nuclear oscillation can only be captured by a wave equation when using quaternion math (an extension to complex math).

Temperature, RIs, and spectral lines

A sympathetic researcher, Steve Langford, sent me some of his papers, as well as a link to his ResearchGate site, where you will find more. Optical mineralogy is his field. Fascinating work – or, at the very least, a rather refreshing view on the nitty-gritty of actually measuring stuff by gathering huge amounts of data, and then analyzing it in meaningful ways. I learnt a lot of new things already (e.g. kriging, or Gaussian process regression, and novel ways of applying GLM modelling).

Dr. Langford wrote me because he wants to connect his work to more theory – quantum math, and all that. That is not so easy. He finds interesting relations between temperature and refractive indices (RIs), as measured from a single rock sample in Hawaii. The equipment he used is shown below. I should buy that stuff too! I find it amazing one can measure light spectra with nanometer precision with these tools (the dial works in 0.1 nm increments, to be precise). He knows all about Bragg’s Law and crystal structures, toys with statistical and graphical software tools such as JMP and Surfer, and talks about equipping K-12 students with dirt-cheap, modular, computer-connected optical devices and open software tools to automate the data-gathering process. In short, I am slightly jealous of the practical value of his work, and of the peace of mind he must have to do all of this! At the very least, he can say he actually did something in his life! 🙂

Having showered all that praise, I must admit I have no clue about how to connect all of this to quantum effects. All I know about temperature – about what it actually is (vibrational motion of molecules, and of atoms within molecules, with multiple degrees of freedom (n > 3) in that motion) – is based on Feynman’s Lectures (Chapters 40 to 45 of the first volume). Would all that linear, orbital and vibrational motion generate discernible shifts of spectral lines? Moreover, would it do so in the visible-light spectrum (X-rays are usually used – they increase measurement precision – but such equipment is more expensive)? I have no idea.

Or… Well, of course I do have some intuitions. Shifts in frequency spectra are well explained by combining statistics and the Planck-Einstein relation. But can we see quantum physics in the data? In the spectral lines themselves? No. Not really. And so that’s what’s got me hooked. Explaining a general shift of the frequency spectrum and discerning quantum effects in RIs in data sets (analyzing shifts of spectral lines) are two very different things. So how could we go about that?

Energy is surely quantized, and any small difference in energy should translate into small shifts of the frequencies of the spectral lines themselves (as opposed to the general shift of the spectrum as such, which, as mentioned above, is well explained by quantum physics), respecting the Planck-Einstein relation for photons (E = hf). I do not know if anyone has tried to come up with some kind of quantum-mechanical definition of the concept of entropy (I have not googled anything on that, so I expect there must be valuable resources out there). Boltzmann’s constant was re-defined on the occasion of the 2019 revision of the SI system of units, and a careful examination of the rationale of that revision or re-definition should yield deeper insights in this regard, especially because I think that revision firmly anchors what I refer to as a realist interpretation of quantum physics. Thermal radiation at these temperatures is mostly infrared-range radiation, so a 0.1 nm resolution should be enough to capture a shift in spectral lines – if it is there, that is.
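As a back-of-the-envelope check on where the thermal background actually sits at these temperatures, here is a small sketch using Wien’s displacement law and E = hf (the law and the constants are standard textbook values I am assuming here, not something taken from Dr. Langford’s papers):

```python
# Where does the thermal (blackbody) spectrum peak at 30 °C? (Wien's law assumed.)
h = 6.62607015e-34        # Planck's quantum of action [J·s]
c = 299792458.0           # speed of light [m/s]
e = 1.602176634e-19       # elementary charge [C], to convert J to eV

T = 303.15                # 30 °C in kelvin
wien_b = 2.897771955e-3   # Wien displacement constant [m·K]

peak_wavelength = wien_b / T                       # ≈ 9.6e-6 m: infrared, far from the visible range
peak_photon_energy = h * c / peak_wavelength / e   # E = hf, expressed in eV

print(f"peak wavelength ≈ {peak_wavelength * 1e6:.1f} µm")
print(f"peak photon energy ≈ {peak_photon_energy:.2f} eV")   # ≈ 0.13 eV
```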

I need to think on this. As for now, I look at Langford’s work as art, and one of his interests is, effectively, to connect art and science. Let me quote one of his commentaries on one of his images: “Light and matter dance at 30°C, upon what is essentially a Calcium-Silicate substrate through which light and various chemicals flow. Swirling Yin-Yang patterns reminiscent of Solar flares and magnetic lines of force also remind me of fractal patterns.” [My italics.]

He does phrase it very beautifully, doesn’t he? Maybe I will find some deeper meaning in it later. Dr. Langford’s suggestion to re-phrase quantum-mechanical models in terms of Poynting vectors is one that strikes a chord, and there are other ideas there as well. It must be possible to find quantum-mechanical effects by further analyzing, for example, the relation between temperature and RIs – and to use the formal (consistent and complete!) language of quantum mechanics to (also) explain Dr. Langford’s findings. This would conclusively relate the micro-level of quantum physics to the macro-level of crystals (isotropic or anisotropic structures), and it would not require supercooled condensates or massive investments in new accelerator facilities.

It would also provide amateur physicists with a way to discover and verify all by themselves. That would be a great result in itself. 🙂

Post scriptum (27 March): Looking at the papers again, I do not see a shift in spectral lines. Spectral lines correspond to differences between quantized energies in electron orbitals. These are either atomic orbitals or molecular orbitals (valence electrons), and shifts between orbitals correspond to spectral lines in the visible spectrum (Rydberg-scale energies) or, in the case of molecular orbitals, to microwave photons being absorbed or emitted. Temperature just increases the intensity of the photon beams going in and out of the system (the rock sample, in this case), and so it causes a shift of the spectrum, but the lines are what they are: their energy is and remains what it is (E = hf). Of course, the superposition principle tells us the microwave and visual-spectrum energies can combine in what resembles a normal distribution around a mean (which, yes, shifts with temperature alright).

As for the gist of the matter: yes, of course, what Dr. Langford is seeing are quantum-mechanical effects alright.

Post scriptum (9 April 2021): In the preceding week, I found that Dr. Langford seems to find my math too difficult, and that he turns to pseudo-scientists such as Nassim Haramein and contributes to Haramein’s Resonance Science Foundation. I dissociate myself completely from such references and any such associations. Everyone is free to seek inspiration elsewhere, but Haramein’s mystical stories are definitely not my cup of tea.

The Language of Physics

The meaning of life in 15 pages !🙂 [Or… Well… At least a short description of the Universe… Not sure it helps in sense-making.] 🙂

Post scriptum (25 March 2021): Because this post is so extremely short and happy, I want to add a sad anecdote which illustrates what I have come to regard as the sorry state of physics as a science.

A few days ago, an honest researcher put me in cc of an email to a much higher-brow researcher. I won’t reveal names, but the latter – I will call him X – works at a prestigious accelerator lab in the US. The gist of the email was a question on an article of X: “I am still looking at the classical model for the deep orbits. But I have been having trouble trying to determine if the centrifugal and spin-orbit potentials have the same relativistic correction as the Coulomb potential. I have also been having trouble with the Ademko/Vysotski derivation of the Veff = V×E/mc² – V²/2mc² formula.”

I was greatly astonished to see X answer this: “Hello – What I know is that this term comes from the Bethe-Salpeter equation, which I am including (#1). The authors say in their book that this equation comes from the Pauli’s theory of spin. Reading from Bethe-Salpeter’s book [Quantum mechanics of one and two electron atoms]: “If we disregard all but the first three members of this equation, we obtain the ordinary Schroedinger equation. The next three terms are peculiar to the relativistic Schroedinger theory”. They say that they derived this equation from covariant Dirac equation, which I am also including (#2). They say that the last term in this equation is characteristic for the Dirac theory of spin ½ particles. I simplified the whole thing by choosing just the spin term, which is already used for hyperfine splitting of normal hydrogen lines. It is obviously approximation, but it gave me a hope to satisfy the virial theorem. Of course, now I know that using your Veff potential does that also. That is all I know.” [I added the italics/bold in the quote.]

So I see this answer while browsing through my emails on my mobile phone, and I am disgusted – thinking: Seriously? You get to publish in high-brow journals, but so you do not understand the equations, and you just drop terms and pick the ones that suit you to make your theory fit what you want to find? And so I immediately reply to all, politely but firmly: “All I can say, is that I would not use equations which I do not fully understand. Dirac’s wave equation itself does not make much sense to me. I think Schroedinger’s original wave equation is relativistically correct. The 1/2 factor in it has nothing to do with the non-relativistic kinetic energy, but with the concept of effective mass and the fact that it models electron pairs (two electrons – neglect of spin). Andre Michaud referred to a variant of Schroedinger’s equation including spin factors.”

Now X replies this, also from his iPhone: “For me the argument was simple. I was desperate trying to satisfy the virial theorem after I realized that ordinary Coulomb potential will not do it. I decided to try the spin potential, which is in every undergraduate quantum mechanical book, starting with Feynman or Tippler, to explain the hyperfine hydrogen splitting. They, however, evaluate it at large radius. I said, what happens if I evaluate it at small radius. And to my surprise, I could satisfy the virial theorem. None of this will be recognized as valid until one finds the small hydrogen experimentally. That is my main aim. To use theory only as a approximate guidance. After it is found, there will be an explosion of “correct” theories.” A few hours later, he makes things even worse by adding: “I forgot to mention another motivation for the spin potential. I was hoping that a spin flip will create an equivalent to the famous “21cm line” for normal hydrogen, which can then be used to detect the small hydrogen in astrophysics. Unfortunately, flipping spin makes it unstable in all potential configurations I tried so far.”

I have never come across a more blatant case of making a theory fit whatever you want to prove (apparently, X believes Mills’ hydrinos (hypothetical small hydrogen) are not a fraud), and it saddens me deeply. Of course, I do understand one will want to fiddle and modify equations when working on something, but you don’t do that when these things are going to get published by serious journals. Just goes to show how physicists effectively got lost in math, and how ‘peer reviews’ actually work: they don’t. :-/

A simple explanation of quantum-mechanical operators

I added an Annex to a paper that talks about all of the fancy stuff quantum physicists like to talk about, like scattering matrices and high-energy particle events. The Annex, however, is probably my simplest and shortest summary of the ordinariness of wavefunction math, including a quick overview of what quantum-mechanical operators actually are. It does not make use of state vector algebra or the usual high-brow talk about Hilbert spaces and what have you: you only need to know what a derivative is, and combine that with our realist interpretation of what the wavefunction actually represents.

I think I should do a paper on the language of physics, showing how (i) rotations (i, j, k), (ii) scalars (constants or just numerical values), (iii) vectors (real vectors (e.g. position vectors) and pseudovectors (e.g. angular frequency or angular momentum)), and (iv) operators (derivatives of the wavefunction with respect to time and the spatial directions) form ‘words’ (e.g. the energy and momentum operators), and how these ‘words’ then combine into meaningful statements (e.g. Schroedinger’s equation).
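To make the ‘operators are just derivatives’ point concrete, here is a minimal sympy sketch for the elementary wavefunction ψ = a·e^(i(p·x − E·t)/ħ). It is only an illustration of the convention, not a substitute for the Annex:

```python
import sympy as sp

x, t, a, E, p, hbar = sp.symbols('x t a E p hbar', positive=True)
psi = a * sp.exp(sp.I * (p * x - E * t) / hbar)   # elementary wavefunction

E_op = sp.I * hbar * sp.diff(psi, t)    # energy operator: iħ·∂/∂t
p_op = -sp.I * hbar * sp.diff(psi, x)   # momentum operator: -iħ·∂/∂x

print(sp.simplify(E_op / psi))   # E  -> the operator just reads off the energy
print(sp.simplify(p_op / psi))   # p  -> idem for the (linear) momentum
```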

All of physics can then be summed up in a half-page or so. All the rest is thermodynamics 🙂 JL

PS: You only get collapsing wavefunctions when adding uncertainty to the models (i.e. our own uncertainty about the energy and momentum). The ‘collapse’ of the wavefunction (let us be precise, the collapse of the (dissipating) wavepacket) thus corresponds to the ‘measurement’ operation. 🙂

PS2: Incidentally, the analysis also gives an even more intuitive explanation of Einstein’s mass-energy equivalence relation, which I summarize in a reply to one of the many ‘numerologist’ physicists on ResearchGate (copied below).

All of physics…

I just wrapped up my writings on physics (quantum physics) with a few annexes on the (complex) math of it, as well as a paper on how to model unstable particles and (high-energy) particle events. And then a friend of mine sent me this image of the insides of a cell. There is more of it where it came from. Just admit it: it is truly amazing, isn’t it? I suddenly felt a huge sense of wonder – probably because of the gap between the simple logic of quantum physics and this incredibly complex molecular machinery.

I quote: “Seen are Golgi apparatus, mitochondria, endoplasmic reticulum, cell wall, and hundreds of protein structures and membrane-bound organelles. The cell structure is of a Eukaryote cell i.e. a multicellular organism which means it can correspond to the cell structure of humans, dogs, or even fungi and plants.” These images were apparently put together from “X-ray, nuclear magnetic resonance (NMR) and cryoelectron microscopy datasets.”

I think it is one of those moments where it feels great to be human. 🙂

The Nature of Antimatter (dark matter)

The electromagnetic force has an asymmetry: the magnetic field lags the electric field. The phase shift is 90 degrees. We can use complex notation to write the E and B vectors as functions of each other. Indeed, the Lorentz force on a charge is equal to: F = qE + q(v×B). Hence, if we know the electric field E, then we know the magnetic field B: B is perpendicular to E, and its magnitude is 1/c times the magnitude of E. We may, therefore, write:

B = –iE/c

The minus sign in the B = –iE/c expression is there because we need to combine several conventions here. Of course, there is the classical (physical) right-hand rule for E and B, but we also need to combine the right-hand rule for the coordinate system with the convention that multiplication with the imaginary unit amounts to a counterclockwise rotation by 90 degrees. Hence, the minus sign is necessary for the consistency of the description. It ensures that we can associate the a·e^(iE·t/ħ) and a·e^(−iE·t/ħ) functions with left- and right-handed spin (angular momentum), respectively.
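The rotation convention itself is easy to see with plain complex numbers – a two-line sketch, with nothing physical about the numbers chosen:

```python
import cmath

E_phasor = 1 + 0j                          # put the E field phasor along the real axis
B_phasor = -1j * E_phasor / 299792458.0    # B = -iE/c: rotated by -90° and scaled by 1/c

print(cmath.phase(E_phasor))   # 0.0 rad
print(cmath.phase(B_phasor))   # -π/2 rad: multiplication by -i is a clockwise 90° rotation
```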

Now, we can easily imagine an antiforce: an electromagnetic antiforce would have a magnetic field which precedes the electric field by 90 degrees, and we can do the same for the nuclear force (EM and nuclear oscillations are 2D and 3D oscillations, respectively). It is just an application of Occam’s Razor: the mathematical possibilities in the description (notations and equations) must correspond to physical realities, and vice versa (one-to-one). Hence, to describe antimatter, all we have to do is put a minus sign in front of the wavefunction. [Of course, we should also take the opposite of the charge(s) of its matter counterpart, and please note we have a possible plural here (charges) because we think of neutral particles (e.g. neutrons, or neutral mesons) as consisting of opposite charges.] This is just the principle which we already applied when working out the equation for the neutral antikaon (see Annex IV and V of the above-referenced paper):

Don’t worry if you do not understand too much of the equations: we just put them there to impress the professionals. 🙂 The point is this: matter and antimatter are each other’s opposite, literally: the wavefunctions a·e^(iE·t/ħ) and –a·e^(iE·t/ħ) add up to zero, and they correspond to opposite forces too! Of course, we also have light-particles, so we have antiphotons and antineutrinos too.

We think this explains the rather enormous amount of so-called dark matter and dark energy in the Universe (the Wikipedia article on dark matter says it accounts for about 85% of the total mass/energy of the Universe, while the article on the observable Universe puts it at about 95%!). We did not say much about this in our YouTube talk about the Universe, but we think we understand things now. Dark matter is called dark because it does not appear to interact with the electromagnetic field: it does not seem to absorb, reflect or emit electromagnetic radiation and is, therefore, difficult to detect. That should not be a surprise: antiphotons would not be absorbed or emitted by ordinary matter. Only anti-atoms (think of an antihydrogen atom as an antiproton and a positron here) would do so.

So did we explain the mystery? We think so. 🙂

We will conclude with a final remark/question. The opposite spacetime signature of antimatter is, obviously, equivalent to a swap of the real and imaginary axes. This raises the question: can we, perhaps, dispense with the concept of charge altogether? Is geometry enough to understand everything? We are not quite sure how to answer this question, but we do not think so: a positron is a positron, and an electron is an electron – the sign of the charge (positive and negative, respectively) is what distinguishes them! We also think charge is conserved at the level of the charges themselves (see our paper on matter/antimatter pair production and annihilation).

We, therefore, think of charge as the essence of the Universe. But, yes, everything else is sheer geometry! 🙂

The End of Physics?

There are two branches of physics. The nicer branch studies equilibrium states: simple laws, stable particles (electrons and protons, basically), the expanding (oscillating?) Universe, etcetera. This branch includes the study of dynamical systems which we can only describe in terms of probabilities or approximations: think of kinetic gas theory (thermodynamics) or, much simpler, hydrodynamics (the flow of water; Feynman, Vol. II, chapters 40 and 41), about which Feynman writes this:

“The simplest form of the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. You will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.” (Feynman, I-3-7)

Still, we believe first principles do apply to the flow of water through a pipe. The second branch of physics, in contrast, studies non-stable particles: transients (charged kaons and pions, for example) or resonances (very short-lived intermediate energy states). The class of physicists who study these must be commended, but they resemble econometrists modeling input-output relations: if they are lucky, they get some kind of mathematical description of what goes in and what goes out, but the math does not tell them how stuff actually happens. It leads one to think about the difference between a theory, a calculation and an explanation. Simplifying somewhat, we can represent such input-output relations by thinking of a process operating on some state |ψ⟩ to produce some other state |ϕ⟩, which we write like this:

⟨ϕ|A|ψ⟩

A is referred to as a Hermitian matrix if the process is reversible. Reversibility looks like time reversal, which can be represented by taking the complex conjugate: ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩. We put a minus sign in front of the imaginary unit, so we have –i instead of i in the wavefunctions (or i instead of –i, with respect to the usual convention for denoting the direction of rotation). Processes may not be reversible, in which case we talk about symmetry-breaking: CPT-symmetry is always respected so, if T-symmetry (time) is broken, CP-symmetry is broken as well. There is nothing magical about that.
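Just to check the bookkeeping of that complex-conjugate identity, here is a small numpy sketch with a random (purely hypothetical) 2×2 matrix and random states:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.standard_normal(2) + 1j * rng.standard_normal(2)        # |ϕ⟩
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)        # |ψ⟩
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

lhs = np.conj(phi.conj() @ A @ psi)      # ⟨ϕ|A|ψ⟩*
rhs = psi.conj() @ A.conj().T @ phi      # ⟨ψ|A†|ϕ⟩
print(np.allclose(lhs, rhs))             # True, for any A – Hermitian or not
```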

Physicists found the description of these input-output relations can be simplified greatly by introducing quarks (see Annex II of our paper on ontology and physics). Quarks have partial charge and, more generally, mix physical dimensions (mass/energy, spin or (angular) momentum). They create some order – think of it as some kind of taxonomy – in the vast zoo of (unstable) particles, which is great. However, we do not think there was a need to give them some kind of ontological status: unlike plants or insects, partial charges do not exist.

We also think the association between forces and (virtual) particles is misguided. Of course, one might say forces are being mediated by particles (matter- or light-particles), because particles effectively pack energy and angular momentum (light-particles – photons and neutrinos – differ from matter-particles (electrons, protons) in that they carry no charge, but they do carry electromagnetic and/or nuclear energy) and force and energy are, therefore, being transferred through particle reactions, elastically or non-elastically. However, we think it is important to clearly separate the notion of fields and particles: they are governed by the same laws (conservation of charge, energy, and (linear and angular) momentum, and – last but not least – (physical) action) but their nature is very different.

W.E. Lamb (1995), nearing the end of his very distinguished scientific career, wrote about “a comedy of errors and historical accidents”, but we think the business is rather serious: we have reached the End of Science. We have solved Feynman’s U = 0 equation. All that is left, is engineering: solving practical problems and inventing new stuff. That should be exciting enough. 🙂

Post scriptum: I added an Annex (III) to my paper on ontology and physics, with what we think of as a complete description of the Universe. It is abstruse but fun (we hope!): we basically add a description of events to Feynman’s U = 0 (un)worldliness formula. 🙂

Ontology and physics

One sometimes wonders what keeps amateur physicists awake. Why is it that they want to understand quarks and wave equations, or delve into complicated math (perturbation theory, for example)? I believe it is driven by the same human curiosity that drives philosophy. Physics stands apart from other sciences because it examines the smallest of smallest – the essence of things, so to speak.

Unlike other sciences (the human sciences in particular, perhaps), physicists also seek to reduce the number of concepts, rather than multiply them – even if, sadly enough, they do not always do a good job at that. However, generally speaking, physics and math may, effectively, be considered to be the King and Queen of Science, respectively.

The Queen is an eternal beauty, of course, because Her Language may mean anything. Physics, in contrast, talks specifics: physical dimensions (force, distance, energy, etcetera), as opposed to mathematical dimensions – which are mere quantities (scalars and vectors).

Science differs from religion in that it seeks to experimentally verify its propositions. It measures rather than believes. These measurements are cross-checked by a global community and, thereby, establish a non-subjective reality. The question of whether reality exists outside of us, is irrelevant: it is a category mistake (Ryle, 1949). It is like asking why we are here: we just are.

All is in the fundamental equations. An equation relates a measurement to Nature’s constants. Measurements – energy/mass, or velocities – are relative. Nature’s constants do not depend on the frame of reference of the observer and we may, therefore, label them as being absolute. This corresponds to the difference between variables and parameters in equations. The speed of light (c) and Planck’s quantum of action (h) are the parameters in E/m = c² and E = hf, respectively.

Feynman (II-25-6) is right that the Great Law of Nature may be summarized as U = 0, but also that this simple notation just hides the complexity in the definitions of symbols and is, therefore, just a trick. It is like talking of the night “in which all cows are equally black” (Hegel, Phänomenologie des Geistes, Vorrede, 1807). Hence, the U = 0 equation needs to be separated out. I would separate it out as:

We imagine things in 3D space and one-directional time (Lorentz, 1927, and Kant, 1781). The imaginary unit operator (i) represents a rotation in space. A rotation takes time. Its physical dimension is, therefore, s/m or -s/m, as per the mathematical convention in place (Minkowski’s metric signature and counter-clockwise evolution of the argument of complex numbers, which represent the (elementary) wavefunction).

Velocities can be linear or tangential, giving rise to the concepts of linear versus angular momentum. Tangential velocities imply orbitals: circular and elliptical orbitals are closed. Particles are pointlike charges in closed orbitals. We are not sure if non-closed orbitals might correspond to some reality: linear oscillations are field particles, but we do not think of lines as non-closed orbitals. The curvature of real space (the Universe we live in) suggests we should, but we are not sure such thinking is productive (efforts to model gravity as a residual force have failed so far).

Space and time are innate or a priori categories (Kant, 1781). Elementary particles can be modeled as pointlike charges oscillating in space and in time. The concept of charge could be dispensed with if there were no lightlike particles: photons and neutrinos, which carry energy but no charge. The oscillating pointlike charge may have a finite (non-zero) physical dimension, which explains the anomalous magnetic moment of the free (Compton) electron. However, it only appears to have a non-zero dimension when the electromagnetic force is involved (the proton has no anomalous magnetic moment, and its radius is about 3.35 times smaller than the calculated radius of the pointlike charge inside of an electron). Why? We do not know: elementary particles are what they are.

We have two forces: electromagnetic and nuclear. One of the most remarkable things is that E/m = c² holds for both electromagnetic and nuclear oscillations, or combinations thereof (superposition theorem). Combined with the oscillator model (E = m·a²·ω² = mc² and, therefore, c = a·ω), this makes us think of c² as modeling an elasticity or plasticity of space. Why two oscillatory modes only? In 3D space, we can only imagine oscillations in one, two and three dimensions (line, plane, and sphere). The idea of four-dimensional spacetime is not relevant in this context.
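As a numerical aside: combining E = ħω = mc² (Planck-Einstein) with c = a·ω gives a radius a = ħ/(m·c), i.e. the reduced Compton wavelength. A two-line sketch with electron values (CODATA constants assumed):

```python
hbar = 1.054571817e-34     # reduced Planck constant [J·s]
m_e  = 9.1093837015e-31    # electron mass [kg]
c    = 299792458.0         # speed of light [m/s]

omega = m_e * c**2 / hbar  # from E = m·c² = ħ·ω
a = c / omega              # from c = a·ω
print(a)                   # ≈ 3.86e-13 m: the reduced Compton wavelength ħ/(m_e·c)
```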

Photons and neutrinos are linear oscillations and, because they carry no charge, travel at the speed of light. Electrons and muon-electrons (and their antimatter counterparts) are 2D oscillations packing electromagnetic and nuclear energy, respectively. The proton (and antiproton) pack a 3D nuclear oscillation. Neutrons combine positive and negative charge and are, therefore, neutral. Neutrons may or may not combine the electromagnetic and nuclear force: their size (more or less the same as that of the proton) suggests the oscillation is nuclear.  

| | 2D oscillation | 3D oscillation |
|---|---|---|
| electromagnetic force | e± (electron/positron) | orbital electron (e.g. ¹H) |
| nuclear force | μ± (muon-electron/antimuon) | p± (proton/antiproton) |
| composite | pions (π±/π⁰)? | n (neutron)? D⁺ (deuteron)? |
| corresponding field particle | γ (photon) | ν (neutrino) |

The theory is complete: each theoretical/mathematical/logical possibility corresponds to a physical reality, with spin distinguishing matter from antimatter for particles with the same form factor.

When reading this, my kids might call me and ask whether I have gone mad. Their doubts and worry are not random: the laws of the Universe are deterministic (our macro-time scale introduces probabilistic determinism only). Free will is real, however: we analyze and, based on our analysis, we determine the best course to take when taking care of business. Each course of action is associated with an anticipated cost and return. We do not always choose the best course of action because of past experience, habit, laziness or – in my case – an inexplicable desire to experiment and explore new territory.

PS: I’ve written this all out in a paper, of course. 🙂 I also did a 30 minute YouTube video on it. Finally, I got a nice comment from an architect who wrote an interesting paper on wavefunctions and wave equations back in 1996 – including thoughts on gravity.

A Zitterbewegung model of the neutron

As part of my ventures into QCD, I quickly developed a Zitterbewegung model of the neutron, as a complement to my first sketch of a deuteron nucleus. The math of orbitals is interesting: whatever field you have, one can model it using a coupling constant between the proportionality coefficient of the force and the charge it acts on. That ties in nicely with my earlier thoughts on the meaning of the fine-structure constant.
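For reference, the fine-structure constant I am referring to here is just the standard SI combination e²/4πε₀ħc – a quick sketch with CODATA values assumed, nothing model-specific about it:

```python
import math

e    = 1.602176634e-19     # elementary charge [C]
eps0 = 8.8541878128e-12    # electric constant [F/m]
hbar = 1.054571817e-34     # reduced Planck constant [J·s]
c    = 299792458.0         # speed of light [m/s]

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)    # ≈ 0.0072974 and ≈ 137.036
```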

My realist interpretation of quantum physics focuses on explanations involving the electromagnetic force only, but the matter-antimatter dichotomy still puzzles me very much. Also, the idea of virtual particles is no longer anathema to me, but I still want to model them as particle-field interactions and the exchange of real (angular or linear) momentum and energy, with a quantization of momentum and energy obeying the Planck-Einstein law.

The proton model will be key. We cannot explain it with the typical ‘mass without mass’ model of zittering charges: we get a 1/4 factor in the explanation of the proton radius, which is impossible to get rid of unless we assume some ‘strong’ force comes into play. That is why I prioritize a ‘straight’ attack on the electron and the proton-electron bond in a primitive neutron model.

The calculation of forces inside a muon-electron and a proton (see ) is an interesting exercise: it is the only thing which explains why an electron annihilates a positron while electrons and protons can live together (the ‘antimatter’ nature of charged particles only shows because of the opposite spin directions of the fields – so it is only when the ‘structure’ of matter-antimatter pairs is different that they will not annihilate each other).

[…]

In short, 2021 will be an interesting year for me. The intent of my last two papers (on the deuteron model and the primitive neutron model) was to think of energy values: the energy value of the bond between electron and proton in the neutron, and the energy value of the bond between proton and neutron in a deuteron nucleus. But, yes, the more fundamental work remains to be done !

Cheers – Jean-Louis

The electromagnetic deuteron model

In my ‘signing off’ post, I wrote I had enough of physics but that my last(?) ambition was to “contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.” Well… The paper is there. And I am extremely pleased with the result. Thank you, Mr. Meulenberg. You sure have good intuition.

I took the opportunity to revisit Yukawa’s nuclear potential and to demolish his modeling of a new nuclear force without a charge for it to act on. Looking back at the past 100 years of physics history, I now start to think that was the decisive destructive moment in physics: that 1935 paper started off all of the hype about virtual particles, quantum field theory, and a nuclear force that could not possibly be electromagnetic – plus, totally not done, of course, an utter disregard for physical dimensions and for the physical geometry of fields in 3D space or – taking retardation effects into account – 4D spacetime. Fortunately, we have hope: the 2019 fixing of the SI units puts physics firmly back onto the road to reality – or so we hope.

Paolo Di Sia’s and my paper shows one gets very reasonable energies and separation distances for nuclear bonds and inter-nucleon distances when assuming the presence of magnetic and/or electric dipole fields arising from deep electron orbitals. The model shows one of the protons pulling the ‘electron blanket’ from another proton (the neutron) towards its own side so as to create an electric dipole moment. So it is just like a valence electron in a chemical bond. So it is like water, then? Water is a polar molecule, but we do not necessarily need to start with polar configurations when trying to expand this model so as to inject some dynamics into it (spherically symmetric orbitals are probably easier to model). Hmm… Perhaps I need to look at the thermodynamical equations for dry versus wet water once again… Phew! Where to start?

I have no experience – I have very little math, actually – with modeling molecular orbitals. So I should, perhaps, contact a friend from a few years ago – living in Hawaii and pursuing more spiritual matters too – who did just that a long time ago: orbitals using Schroedinger’s wave equation (I think Schroedinger’s equation is relativistically correct – it is just a misinterpretation of the concept of ‘effective mass’ by the naysayers). What kind of wave equation are we looking at? One that integrates inverse-square and inverse-cube force-field laws arising from charges and the dipole moments they create while moving. [Hey! Perhaps we can relate these inverse-square and inverse-cube fields to the second- and third-order terms in the binomial development of the relativistic mass formula (see the section on kinetic energy in my paper on one of Feynman’s more original renderings of Maxwell’s equations) but… Well… Probably best to start by seeing how Feynman got those field equations out of Maxwell’s equations. It is a bit buried in his development of the Liénard and Wiechert equations, which are written in terms of the scalar and vector potentials φ and A instead of the E and B vectors, but it should all work out.]
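To make the ‘second- and third-order terms’ explicit, here is a small sympy expansion of the relativistic mass factor – a math aside only, not a claim about the field structure itself:

```python
import sympy as sp

beta = sp.symbols('beta', positive=True)    # beta = v/c
gamma = 1 / sp.sqrt(1 - beta**2)            # relativistic factor: m = gamma·m0

# 1 + beta²/2 + 3·beta⁴/8 + 5·beta⁶/16 + ...
print(sp.series(gamma, beta, 0, 8))
```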

If the nuclear force is electromagnetic, then these ‘nuclear orbitals’ should respect the Planck-Einstein relation. So then we can calculate frequencies and radii of orbitals now, right? The use of natural units and of imaginary units to represent rotations/orthogonality in space might make calculations easy (B = iE). Indeed, with the 2019 revision of the SI units, I might need to re-evaluate the usefulness of natural units (I always stayed away from them because they ‘hide’ the physics in the math by abstracting away the physical dimensions).

Hey! Perhaps we can model everything with quaternions, using imaginary units (i, j and k) to represent rotations in 3D space so as to ensure consistent application of the appropriate right-hand rules always (special relativity gets added to the mix, so we probably need to relate the (ds)² = (dx)² + (dy)² + (dz)² – (c·dt)² metric to the modified Hamilton expression q = a + ib + jc + kd then). Using vector equations throughout, and thinking of h as a vector (something with a magnitude and a direction) when using the E = hf and h = pλ Planck-Einstein relations, should do the trick, right? [In case you wonder how we can write f as a vector: angular frequency is a vector too. The Planck-Einstein relation is valid for linear as well as circular oscillations: see our paper on the interpretation of the de Broglie wavelength.]
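Here is a minimal, hand-rolled sketch of the quaternion bookkeeping (no library assumed), just to show the non-commutativity that encodes the right-hand rules:

```python
def qmul(p, q):
    """Hamilton product of quaternions written as tuples (a, b, c, d) = a + i·b + j·c + k·d."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))   # (0, 0, 0, 1)  -> i·j = k
print(qmul(j, i))   # (0, 0, 0, -1) -> j·i = -k: the order matters, just like for rotations
```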

Oh – and while special relativity is there because of Maxwell’s equations, gravity (general relativity) should be left out of the picture. Why? Because we would like to explain gravity as a residual very-far-field force. And trying to integrate gravity inevitably leads one to analyze particles as ‘black holes.’ Not nice, philosophically speaking. In fact, any 1/rⁿ field inevitably leads one to think of some kind of black hole at the center, which is why thinking of fundamental particles in terms of ring currents and dipole moments makes so much sense! [We need nothingness and infinity as mathematical concepts (limits, really), but they cannot possibly represent anything real, right?]

The consistent use of the Planck-Einstein law to model these nuclear electron orbitals should probably involve multiples of h to explain their size and energy: E = nhf rather than E = hf. For example, when calculating the radius of an orbital of a pointlike charge with the energy of a proton, one gets a radius that is only 1/4 of the proton radius (0.21 fm instead of 0.82 fm, approximately). To make the radius fit that of a proton, one has to use the E = 4hf relation. Indeed, for the time being, we should probably continue to reject the idea of using fractions of h to model deep electron orbitals. I also think we should avoid superluminal velocity concepts.
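As an illustration of the numbers quoted above (CODATA proton mass assumed), here is the same a = ħ/(m·c) calculation for a pointlike charge with the energy of a proton, with and without the factor 4:

```python
hbar = 1.054571817e-34     # reduced Planck constant [J·s]
m_p  = 1.67262192369e-27   # proton mass [kg]
c    = 299792458.0         # speed of light [m/s]

a = hbar / (m_p * c)       # from E = ħ·ω = m·c² and c = a·ω
print(a * 1e15)            # ≈ 0.21 fm
print(4 * a * 1e15)        # ≈ 0.84 fm: in the ballpark of the measured proton radius
```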

[…]

This post sounds like madness? Yes. And then, no! To be honest, I think of it as one of the better Aha! moments in my life. 🙂

Brussels, 30 December 2020

Post scriptum (1 January 2021): Lots of stuff coming together here! 2021 will definitely see the Grand Unified Theory of Classical Physics becoming somewhat more real. It looks like Mills is going to make a major addition/correction to his electron orbital modeling work and, hopefully, manage to publish the gist of it in the eminent mainstream Nature journal. That makes a lot of sense: to move from an atom to an analysis of nuclei or complex three-particle systems, one should combine singlet and doublet energy states – if only to avoid having to reduce three-body problems to two-body problems. 🙂 I still do not buy the fractional use of Planck’s quantum of action, though. Especially now that we got rid of the concept of a separate ‘nuclear’ charge (there is only one charge: the electric charge, and it comes in two ‘colors’): if Planck’s quantum of action is electromagnetic, then it comes in wholes or multiples. No fractions. Fractional powers of distance functions in field or potential formulas are OK, however. 🙂

The complementarity of wave- and particle-like viewpoints on EM wave propagation

In 1995, W.E. Lamb Jr. wrote the following on the nature of the photon: “There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed. There are very good substitute words for “photon”, (e.g., “radiation” or “light”), and for “photonics” (e.g., “optics” or “quantum optics”). Similar objections are possible to use of the word “phonon”, which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits.”[1]

The opinion of a Nobel Prize laureate carries some weight, of course, but we think the concept of a photon makes sense. As the electron moves from one (potential) energy state to another – from one atomic or molecular orbital to another – it builds an oscillating electromagnetic field which has an integrity of its own and, therefore, is not only wave-like but also particle-like.

We, therefore, dedicated the fifth chapter of our re-write of Feynman’s Lectures to a dual analysis of EM radiation (and, yes, this post is just an announcement of the paper so you are supposed to click the link to read it). It is, basically, an overview of a rather particular expression of Maxwell’s equations which Feynman uses to discuss the laws of radiation. I wonder how to – possibly – ‘transform’ or ‘transpose’ this framework so it might apply to deep electron orbitals and – possibly – proton-neutron oscillations.


[1] W.E. Lamb Jr., Anti-photon, Applied Physics B, volume 60, pages 77–84 (1995).

Hope

Those who read this blog, or my papers, know that the King of Science, physics, is in deep trouble. [In case you wonder, the Queen of Science is math.]

The problem is rather serious: a lack of credibility. It would kill any other business, but things work differently in academics. The question is this: how many professional physicists would admit this? An even more important question is: how many of those who admit this, would try to do something about it?

We hope the proportion of both is increasing – so we can trust that at least the dynamics of all of this are OK. I am hopeful – but I would not bet on it.

Post scriptum: A researcher started a discussion on ResearchGate earlier this year. The question for discussion is this: “In September 2019, the New York Times printed an opinion piece by Sean Carroll titled “Even Physicists Don’t Understand Quantum Mechanics. Worse, they don’t seem to want to understand it.” (https://www.nytimes.com/2019/09/07/opinion/sunday/quantum-physics.html) Is it true that physicists don’t want to understand QM? And if so then why?” I replied this to it:

“Sean Carroll is one of the Gurus who are part of the problem rather than the solution: he keeps peddling approaches that have not worked in the past, and can never be made to work in the future. I am an amateur physicist only, but I have not come across a problem that cannot be solved by ‘old’ quantum physics, i.e. a combination of Maxwell’s equations and the Planck-Einstein relation. The Lamb shift, the anomalous magnetic moment, electron-positron pair creation/annihilation (a nuclear process), the behavior of electrons in semiconductors, superconduction, etc. There is a (neo-)classical solution for everything: no quantum field and/or perturbation theories are needed. Protons and electrons as elementary particles (and neutrons as the bound state of a proton and a nuclear electron), and photons and neutrinos as lightlike particles, carrying electromagnetic and strong field energy respectively. That’s it. Nothing more. Nothing less. Everyone who thinks otherwise is ‘lost in math’, IMNSHO.”

Brutal? Yes. Very much so. The more important question is this: is it true? I cannot know for sure, but it comes across as being truthful to me.

Quantum field theory and pair creation/annihilation

The creation and annihilation of matter-antimatter pairs is usually taken as proof that, somehow, fields can condense into matter-particles or, conversely, that matter-particles can somehow turn into light-particles (photons), which are nothing but traveling electromagnetic fields. However, pair creation always requires the presence of another particle and one may, therefore, legitimately wonder whether the electron and positron were not already present, somehow.

Carl Anderson’s original discovery of the positron involved cosmic rays hitting atmospheric molecules, a process which involves the creation of unstable particles, including pions. Cosmic rays themselves are, unlike what the name suggests, not rays – not like gamma rays, at least – but highly energetic protons and atomic nuclei. Hence, they consist of matter-particles, not of photons. The creation of electron-positron pairs from cosmic rays also involves pions as intermediate particles:

1. The π⁺ and π⁻ particles have a net positive and negative charge of +1 e and −1 e, respectively. According to mainstream theory, this is because they combine a u and a d quark but – abandoning the quark hypothesis[1] – we may want to think their charge could be explained, perhaps, by the presence of an extra positron or electron![2]

2. The neutral pion, in turn, might, perhaps, consist of an electron and a positron, which should annihilate but take some time to do so!

Neutral pions have a much shorter lifetime – in the order of 10⁻¹⁸ s only – than π+ and π− particles, whose lifetime is a much more respectable 2.6×10⁻⁸ s: something you can effectively measure, in other words.[3] In short, despite similar energies, neutral pions do not seem to have a lot in common with π+ and π− particles. Even the energy difference is quite substantial when measured in terms of the electron mass: the neutral pion has an energy of about 135 MeV, while π+ and π− particles have an energy of almost 140 MeV. To be precise, the difference is about 4.6 MeV. That is quite a lot: the electron rest energy is only 0.511 MeV.[4] So it is not stupid to think that π+ and π− particles might carry an extra positron or electron, somehow. In our not-so-humble view, this is as legitimate as thinking – like Rutherford did – that a neutron should, somehow, combine a proton and an electron.[5]
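Just to make these numbers concrete, here is a trivial back-of-the-envelope check in Python – a sketch only, using the PDG rest energies quoted above:

```python
# Back-of-the-envelope check of the pion energies quoted above (PDG values, in MeV).
M_PI_CHARGED = 139.570   # rest energy of the charged pions (pi+ and pi-)
M_PI_NEUTRAL = 134.977   # rest energy of the neutral pion (pi0)
M_ELECTRON = 0.511       # electron rest energy

diff = M_PI_CHARGED - M_PI_NEUTRAL
print(f"energy difference: {diff:.2f} MeV")             # about 4.6 MeV
print(f"in electron masses: {diff / M_ELECTRON:.1f}")   # about 9 electron rest masses
```

The difference amounts to roughly nine electron rest masses, so it dwarfs the rest energy of a single electron or positron.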

The whole analysis – in the QED as well as in the QCD sector of quantum physics – would change radically if we think of neutral particles – such as neutrons and π⁰ particles – not as consisting of quarks but of protons/antiprotons and/or electrons/positrons cancelling each other’s charges out. We have not seen much – if anything – that convinces us this cannot be correct. We, therefore, believe a more realist interpretation of quantum physics should be possible for high-energy phenomena as well. By a more realist theory, we mean one that does not involve quantum field and/or renormalization theory.

Such a new theory would not contradict the principle that, in Nature, the number of charged particles is not conserved, while total (net) charge is conserved, always. Hence, charged particles could appear and disappear, but they would be part of neutral particles. All particles in such processes are very short-lived anyway, so what is a particle here? We should probably think of these things as an unstable combination of various bits and bobs, shouldn’t we? 😊

So, yes, we did a paper on this. And we like it. Have a look: it’s on ResearchGate, academia.edu, and – as usual – Phil Gibbs’s site (which has all of our papers, including our very early ones, which you might want to take with a pinch of salt). 🙂


[1] You may be so familiar with quarks that you do not want to question this hypothesis anymore. If so, let me ask you: where do the quarks go when a π± particle disintegrates into a muon-e±?

[2] They disintegrate into muons (muon-electrons or muon-positrons), which themselves then decay into an electron or a positron respectively.

[3] The Particle Data Group (PDG) point estimate of the lifetime of a neutral pion is about 8.5×10⁻¹⁷ s. Such short lifetimes cannot be measured in a classical sense: such particles are usually referred to as resonances (rather than particles), and the lifetime is calculated from a so-called resonance width. We may discuss this approach in more detail later.

[4] Of course, it is much smaller when compared to the proton (rest) energy, which is about 938 MeV.

[5] See our short history of quantum-mechanical ideas or our paper on protons and neutrons.

The true mystery of quantum physics

In many of our papers, we presented the orbital motion of an electron around a nucleus or inside of a more complicated molecular structure[1], as well as the motion of the pointlike charge inside of an electron itself, as a fundamental oscillation. You will say: what is fundamental and, conversely, what is not? These oscillations are fundamental in the sense that these motions are (1) perpetual or stable and (2) also imply a quantization of space resulting from the Planck-Einstein relation.

Needless to say, this quantization of space looks very different depending on the situation: the order of magnitude of the radius of orbital motion around a nucleus is about 150 times the electron’s Compton radius[2], so, yes, that is very different. However, the basic idea is always the same: a pointlike charge going round and round in a rather regular fashion (otherwise our idea of a cycle time (T = 1/f) and an orbital would make no sense whatsoever), and that oscillation packs a certain amount of energy as well as Planck’s quantum of action (h). In fact, that is just what the Planck-Einstein relation embodies: E = h·f. Frequencies and, therefore, radii and velocities are very different (we think of the pointlike charge inside of an electron as whizzing around at lightspeed, while the order of magnitude of the velocity of the electron in an atomic or molecular orbital is given by the fine-structure constant: v = α·c/n, where n is the principal quantum number, or the shell in the gross structure of an atom), but the underlying equations of motion – as Dirac referred to them – are not fundamentally different.
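To make these orders of magnitude tangible, here is a quick numerical illustration – a sketch only, using the standard CODATA values that ship with scipy.constants, and taking the Compton radius as a = ħ/(m·c), as in our Zitterbewegung (ring current) papers:

```python
import scipy.constants as const

# Orders of magnitude mentioned above, using the CODATA values in scipy.constants.
alpha = const.fine_structure                      # fine-structure constant, ~1/137
a_compton = const.hbar / (const.m_e * const.c)    # Compton radius a = hbar/(m*c)
a_bohr = a_compton / alpha                        # Bohr radius = Compton radius/alpha

print(f"Compton radius:  {a_compton:.3e} m")      # ~3.86e-13 m
print(f"Bohr radius:     {a_bohr:.3e} m")         # ~5.29e-11 m
print(f"ratio 1/alpha:   {1 / alpha:.1f}")        # ~137
print(f"orbital velocity for n = 1: {alpha * const.c:.3e} m/s")   # v = alpha*c
print(f"E = h*f frequency of the electron: {const.m_e * const.c**2 / const.h:.3e} Hz")
```

So the two radii differ by the 1/α ≈ 137 factor, and the orbital velocity for n = 1 is about 2,200 km/s – a small but significant fraction of lightspeed.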

We can look at these oscillations in two very different ways. Most Zitterbewegung theorists (or realist thinkers, I might say) think of it as a self-perpetuating current in an electromagnetic field. David Hestenes is probably the best-known theorist in this class. However, we feel such a view does not satisfactorily answer the quintessential question: what keeps the charge in its orbit? We, therefore, preferred to stick with an alternative model, which we loosely refer to as the oscillator model.

However, truth be told, we are aware this model comes with its own interpretational issues. Indeed, our interpretation of this oscillator model oscillated between the metaphor of a classical (non-relativistic) two-dimensional oscillator (think of a Ducati V2 engine, with the two pistons working in tandem at a 90-degree angle) and the mathematically correct analysis of a (one-dimensional) relativistic oscillator, which we may sum up in the following relativistically correct energy conservation law:

dE/dt = d[kx²/2 + mc²]/dt = 0

More recently, we noted the number of dimensions (think of the number of pistons of an engine) should actually not matter at all: an old-fashioned radial airplane engine has 3, 5, 7, or more cylinders (the odd number has to do with the firing order of a four-stroke engine), but the interplay between those pistons can be analyzed just as well as the ‘sloshing back and forth’ of kinetic and potential energy in a dynamic system (see our paper on the meaning of uncertainty and the geometry of the wavefunction). Hence, it seems any number of springs or pistons working together would do the trick: somehow, linear becomes circular motion, and vice versa. So what number of dimensions should we use for our metaphor, really?
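That ‘any number of pistons will do’ point is easy to check numerically. A minimal sketch of our own (not taken from the paper referred to above): for two pistons at a 90-degree angle, or for any number of evenly spaced pistons, the sum of the cos²-type (kinetic-energy-like) terms is constant over the cycle.

```python
import numpy as np

def summed_kinetic_term(phases, n_samples=1000):
    """Sum of cos^2(theta - phase) over all pistons, sampled over one full cycle."""
    theta = np.linspace(0.0, 2 * np.pi, n_samples)
    return sum(np.cos(theta - p) ** 2 for p in phases)

# Ducati V2 metaphor: two pistons working in tandem at a 90-degree angle.
v2 = summed_kinetic_term([0.0, np.pi / 2])
# Radial-engine metaphor: five pistons evenly spaced around the crank.
radial5 = summed_kinetic_term([2 * np.pi * k / 5 for k in range(5)])

print(f"V2:       min = {v2.min():.6f}, max = {v2.max():.6f}")            # constant 1.0
print(f"radial-5: min = {radial5.min():.6f}, max = {radial5.max():.6f}")  # constant 2.5
```

In both cases the total does not ‘slosh’ at all: only its distribution over the individual pistons does, which is exactly the point.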

We now think the ‘one-dimensional’ relativistic oscillator is the correct mathematical analysis, but we should interpret it more carefully. Look at the dE/dt = d[kx²/2 + mc²]/dt = d(PE + KE)/dt = 0 equation once more.

For the potential energy, one gets the same kx²/2 formula one gets for the non-relativistic oscillator. That is no surprise: potential energy depends on position only, not on velocity, and there is nothing relative about position. However, the (½)m₀v² term that we would get when using the non-relativistic formulation of Newton’s law is now replaced by the mc² = γm₀c² term. Both energies vary – with position and with velocity respectively – but the equation above tells us their sum is a constant. Equating x to 0 (when the velocity v = c) gives us the total energy of the system: E = mc². Just as it should be. 🙂 So how can we reconcile these two models – one two-dimensional but non-relativistic, the other relativistically correct but one-dimensional only? We always get this weird 1/2 factor, and we cannot think it away, so what is it, really?
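A simple numerical check of the energy conservation law may help here. The sketch below is our own, hypothetical illustration: it integrates the relativistic equation of motion dp/dt = −kx, with p = γm₀v, and verifies that the sum of the kx²/2 term and the mc² = γm₀c² term stays constant. Note that we must give the charge a small but non-zero rest mass to be able to integrate at all – the zero-rest-mass limit is discussed further below.

```python
import numpy as np

# Numerical check of dE/dt = d[k*x^2/2 + m*c^2]/dt = 0 for a one-dimensional
# relativistic oscillator: dp/dt = -k*x, with p = gamma*m0*v and m = gamma*m0.
# Units are arbitrary (m0 = c = k = 1); a non-zero rest mass is needed to integrate.
m0, c, k = 1.0, 1.0, 1.0
x, p = 1.0, 0.0          # start at maximum elongation, with zero momentum
dt = 1e-4

energies = []
for _ in range(200_000):                                    # a few full cycles
    gamma = np.sqrt(1.0 + (p / (m0 * c)) ** 2)
    energies.append(0.5 * k * x**2 + gamma * m0 * c**2)     # PE + mc^2
    p -= k * x * dt                                         # dp/dt = -k*x
    gamma = np.sqrt(1.0 + (p / (m0 * c)) ** 2)
    x += (p / (gamma * m0)) * dt                            # dx/dt = v = p/(gamma*m0)

energies = np.array(energies)
print(f"E_min = {energies.min():.6f}, E_max = {energies.max():.6f}")  # both ~1.5
```

The total energy indeed stays put (up to the numerical error of the integrator), whatever the interplay between the two terms.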

We still don’t have a definite answer, but we think we may be closer to the conceptual locus where these two models might meet: the key is to interpret x and v in the equation for the relativistic oscillator as (1) x being the distance measured along an orbital, and (2) v being the tangential velocity of the pointlike charge along this orbital.

Huh? Yes. Read everything slowly and you might see the point. [If not, don’t worry about it too much. This is really a minor (but important) point in my so-called realist interpretation of quantum mechanics.]

If you get the point, you’ll immediately cry foul and say such an interpretation of x as a distance measured along some orbital (as opposed to the linear concept we are used to) and, consequently, thinking of v as some kind of tangential velocity along such an orbital, looks pretty random. However, keep thinking about it, and you will have to admit it is a rather logical way out of the paradox. The formula for the relativistic oscillator assumes a pointlike charge with zero rest mass oscillating between v = 0 and v = c. However, something with zero rest mass will always be associated with some velocity: it cannot be zero! Think of a photon here: how would you slow it down? And you may think we could, perhaps, slow down a pointlike electric charge with zero rest mass in some electromagnetic field but, no! The slightest force on it will give it infinite acceleration according to Newton’s force law. [Admittedly, we would need to distinguish here between its relativistic expression (F = dp/dt) and its non-relativistic expression (F = m₀·a) when further dissecting this statement, but you get the idea. Also note that we are discussing our electron here, in which we do have a zero-rest-mass charge. In an atomic or molecular orbital, we are talking about an electron with a non-zero rest mass: just the mass of the electron whizzing around at a (significant) fraction (α) of lightspeed.]

Hence, it is actually quite rational to argue that the relativistic oscillator cannot be linear: the velocity must be some tangential velocity, always, and – for a pointlike charge with zero rest mass – it must equal lightspeed, always. So, yes, we think this line of reasoning might well be the conceptual locus where the one-dimensional relativistic oscillator (E = m·a²·ω²) and the two-dimensional non-relativistic oscillator (E = 2·m·a²·ω²/2 = m·a²·ω²) could meet. Of course, we welcome the view of any reader here! In fact, if there is a true mystery in quantum physics (we do not think so, but we know people – academics included – like mysterious things), then it is here!

Post scriptum: This is, perhaps, a good place to answer a question I sometimes get: what is so natural about relativity and a constant speed of light? It is not so easy, perhaps, to show why and how Lorentz’ transformation formulas make sense but, in contrast, it is fairly easy to think of the absolute speed of light like this: infinite speeds do not make sense, physically or mathematically. From a physics point of view, the issue is this: something that moves about at an infinite speed is everywhere and, therefore, nowhere. So it doesn’t make sense. Mathematically speaking, you should not think of v reaching infinity, but of the limit of a ratio in which the distance interval goes to infinity while the time interval goes to zero. So, in the limit, we get a division of an infinite quantity by 0. That’s not infinity but an indeterminacy: it is totally undefined! Indeed, mathematicians can easily deal with infinity and zero, but divisions like zero divided by zero, or infinity divided by zero, are meaningless. [Of course, we may have different mathematical functions in the numerator and denominator whose limits yield those values. There is then a reasonable chance we will be able to factor stuff out so as to get something else. We refer to such situations as indeterminate forms, but these are not what we refer to here. The informed reader will, perhaps, also note that the division of infinity by zero does not figure in the list of indeterminacies, but any division by zero is generally considered to be undefined.]
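For readers who like to see the distinction between an indeterminate form and a plain division by zero at work, here is a tiny sympy illustration – nothing more than the math point made above:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# An indeterminate form (0/0) that a limit can resolve by 'factoring stuff out':
print(sp.limit(sp.sin(t) / t, t, 0))   # 1

# A literal division by zero, by contrast, is simply undefined:
# sympy represents it by 'zoo' (complex infinity), i.e. not a number.
print(sp.S(1) / sp.S(0))               # zoo
```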


[1] It may be an extra electron, such as, for example, the electron which jumps from place to place in a semiconductor (see our quantum-mechanical analysis of electric currents). Also, as Dirac first noted, the analysis is also valid for electron holes, in which case our atom or molecule will be positively ionized instead of being neutral or negatively charged.

[2] We say 150 because that is close enough to the 1/α ≈ 137 factor that relates the Bohr radius to the Compton radius of an electron. The reader may not be familiar with the idea of a Compton radius (as opposed to the Compton wavelength), but we refer him or her to our Zitterbewegung (ring current) model of an electron.

Electron propagation in a lattice

It is done! My latest paper on the topic mentioned in the title (available on Phil Gibbs’s site, my ResearchGate page or academia.edu) should conclude my work on the QED sector. It is a thorough exploration of the hitherto mysterious concept of effective mass and all that.

The result I got is actually very nice: my calculation of the order of magnitude of the kb factor in the formula for the energy band (the conduction band, as you may know it) shows that the usual small-angle approximation of the formula does not make all that much sense. This shows that some ‘realist’ thinking about what is what in these quantum-mechanical models does constrain the options: we cannot just multiply wave numbers by some random multiple of π or 2π. These things have a physical meaning!
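For the reader who wants to see what kind of check is involved, here is a minimal sketch, assuming the textbook tight-binding dispersion E(k) = E₀ − 2A·cos(kb) – the formula one finds in, for example, Feynman’s treatment of electron propagation in a crystal lattice. The values of E₀ and A below are placeholders, not the numbers from my paper:

```python
import numpy as np

# Textbook tight-binding energy band E(k) = E0 - 2A*cos(k*b) versus its small-angle
# (parabolic, effective-mass) approximation E0 - 2A + A*(k*b)^2.
# E0 and A are placeholder values; we work directly with the dimensionless product k*b.
E0, A = 0.0, 1.0

kb = np.linspace(0.0, np.pi, 7)        # k*b from the centre to the edge of the band
exact = E0 - 2 * A * np.cos(kb)
parabolic = E0 - 2 * A + A * kb**2

for q, e, pa in zip(kb, exact, parabolic):
    print(f"k*b = {q:4.2f}:  exact = {e:6.3f},  small-angle = {pa:6.3f}")
# The two agree near k*b = 0 but diverge towards the band edge (k*b = pi): the
# small-angle approximation only makes sense near the bottom of the band.
```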

So no multiverses or many worlds, please! One world is enough, and it is nice we can map it to a unique mathematical description.

I should now move on and think about the fun stuff: what is going on in the nucleus and all that? Let’s see where we go from here. Downloads on ResearchGate have been going through the roof lately (a thousand reads on ResearchGate is better than ten thousand on viXra.org, I guess), so it is all very promising. 🙂