A simple explanation of quantum-mechanical operators

I added an Annex to a paper that talks about all of the fancy stuff quantum physicists like to talk about, such as scattering matrices and high-energy particle events. The Annex, however, is probably my simplest and shortest summary of the ordinariness of wavefunction math, including a quick overview of what quantum-mechanical operators actually are. It does not make use of state vector algebra or the usual high-brow talk about Hilbert spaces and what have you: you only need to know what a derivative is, and combine that with our realist interpretation of what the wavefunction actually represents.

I think I should do a paper on the language of physics: show how (i) rotations (i, j, k), (ii) scalars (constants or just numerical values), (iii) vectors (real vectors (e.g. position vectors) and pseudovectors (e.g. angular frequency or momentum)), and (iv) operators (derivatives of the wavefunction with respect to time and the spatial directions) form ‘words’ (e.g. energy and momentum operators), and how these ‘words’ then combine into meaningful statements (e.g. Schroedinger’s equation).

All of physics can then be summed up in a half-page or so. All the rest is thermodynamics 🙂 JL

PS: You only get collapsing wavefunctions when adding uncertainty to the models (i.e. our own uncertainty about the energy and momentum). The ‘collapse’ of the wavefunction (let us be precise, the collapse of the (dissipating) wavepacket) thus corresponds to the ‘measurement’ operation. 🙂

PS2: Incidentally, the analysis also gives an even more intuitive explanation of Einstein’s mass-energy equivalence relation, which I summarize in a reply to one of the many ‘numerologist’ physicists on ResearchGate (copied below).

All you need to know about cosmology…

I just did a short paper with, yes, all you need to know about cosmology. It recapitulates my theory of dark matter (antimatter), how we might imagine the Big Bang (not a single one, probably!), the possibility of an oscillating Universe, possible extraterrestrial life, interstellar communication, and, yes, life itself. It also tries to offer a more intuitive explanation of SRT/GRT based on an analysis of the argument of the quantum-mechanical wavefunction – although it may not come across as being very ‘intuitive’ (my math is, without any doubt, much more intuitive to me than to you – if only because it is a ‘language’ I developed over years!).

I introduced the paper with a rather long comment on one of the ResearchGate discussion threads (Is QM consistent?). I copy it here for the convenience of my readers. 🙂

The concept of ‘dimension’ may well be the single most misunderstood concept in physics. The bare minimum rule to get out of the mess and have fruitful exchanges with other (re)searchers is to clearly distinguish between mathematical and physical dimensions. Physical dimensions are covered by the 2019 revision of SI units, which may well be the most significant consolidation of theory which science has seen over the past hundred years or so (since Einstein’s SRT/GRT theories, in fact). Its definitions (e.g. the definition of the fine-structure constant) – combined with the CODATA values for commonly repeated measurements – sum up all of physics.

A few months before his untimely demise, H.A. Lorentz delivered his last contributions to quantum physics (Solvay Conference, 1927, General Discussion). He did not challenge the new physics, but he did remark that, by not providing a consistent interpretation of the equations (which he did not doubt were true, in the sense of representing scientifically established facts and repeated measurements), it failed to demonstrate a true understanding of what was actually going on. Among various other remarks, he made this one: “We are trying to represent phenomena. We try to form an image of them in our mind. Till now, we always tried to do using the ordinary notions of space and time. These notions may be innate; they result, in any case, from our personal experience, from our daily observations. To me, these notions are clear, and I admit I am not able to have any idea about physics without those notions. The image I want to have when thinking physical phenomena has to be clear and well defined, and it seems to me that cannot be done without these notions of a system defined in space and in time.”

Systems of equations may be reduced or expanded to include fewer or more mathematical (and physical) dimensions, but one has to be able to reduce them to the basic laws of physics (the mass-energy equivalence relation, the relativistically correct expression of Newton’s force law, the Planck-Einstein relation, etcetera), whose dimensions are physical. The real and imaginary part of the wavefunction represent kinetic and potential energy sloshing back and forth in a system, always adding up to the total energy of the system. The sum of the squares of the real and imaginary part gives us the energy density (non-normalized wavefunction) at each point in space or, after normalization, a probability P(r) of finding the electron as a function of the position vector r. The argument of the wavefunction itself is invariant and, therefore, consistent with both SRT and GRT (see Annex I and II of The Finite Universe).

The quantum-mechanical wavefunction is, therefore, the pendant to both the Planck-Einstein relation and the mass-energy equivalence relation. Indeed, everything comes out of the E = h·f and h = p·λ relations and the E = mc² equation (or their reduced forms), combined with Maxwell’s equations written in terms of the scalar and vector potential. The indeterminacy in regard to the position is statistical only: it arises because of the high velocity of the pointlike charge, which makes it impossible to accurately determine its position at any point in time. In other words, the problem is that we are not able to determine the initial condition of the system. If we were able to do so, we would be able to replace the indefinite integrals used to derive and define the quantum-mechanical operators with definite integrals, and we would have a completely defined system. [See: The Meaning of Uncertainty and the Geometry of the Wavefunction.]

Quarks make sense as mathematical form factors only: they reduce the complexity of the scattering matrix, but they are no substitute for a full and consistent application of the conservation and symmetry laws (conservation of energy, linear and angular momentum, physical action, and elementary charge). The quark hypothesis suffers from the same defect or weakness as the one that H.A. Lorentz noted in regard to the Uncertainty Principle, or in regard to 19th-century aether theories. I paraphrase: “The conditions of an experiment are such that, from a practical point of view, we would have indeterminism, but there is no need to elevate indeterminism to a philosophical principle.” Likewise, the elevation of quarks – the belief that these mathematical form factors have some kind of ontological status – may satisfy some kind of deeper religious thirst for knowledge, but that is all there is to it.

Post-WWII developments saw a confluence of (Cold War) politics and scientific dogma – which is not at all unusual in the history of thought, but which has been documented well enough by now to get over it (see: Oliver Consa, February 2020, Something is rotten in the state of QED). Of course, there was also a more innocent driver here, which Feynman writes about rather explicitly: students were no longer choosing physics as a field of study because everything was supposed to be solved in that field, and all that was left was engineering. Hence, Feynman and many others probably did try to re-establish an original sense of mystery and wonder to attract the brightest. As Feynman writes in the epilogue to his Lectures: “The main purpose of my teaching has not been to prepare you for some examination—it was not even to prepare you to serve industry or the military. I [just] wanted most to give you some appreciation of the wonderful world and the physicist’s way of looking at it, which, I believe, is a major part of the true culture of modern times.”

In any case, I think Caltech’s ambitious project to develop an entirely new way of presenting the subject was very successful. I see very few remaining fundamental questions, except – perhaps – the questions related to the nature of electric charge (fractal?), but all other questions mentioned as ‘unsolved problems’ on Wikipedia’s list for physics and cosmology (see: https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_physics), such as the question of dark matter (antimatter), the arrow of time, one-photon Mach-Zehnder interference, the anomaly in the magnetic moment of an electron, etcetera, come across as comprehensible and, therefore, ‘solved’ to me. As such, I repeat what I think of as a logical truth: quantum physics is fully consistent. ‘Numerical’ interpretations of quantum physics (such as SO(4), for example) may not be wrong, but they do not provide me with the kind of understanding I was looking for and finally – after many years of deep questioning of myself and others – have found.

Feynman is right that the Great Law of Nature may be summarized as U = 0 (Lectures, II-25-6), but he also notes this: “This simple notation just hides the complexity in the definitions of symbols: it is just a trick.” It is like talking of “the night in which all cows are equally black” (Hegel, PhÀnomenologie des Geistes, Vorrede, 1807). Hence, the U = 0 equation needs to be separated out. I note a great majority of people on this forum try to do that in a very sensible way, i.e. they are aware that science differs from religion in that it seeks to experimentally verify its propositions: it measures rather than believes, and these measurements are cross-checked by a global community and, thereby, establish a non-subjective reality, of which I feel part. A limited number of searchers may believe their version of truth is more true than mainstream views, but I would suggest they do some more reading before trying to re-invent the wheel.

For the rest, we should heed Wittgenstein’s final philosophical thesis on this forum, I think: “Wovon man nicht sprechen kann, darĂŒber muss man schweigen.” (“Whereof one cannot speak, thereof one must be silent.”) Again, this applies to scientific discourse only, of course. We are all free to publish whatever nonsense we want on other forums. Chances are more people would read me there, but as the scope for some kind of consensus decreases accordingly, I try to refrain from doing so.

PS: To understand relativity theory, one must agree on the notion of ‘synchronized clocks’. Synchronization in the context of SRT does not correspond to the everyday usage of the concept. It is not a matter of making clocks ‘tick’ the same: we must simply assume that the clock that is used to measure the distance from A to B does not move relative to the clock that is used to measure the distance from B to A: clocks that are moving relative to each other cannot be made to tick the same. An observer in the inertial reference frame can only agree to a t = t’ = 0 point (or, as we are talking time, a t = t’ = 0 instant, we should say). From an ontological perspective, this entails both observers can agree on the notion of an infinitesimally small point in space and an infinitesimally small instant of time. Again, these notions are mathematical concepts and do not correspond to the physical concept of the quantization of energy, which is given by the Planck-Einstein relation. But the mathematical or philosophical notion does not come across as problematic to me. Likewise, the idea of instantaneous (momentaneous) momentum may or may not correspond to a physical reality, but I do not think of it as problematic. When everything is said and done, we do need math to describe physical reality. Feynman’s U = 0 (un)worldliness equation is, effectively, like a very black cow in a very dark night: I just cannot ‘see’ it. 🙂 The notion of infinitesimally small time and distance scales is, for me, just like reading the e^(−i·π) = −1 identity, or the e^(i·0) = e^0 = 1 and i² = −1 relations. Interpreting i as a rotation by 90 degrees along the circumference of a circle ensures these notions come across as obvious logical (or mathematical/philosophical) truths. 🙂 What is amazing is that complex numbers describe Nature so well – and that it took mankind so long to find that out! [Remember: Euler was an 18th-century mathematician, and Louis de Broglie a 20th-century physicist so, yes, they are separated by two full centuries!]
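
For readers who want to see these identities ‘move’, here is a minimal Python check (nothing specific to physics, just the standard cmath library): it verifies the three relations quoted above and shows that repeated multiplication by i walks a point around the unit circle in steps of 90 degrees.

```python
import cmath, math

# The identities quoted above: e^(-i·pi) = -1, e^(i·0) = e^0 = 1, i^2 = -1
print(cmath.exp(-1j * math.pi))   # (-1+0j), up to rounding error
print(cmath.exp(0j))              # (1+0j)
print(1j ** 2)                    # (-1+0j)

# Interpreting i as a rotation by 90 degrees: each multiplication by i
# moves a point on the unit circle a quarter turn counterclockwise.
z = 1 + 0j
for step in range(4):
    print(step, z, math.degrees(cmath.phase(z)) % 360)
    z *= 1j
```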

All of physics…

I just wrapped up my writings on physics (quantum physics) with a few annexes on the (complex) math of it, as well as a paper on how to model unstable particles and (high-energy) particle events. And then a friend of mine sent me this image of the insides of a cell. There is more of it where it came from. Just admit it: it is truly amazing, isn’t it? I suddenly felt a huge sense of wonder – probably because of the gap between the simple logic of quantum physics and this incredibly complex molecular machinery.

I quote: “Seen are Golgi apparatus, mitochondria, endoplasmic reticulum, cell wall, and hundreds of protein structures and membrane-bound organelles. The cell structure is of a Eukaryote cell i.e. a multicellular organism which means it can correspond to the cell structure of humans, dogs, or even fungi and plants.” These images were apparently put together from “X-ray, nuclear magnetic resonance (NMR) and cryoelectron microscopy datasets.”

I think it is one of those moments where it feels great to be human. 🙂

The Nature of Antimatter (dark matter)

The electromagnetic force has an asymmetry: the magnetic field lags the electric field. The phase shift is 90 degrees. We can use complex notation to write the E and B vectors as functions of each other. Indeed, the Lorentz force on a charge is equal to: F = qE + q(v×B). Hence, if we know the (electric field) E, then we know the (magnetic field) B: B is perpendicular to E, and its magnitude is 1/c times the magnitude of E. We may, therefore, write:

B = –iE/c

The minus sign in the B = –iE/c expression is there because we need to combine several conventions here. Of course, there is the classical (physical) right-hand rule for E and B, but we also need to combine the right-hand rule for the coordinate system with the convention that multiplication by the imaginary unit amounts to a counterclockwise rotation by 90 degrees. Hence, the minus sign is necessary for the consistency of the description. It ensures that we can associate the a·e^(iE·t/ħ) and a·e^(−iE·t/ħ) functions with left- and right-handed spin (angular momentum), respectively.
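
A small numerical sketch of that statement – assuming, purely for illustration, that we represent the E and B vectors in the plane perpendicular to the direction of propagation as complex numbers (real axis = x, imaginary axis = y). Multiplying by –i/c then yields a B that is 90 degrees out of phase with E and has 1/c times its magnitude:

```python
import cmath

c = 299_792_458.0          # speed of light (m/s)

E = 3.0 + 4.0j             # an arbitrary E vector in the transverse plane (magnitude 5)
B = -1j * E / c            # the convention discussed above: rotate by -90 degrees, scale by 1/c

print(abs(B), abs(E) / c)              # equal: |B| = |E|/c
print(cmath.phase(E) - cmath.phase(B)) # pi/2: E and B are perpendicular (90 degrees apart)
```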

Now, we can easily imagine an antiforce: an electromagnetic antiforce would have a magnetic field which precedes the electric field by 90 degrees, and we can do the same for the nuclear force (EM and nuclear oscillations are 2D and 3D oscillations, respectively). It is just an application of Occam’s Razor: the mathematical possibilities in the description (notations and equations) must correspond to physical realities, and vice versa (one-on-one). Hence, to describe antimatter, all we have to do is to put a minus sign in front of the wavefunction. [Of course, we should also take the opposite of the charge(s) of its matter counterpart, and please note we have a possible plural here (charges) because we think of neutral particles (e.g. neutrons, or neutral mesons) as consisting of opposite charges.] This is just the principle which we already applied when working out the equation for the neutral antikaon (see Annex IV and V of the above-referenced paper):

Don’t worry if you do not understand too much of the equations: we just put them there to impress the professionals. 🙂 The point is this: matter and antimatter are each other’s opposite, literally: the wavefunctions a·e^(iE·t/ħ) and –a·e^(iE·t/ħ) add up to zero, and they correspond to opposite forces too! Of course, we also have light-particles, so we have antiphotons and antineutrinos too.
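
Written out in full (same notation as above), the minus sign is nothing but a 180-degree phase shift, so the matter and antimatter wavefunctions cancel exactly:

```latex
\psi_{\text{matter}}(t) = a\,e^{iEt/\hbar}, \qquad
\psi_{\text{antimatter}}(t) = -a\,e^{iEt/\hbar} = a\,e^{i\left(Et/\hbar + \pi\right)}
\quad\Longrightarrow\quad
\psi_{\text{matter}}(t) + \psi_{\text{antimatter}}(t) = 0 .
```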

We think this explains the rather enormous amount of so-called dark matter and dark energy in the Universe (the Wikipedia article on dark matter says it accounts for about 85% of the total mass/energy of the Universe, while the article on the observable Universe puts it at about 95%!). We did not say much about this in our YouTube talk about the Universe, but we think we understand things now. Dark matter is called dark because it does not appear to interact with the electromagnetic field: it does not seem to absorb, reflect or emit electromagnetic radiation, and is, therefore, difficult to detect. That should not be a surprise: antiphotons would not be absorbed or emitted by ordinary matter. Only anti-atoms (think of an antihydrogen atom as an antiproton and a positron here) would do so.

So did we explain the mystery? We think so. 🙂

We will conclude with a final remark/question. The opposite spacetime signature of antimatter is, obviously, equivalent to a swap of the real and imaginary axes. This raises the question: can we, perhaps, dispense with the concept of charge altogether? Is geometry enough to understand everything? We are not quite sure how to answer this question, but we do not think so: a positron is a positron, and an electron is an electron: the sign of the charge (positive and negative, respectively) is what distinguishes them! We also think charge is conserved at the level of the charges themselves (see our paper on matter/antimatter pair production and annihilation).

We, therefore, think of charge as the essence of the Universe. But, yes, everything else is sheer geometry! 🙂

The End of Physics?

There are two branches of physics. The nicer branch studies equilibrium states: simple laws, stable particles (electrons and protons, basically), the expanding (oscillating?) Universe, etcetera. This branch includes the study of dynamical systems which we can only describe in terms of probabilities or approximations: think of kinetic gas theory (thermodynamics) or, much simpler, hydrodynamics (the flow of water; Feynman, Vol. II, chapters 40 and 41), about which Feynman writes this:

“The simplest form of the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. You will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.” (Feynman, I-3-7)

Still, we believe first principles do apply to the flow of water through a pipe. The second branch of physics, in contrast, studies non-stable particles: transients (charged kaons and pions, for example) or resonances (very short-lived intermediate energy states). The physicists who study these must be commended, but they resemble econometricians modeling input-output relations: if they are lucky, they will get some kind of mathematical description of what goes in and what goes out, but the math does not tell them how stuff actually happens. It leads one to think about the difference between a theory, a calculation and an explanation. Simplifying somewhat, we can represent such input-output relations by thinking of a process A operating on some state |ψ⟩ to produce some other state |ϕ⟩, which we write like this:

⟚ϕ|A|ψ⟩

A is referred to as a Hermitian matrix if the process is reversible. Reversibility looks like time reversal, which can be represented by taking the complex conjugate: ⟚ϕ|A|ψ⟩* = ⟚ψ|A†|ϕ⟩. We put a minus sign in front of the imaginary unit, so we have –i instead of i in the wavefunctions (or i instead of –i with respect to the usual convention for denoting the direction of rotation). Processes may not be reversible, in which case we talk about symmetry-breaking: CPT-symmetry is always respected so, if T-symmetry (time) is broken, CP-symmetry is broken as well. There is nothing magical about that.
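
A quick numerical illustration of that conjugation rule – not tied to any particular physical process, just a random complex matrix and random state vectors in numpy – showing that ⟚ϕ|A|ψ⟩* = ⟚ψ|A†|ϕ⟩ always holds, and that for a Hermitian A the dagger can be dropped:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_state(n):
    """An arbitrary normalized complex state vector."""
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

n = 4
phi, psi = random_state(n), random_state(n)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # an arbitrary 'process' matrix

amplitude = phi.conj() @ A @ psi                  # <phi|A|psi>
reversed_ = psi.conj() @ A.conj().T @ phi         # <psi|A†|phi>
print(np.isclose(amplitude.conj(), reversed_))    # True: conjugation swaps in- and out-states

H = (A + A.conj().T) / 2                          # a Hermitian matrix (H = H†)
print(np.isclose((phi.conj() @ H @ psi).conj(), psi.conj() @ H @ phi))   # True
```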

Physicists found the description of these input-output relations can be simplified greatly by introducing quarks (see Annex II of our paper on ontology and physics). Quarks have partial charge and, more generally, mix physical dimensions (mass/energy, spin or (angular) momentum). They create some order – think of it as some kind of taxonomy – in the vast zoo of (unstable) particles, which is great. However, we do not think there was a need to give them some kind of ontological status: unlike plants or insects, partial charges do not exist.

We also think the association between forces and (virtual) particles is misguided. Of course, one might say forces are being mediated by particles (matter- or light-particles), because particles effectively pack energy and angular momentum (light-particles – photons and neutrinos – differ from matter-particles (electrons, protons) in that they carry no charge, but they do carry electromagnetic and/or nuclear energy) and force and energy are, therefore, being transferred through particle reactions, elastically or non-elastically. However, we think it is important to clearly separate the notion of fields and particles: they are governed by the same laws (conservation of charge, energy, and (linear and angular) momentum, and – last but not least – (physical) action) but their nature is very different.

W.E. Lamb (1995), nearing the end of his very distinguished scientific career, wrote about “a comedy of errors and historical accidents”, but we think the business is rather serious: we have reached the End of Science. We have solved Feynman’s U = 0 equation. All that is left, is engineering: solving practical problems and inventing new stuff. That should be exciting enough. 🙂

Post scriptum: I added an Annex (III) to my paper on ontology and physics, with what we think of as a complete description of the Universe. It is abstruse but fun (we hope!): we basically add a description of events to Feynman’s U = 0 (un)worldliness formula. 🙂

On the Universe and Aliens

I was a bit bored today (Valentine’s Day but no Valentine playing for me), and so I did a video on the Universe and (the possibility of) Life elsewhere. It is simple (I managed to limit it to 40 minutes!) but it deals with all of the Big Questions: fundamental forces and distance scales; the geometric approach to gravity and the curvature of the Universe; Big Bang(s) and – who knows? – an oscillating Universe; and, yes, Life here and, perhaps, elsewhere. Enjoy ! The corresponding paper is available on ResearchGate.

PS: I’ve also organized my thoughts on quarks in a (much more) moderate annex to my paper on ontology and physics. Quite a productive Valentine’s Day – despite the absence of a Valentina ! 🙂 JL

Ontology and physics

One sometimes wonders what keeps amateur physicists awake. Why is it that they want to understand quarks and wave equations, or delve into complicated math (perturbation theory, for example)? I believe it is driven by the same human curiosity that drives philosophy. Physics stands apart from other sciences because it examines the smallest of smallest – the essence of things, so to speak.

Unlike other sciences (the human sciences in particular, perhaps), physicists also seek to reduce the number of concepts rather than multiply them – even if, sadly enough, they do not always do a good job at that. However, generally speaking, physics and math may, effectively, be considered to be the King and Queen of Science, respectively.

The Queen is an eternal beauty, of course, because Her Language may mean anything. Physics, in contrast, talks specifics: physical dimensions (force, distance, energy, etcetera), as opposed to mathematical dimensions – which are mere quantities (scalars and vectors).

Science differs from religion in that it seeks to experimentally verify its propositions. It measures rather than believes. These measurements are cross-checked by a global community and, thereby, establish a non-subjective reality. The question of whether reality exists outside of us, is irrelevant: it is a category mistake (Ryle, 1949). It is like asking why we are here: we just are.

All is in the fundamental equations. An equation relates a measurement to Nature’s constants. Measurements – energy/mass, or velocities – are relative. Nature’s constants do not depend on the frame of reference of the observer and we may, therefore, label them as being absolute. This corresponds to the difference between variables and parameters in equations. The speed of light (c) and Planck’s quantum of action (h) are parameters in the E/m = c² and E = h·f equations, respectively.

Feynman (II-25-6) is right that the Great Law of Nature may be summarized as U = 0, but he notes that “this simple notation just hides the complexity in the definitions of symbols: it is just a trick.” It is like talking of the night “in which all cows are equally black” (Hegel, PhÀnomenologie des Geistes, Vorrede, 1807). Hence, the U = 0 equation needs to be separated out. I would separate it out as:

We imagine things in 3D space and one-directional time (Lorentz, 1927, and Kant, 1781). The imaginary unit operator (i) represents a rotation in space. A rotation takes time. Its physical dimension is, therefore, s/m or -s/m, as per the mathematical convention in place (Minkowski’s metric signature and counter-clockwise evolution of the argument of complex numbers, which represent the (elementary) wavefunction).

Velocities can be linear or tangential, giving rise to the concepts of linear versus angular momentum. Tangential velocities imply orbitals: circular and elliptical orbitals are closed. Particles are pointlike charges in closed orbitals. We are not sure if non-closed orbitals might correspond to some reality: linear oscillations are field particles, but we do not think of lines as non-closed orbitals. The curvature of real space (the Universe we live in) suggests we should, but we are not sure such thinking is productive (efforts to model gravity as a residual force have failed so far).

Space and time are innate or a priori categories (Kant, 1781). Elementary particles can be modeled as pointlike charges oscillating in space and in time. The concept of charge could be dispensed with if there were no lightlike particles: photons and neutrinos, which carry energy but no charge. The oscillating charge is pointlike but may have a finite (non-zero) physical dimension, which explains the anomalous magnetic moment of the free (Compton) electron. However, it only appears to have a non-zero dimension when the electromagnetic force is involved (the proton has no anomalous magnetic moment, and its radius is about 3.35 times smaller than the calculated radius of the pointlike charge inside an electron). Why? We do not know: elementary particles are what they are.

We have two forces: electromagnetic and nuclear. One of the most remarkable things is that the E/m = c² relation holds for both electromagnetic and nuclear oscillations, or combinations thereof (superposition theorem). Combined with the oscillator model (E = m·a²·ω² = m·c² and, therefore, c = a·ω), this makes us think of c² as modeling an elasticity or plasticity of space. Why two oscillatory modes only? In 3D space, we can only imagine oscillations in one, two and three dimensions (line, plane, and sphere). The idea of four-dimensional spacetime is not relevant in this context.
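
Spelling out the little derivation hiding in that parenthesis (with a the radius/amplitude of the oscillation and ω its angular frequency):

```latex
E = m\,a^2\omega^2 = m\,c^2
\quad\Longrightarrow\quad
a^2\omega^2 = c^2
\quad\Longrightarrow\quad
c = a\,\omega ,
```

i.e. c can be read as the tangential velocity a·ω of the orbiting pointlike charge.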

Photons and neutrinos are linear oscillations and, because they carry no charge, travel at the speed of light. Electrons and muon-electrons (and their antimatter counterparts) are 2D oscillations packing electromagnetic and nuclear energy, respectively. The proton (and the antiproton) packs a 3D nuclear oscillation. Neutrons combine positive and negative charge and are, therefore, neutral. Neutrons may or may not combine the electromagnetic and the nuclear force: their size (more or less the same as that of the proton) suggests the oscillation is nuclear.

|                              | 2D oscillation              | 3D oscillation               |
|------------------------------|-----------------------------|------------------------------|
| electromagnetic force        | e± (electron/positron)      | orbital electron (e.g.: ¹H)  |
| nuclear force                | Ό± (muon-electron/antimuon) | p± (proton/antiproton)       |
| composite                    | pions (π±/π⁰)?              | n (neutron)? D⁺ (deuteron)?  |
| corresponding field particle | Îł (photon)                  | Μ (neutrino)                 |

The theory is complete: each theoretical/mathematical/logical possibility corresponds to a physical reality, with spin distinguishing matter from antimatter for particles with the same form factor.

When reading this, my kids might call me and ask whether I have gone mad. Their doubts and worry are not random: the laws of the Universe are deterministic (our macro-time scale introduces probabilistic determinism only). Free will is real, however: we analyze and, based on our analysis, we determine the best course to take when taking care of business. Each course of action is associated with an anticipated cost and return. We do not always choose the best course of action because of past experience, habit, laziness or – in my case – an inexplicable desire to experiment and explore new territory.

PS: I’ve written this all out in a paper, of course. 🙂 I also did a 30 minute YouTube video on it. Finally, I got a nice comment from an architect who wrote an interesting paper on wavefunctions and wave equations back in 1996 – including thoughts on gravity.

A Zitterbewegung model of the neutron

As part of my ventures into QCD, I quickly developed a Zitterbewegung model of the neutron, as a complement to my first sketch of a deuteron nucleus. The math of orbitals is interesting. Whatever field you have, one can model it using a coupling constant between the proportionality coefficient of the force and the charge it acts on. That ties in nicely with my earlier thoughts on the meaning of the fine-structure constant.

My realist interpretation of quantum physics focuses on explanations involving the electromagnetic force only, but the matter-antimatter dichotomy still puzzles me very much. Also, the idea of virtual particles is no longer anathema to me, but I still want to model them as particle-field interactions and the exchange of real (angular or linear) momentum and energy, with a quantization of momentum and energy obeying the Planck-Einstein law.

The proton model will be key. We cannot explain it with the typical ‘mass without mass’ model of zittering charges: we get a 1/4 factor in the explanation of the proton radius, which is impossible to get rid of unless we assume some ‘strong’ force comes into play. That is why I prioritize a ‘straight’ attack on the electron and the proton-electron bond in a primitive neutron model.

The calculation of the forces inside a muon-electron and a proton is an interesting exercise: it is the only thing which explains why an electron annihilates a positron while electrons and protons can live together (the ‘anti-matter’ nature of charged particles only shows in the opposite spin directions of the fields – so it is only when the ‘structure’ of matter-antimatter pairs is different that they will not annihilate each other).

[…]

In short, 2021 will be an interesting year for me. The intent of my last two papers (on the deuteron model and the primitive neutron model) was to think of energy values: the energy value of the bond between electron and proton in the neutron, and the energy value of the bond between proton and neutron in a deuteron nucleus. But, yes, the more fundamental work remains to be done !

Cheers – Jean-Louis

The electromagnetic deuteron model

In my ‘signing off’ post, I wrote I had enough of physics but that my last(?) ambition was to “contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.” Well… The paper is there. And I am extremely pleased with the result. Thank you, Mr. Meulenberg. You sure have good intuition.

I took the opportunity to revisit Yukawa’s nuclear potential and demolish his modeling of a new nuclear force without a charge to act on. Looking back at the past 100 years of physics history, I now start to think that was the decisive destructive moment in physics: that 1935 paper, which started off all of the hype on virtual particles, quantum field theory, and a nuclear force that could not possibly be electromagnetic plus – totally not done, of course ! – utter disregard for physical dimensions and the physical geometry of fields in 3D space or – taking retardation effects into account – 4D spacetime. Fortunately, we have hope: the 2019 fixing of SI units puts physics firmly back onto the road to reality – or so we hope.

Paolo Di Sia‘s and my paper shows one gets very reasonable energies and separation distances for nuclear bonds and inter-nucleon distances when assuming the presence of magnetic and/or electric dipole fields arising from deep electron orbitals. The model shows one of the protons pulling the ‘electron blanket’ of the other proton (the neutron) towards its own side so as to create an electric dipole moment. So it is just like a valence electron in a chemical bond. So it is like water, then? Water is a polar molecule, but we do not necessarily need to start with polar configurations when trying to expand this model so as to inject some dynamics into it (spherically symmetric orbitals are probably easier to model). Hmm… Perhaps I need to look at the thermodynamical equations for dry versus wet water once again… Phew ! Where to start?

I have no experience – I have very little math, actually – with modeling molecular orbitals. So I should, perhaps, contact a friend from a few years ago – living in Hawaii and pursuing more spiritual matters too – who did just that a long time ago: orbitals using Schroedinger’s wave equation (I think Schroedinger’s equation is relativistically correct – the ‘effective mass’ concept is just misinterpreted by the naysayers). What kind of wave equation are we looking at? One that integrates inverse-square and inverse-cube force-field laws arising from charges and the dipole moments they create while moving. [Hey! Perhaps we can relate these inverse-square and inverse-cube fields to the second- and third-order terms in the binomial development of the relativistic mass formula (see the section on kinetic energy in my paper on one of Feynman’s more original renderings of Maxwell’s equations) but… Well… Probably best to start by seeing how Feynman got those field equations out of Maxwell’s equations. It is a bit buried in his development of the LiĂ©nard and Wiechert equations, which are written in terms of the scalar and vector potentials φ and A instead of the E and B vectors, but it should all work out.]

If the nuclear force is electromagnetic, then these ‘nuclear orbitals’ should respect the Planck-Einstein relation. So we can calculate frequencies and radii of orbitals now, right? The use of natural units and imaginary units to represent rotations/orthogonality in space might make calculations easy (B = –iE/c, or B = –iE in natural units). Indeed, with the 2019 revision of SI units, I might need to re-evaluate the usefulness of natural units (I always stayed away from them because they ‘hide’ the physics in the math by abstracting away the physical dimensions).

Hey ! Perhaps we can model everything with quaternions, using imaginary units (i and j) to represent rotations in 3D space so as to ensure consistent application of the appropriate right-hand rules always (special relativity gets added to the mix, so we probably need to relate the (ds)² = (dx)² + (dy)² + (dz)² – (c·dt)² line element to the modified Hamilton quaternion q = a + ib + jc – kd then). Using vector equations throughout and thinking of h as a vector when using the E = h·f and h = p·λ Planck-Einstein relations (something with a magnitude and a direction) should do the trick, right? [In case you wonder how we can write f as a vector: angular frequency is a vector too. The Planck-Einstein relation is valid for linear as well as circular oscillations: see our paper on the interpretation of the de Broglie wavelength.]
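
Not the full relativistic program, obviously, but here is a minimal sketch of the quaternion algebra being alluded to – just the Hamilton product, showing that the i·j = k right-hand rule is built into the algebra and that conjugation by a unit quaternion rotates vectors in 3D (all numbers below are illustrative only):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])

print(qmul(i, j))   # [0, 0, 0, 1] = k : the right-hand rule is built in
print(qmul(j, i))   # [0, 0, 0, -1] = -k : order matters (non-commutative)
print(qmul(i, i))   # [-1, 0, 0, 0] : i^2 = -1, just like the complex imaginary unit

# Rotating a vector by 90 degrees about the z-axis: v' = q v q*, with
# q = cos(theta/2) + sin(theta/2)·k (half-angle convention).
theta = np.pi / 2
q  = np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])
qc = q * np.array([1.0, -1.0, -1.0, -1.0])   # quaternion conjugate
v  = np.array([0.0, 1.0, 0.0, 0.0])          # the x unit vector as a pure quaternion
print(qmul(qmul(q, v), qc))                  # ~[0, 0, 1, 0]: x rotated onto y
```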

Oh – and while special relativity is there because of Maxwell’s equations, gravity (general relativity) should be left out of the picture. Why? Because we would like to explain gravity as a residual very-far-field force, and trying to integrate gravity inevitably leads one to analyze particles as ‘black holes.’ Not nice, philosophically speaking. In fact, any 1/rⁿ field inevitably leads one to think of some kind of black hole at the center, which is why thinking of fundamental particles in terms of ring currents and dipole moments makes so much sense! [We need nothingness and infinity as mathematical concepts (limits, really), but they cannot possibly represent anything real, right?]

The consistent use of the Planck-Einstein law to model these nuclear electron orbitals should probably involve multiples of h to explain their size and energy: E = nhf rather than E = hf. For example, when calculating the radius of an orbital of a pointlike charge with the energy of a proton, one gets a radius that is only 1/4 of the proton radius (0.21 fm instead of 0.82 fm, approximately). To make the radius fit that of a proton, one has to use the E = 4hf relation. Indeed, for the time being, we should probably continue to reject the idea of using fractions of h to model deep electron orbitals. I also think we should avoid superluminal velocity concepts.
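
A quick back-of-the-envelope check of that 1/4 factor, assuming the radius in question is the reduced Compton radius a = ħ/(m·c) = ħ·c/E of the ring-current model (the specific numbers below are just the usual CODATA values):

```python
hbar_c   = 197.3269804    # ħ·c in MeV·fm (CODATA)
E_proton = 938.2720813    # proton rest energy in MeV (CODATA)

a = hbar_c / E_proton     # radius for E = h·f with c = a·ω, i.e. n = 1
print(round(a, 3))        # ~0.210 fm: roughly 1/4 of the measured proton radius

print(round(4 * a, 3))    # ~0.841 fm: the E = 4·h·f assumption scales the radius up by 4
```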

[…]

This post sounds like madness? Yes. And then, no! To be honest, I think of it as one of the better Aha! moments in my life. 🙂

Brussels, 30 December 2020

Post scriptum (1 January 2021): Lots of stuff coming together here ! 2021 will definitely see the Grand Unified Theory of Classical Physics becoming somewhat more real. It looks like Mills is going to make a major addition/correction to his electron orbital modeling work and, hopefully, manage to publish the gist of it in the eminent mainstream Nature journal. That makes a lot of sense: to move from an atom to an analysis of nuclei or complex three-particle systems, one should combine singlet and doublet energy states – if only to reduce three-body problems to two-body problems. 🙂 I still do not buy the fractional use of Planck’s quantum of action, though. Especially now that we have got rid of the concept of a separate ‘nuclear’ charge (there is only one charge: the electric charge, and it comes in two ‘colors’): if Planck’s quantum of action is electromagnetic, then it comes in wholes or multiples. No fractions. Fractional powers of distance functions in field or potential formulas are OK, however. 🙂

The complementarity of wave- and particle-like viewpoints on EM wave propagation

In 1995, W.E. Lamb Jr. wrote the following on the nature of the photon: “There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed. There are very good substitute words for “photon”, (e.g., “radiation” or “light”), and for “photonics” (e.g., “optics” or “quantum optics”). Similar objections are possible to use of the word “phonon”, which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits.”[1]

The opinion of a Nobel Prize laureate carries some weight, of course, but we think the concept of a photon makes sense. As the electron moves from one (potential) energy state to another – from one atomic or molecular orbital to another – it builds an oscillating electromagnetic field which has an integrity of its own and, therefore, is not only wave-like but also particle-like.

We, therefore, dedicated the fifth chapter of our re-write of Feynman’s Lectures to a dual analysis of EM radiation (and, yes, this post is just an announcement of the paper so you are supposed to click the link to read it). It is, basically, an overview of a rather particular expression of Maxwell’s equations which Feynman uses to discuss the laws of radiation. I wonder how to – possibly – ‘transform’ or ‘transpose’ this framework so it might apply to deep electron orbitals and – possibly – proton-neutron oscillations.


[1] W.E. Lamb Jr., Anti-photon, in: Applied Physics B volume 60, pages 77–84 (1995).

Low-energy nuclear reactions

I thought I should stop worrying about physics, but then I got an impromptu invitation to a symposium on low-energy nuclear reactions (LENR) and I got all excited about it. The field of LENR was, and still is, often referred to as cold fusion which, after initial enthusiasm, got a not-so-good name because of… More than one reason, really. Read the Wikipedia article on it, or just google and read some other blog articles (e.g. Scientific American’s guest blog on the topic is a pretty good one, I think).

The presentations were very good (especially those on the experimental results and the recent involvement of some very respectable institutions in addition to the usual suspects and, sadly, some fly-by-night operators too), and the follow-on conversation with one of the co-organizers convinced me that the researchers are serious, open-minded and – while not quite being able to provide all of the answers we are all seeking – very ready to discuss them seriously. Most, if not all, experiments involve transmutations of nuclei triggered by low-energy inputs such as low-energy radiation (the irradiation and transmutation of palladium by, say, a now-household 5 mW laser beam is just one example). One experiment even triggered a current just by adding plain heat which, as you know, is nothing but very low-energy (infrared) radiation, although I must admit this is one I would like to see replicated en masse before believing it to be real (the equipment was small and simple, and so the experimenters could easily have shared it with other labs).

When looking at these experiments, the comparison that comes to mind is that of an opera singer shattering crystal with his or her voice: some frequency in the sound causes the material to resonate at, yes, its resonant frequency (most probably an enormous but integer multiple of the sound frequency), and then the energy builds up – like when you give a child on a swing an extra push at just the right moment – as the amplitude becomes larger and larger, till the breaking point is reached. Another comparison is the failure of a suspension bridge when external vibrations (think of the rather proverbial soldier regiment here) cause similar resonance phenomena. So, yes, it is not unreasonable to believe that one could induce neutron decay – and, thereby, release the binding energy between the proton and the electron in the process – by some low-energy stimulation, provided the frequencies are harmonic.
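
The swing analogy is easy to make concrete. The toy script below (not a model of any LENR experiment, just an undamped driven oscillator integrated step by step) shows how a weak drive applied exactly at the resonant frequency lets the amplitude grow and grow, while the same drive off resonance does almost nothing:

```python
import math

w0, F = 1.0, 0.01            # natural frequency and a (weak) driving amplitude
dt, steps = 0.001, 200_000   # time step and number of steps (200 s of simulated time)

def peak_amplitude(w):
    """Integrate x'' + w0^2·x = F·cos(w·t) and return the largest |x| reached."""
    x, v, peak = 0.0, 0.0, 0.0
    for n in range(steps):
        a = -w0 * w0 * x + F * math.cos(w * n * dt)
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

print(peak_amplitude(w0))        # driven at resonance: amplitude keeps building up (~1.0)
print(peak_amplitude(1.5 * w0))  # driven off resonance: the response stays tiny (~0.02)
```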

The problem with the comparison – and for the LENR idea to be truly useful – is this: one cannot see any net production of energy here. The strain or stress that builds up in the crystal glass is a strain induced by the energy in the sound wave (which is why the singing demos usually include amplifiers to attain the required power/amplitude ratio, i.e. the required decibels). In addition, the breaking of crystal or a suspension bridge typically involves a weaker link somewhere, or some directional aspect (so that would be the equivalent of an impurity in a crystal structure, I guess), but that is a minor point, and a point that is probably easier to tackle than the question on the energy equation.

LENR research has probably advanced far enough now (the first series of experiments started in 1989) to slowly start focusing on the whole chain of these successful experiments: what is the equivalent, in these low-energy reactions, of the nuclear fuel in high-energy fission or fusion experiments? And, if it can be clearly identified, the researchers need to show that the energy that goes into the production of this fuel is much less than the energy you get out of it by burning it (and, of course, with ‘burning’ I mean the decay reaction here). [In case you have heard about Randell Mills’ hydrino experiments, he should show the emission spectrum of these hydrinos. Otherwise, one might think he is literally burning hydrogen. Attracting venture capital and providing scientific proof are not mutually exclusive, are they? In the meanwhile, I hope that what he is showing is real, in the way all LENR researchers hope it is real.]

LENR research may also usefully focus on getting the fundamental theory right. The observed anomalous heat and/or transmutation reactions cannot be explained by mainstream quantum physics (I am talking QCD here, so that’s QFT, basically). That should not surprise us: one does not need quarks or gluons to explain high-energy nuclear processes such as fission or fusion, either! My theory is, of course, typically simplistically simple: the energy that is being unlocked is just the binding energy between the nuclear electron and the protons, in the neutron itself or in a composite nucleus, the simplest of which is the deuteron nucleus. I talk about that in my paper on matter-antimatter pair creation/annihilation as a nuclear process but you do not need to be an adept of classical or realist interpretations of quantum mechanics to understand this point. To quote a motivational writer here: it is OK for things to be easy. 🙂

So LENR theorists just need to accept they are not mainstream – yet, that is – and come out with a more clearly articulated theory on why their stuff works the way it does. For some reason I do not quite understand, they come across as somewhat hesitant to do so. Fears of being frozen out even more by the mainstream? Come on guys ! You are coming out of the cold anyway, so why not be bold and go all the way? It is a time of opportunities now, and the field of LENR is one of them, both theoretically as well as practically speaking. I honestly think it is one of those rare moments in the history of physics where experimental research may be well ahead of theoretical physics, so they should feel like proud trailblazers!

Personally, I do not think it will replace big classical nuclear energy plants anytime soon but, in a not-so-distant future, it might yield many very useful small devices: lower energy and, therefore, lower risk also. I also look forward to LENR research dealing the fatal blow to standard theory by confirming we do not need perturbation and renormalization theories to explain reality. 🙂

Post scriptum: If low-energy nuclear reactions are real, mainstream (astro)physicists will also have to rework their stories on cosmogenesis and the (future) evolution of the Universe. The standard story may well be summed up in the brief commentary of the HyperPhysics entry on the deuteron nucleus:

The stability of the deuteron is an important part of the story of the universe. In the Big Bang model it is presumed that in early stages there were equal numbers of neutrons and protons since the available energies were much higher than the 0.78 MeV required to convert a proton and electron to a neutron. When the temperature dropped to the point where neutrons could no longer be produced from protons, the decay of free neutrons began to diminish their population. Those which combined with protons to form deuterons were protected from further decay. This is fortunate for us because if all the neutrons had decayed, there would be no universe as we know it, and we wouldn’t be here!

If low-energy nuclear reactions are real – and I think they are – then the standard story about the Big Bang is obviously bogus too. I am not necessarily doubting the reality of the Big Bang itself (the ongoing expansion of the Universe is a scientific fact so, yes, the Universe must have been much smaller and (much) more energy-dense a long time ago), but the standard calculations on proton-neutron reactions taking place, or not, at cut-off temperatures/energies above/below 0.78 MeV do not make sense anymore. One should, perhaps, think more in terms of how matter-antimatter ratios might or might not have evolved (and, of course, one should keep an eye on the electron-proton ratio, but that should work itself out because of charge conservation) to correctly calculate the early evolution of the Universe, rather than focusing so much on proton-neutron ratios.

Why do I say that? Because neutrons do appear to consist of a proton and an electron – rather than of quarks and gluons – and they continue to decay and then recombine again, so these proton-neutron reactions must not be thought of as some historic (discontinuous) process.

[
] Hmm
 The more I look at the standard stories, the more holes I see
 This one, however, is very serious. If LENR and/or cold fusion is real, then it will also revolutionize the theories on cosmogenesis (the evolution of the Universe). I instinctively like that, of course, because – just as with quantization – I had the impression the discontinuities are there, but not quite in the way mainstream physicists – thinking more in terms of quarks and gluons than in terms of stuff that we can actually measure – portray the whole show.

Signing off…

I have been exploring the weird wonderland of physics for over seven years now. On several occasions, I thought I should just stop. It was rewarding, but terribly exhausting at times as well! I am happy I did not give up, if only because I finally managed to come up with a more realist interpretation of the ‘mystery’ of matter-antimatter pair production/annihilation. So, yes, I think I can confidently state I finally understand physics the way I want to understand it. It was an extraordinary journey, and I am happy I could share it with many fellow searchers (300 posts and 300,000 hits on my first website now, 10,000+ downloads of papers (including the downloads from Phil Gibb’s site and academia.edu) and, better still, lots of interesting conversations).

One of these conversations was with a fine nuclear physicist, Andrew Meulenberg. We were in touch on the idea of a neutron (some kind of combination of a proton and a ‘nuclear’ electron—following up on Rutherford’s original idea, basically). More importantly, we chatted about, perhaps, developing a model for the deuterium nucleus (deuteron)—the hydrogen isotope which consists of a proton and a neutron. However, I feel I need to let go here, if only because I do not think I have the required mathematical skills for a venture like this. I feel somewhat guilty of letting him down. Hence, just in case someone out there feels he could contribute to this, I am copying my last email to him below. It sort of sums up my basic intuitions in terms of how one could possibly approach this.

Can it be done? Maybe. Maybe not. All I know is that not many have been trying since Bohr’s young wolves hijacked scientific discourse after the 1927 Solvay Conference and elevated a mathematical technique – perturbation theory – to the scientific dogma which is now referred to as quantum field theory.

So, yes, now I am really signing off. Thanks for reading me, now or in the past—I wrote my first post here about seven years ago! I hope it was not only useful but enjoyable as well. Oh—And please check out my YouTube channel on Physics ! 🙂

From: Jean Louis Van Belle
Sent: 14 November 2020 17:59
To: Andrew Meulenberg
Subject: Time and energy…

These things are hard
You are definitely much smarter with these things than I can aspire to

But I do have ideas. We must analyze the proton in terms of a collection of infinitesimally small charges – just like Feynman’s failed assembly of the electron (https://www.feynmanlectures.caltech.edu/II_28.html#Ch28-S3): it must be possible to do this, and it will give us the equivalent of electromagnetic mass for the strong force. The assembly of the proton out of infinitesimally small charge bits will work because the proton is, effectively, massive – unlike an electron, which effectively appears as a ‘cloud’ of charge and, therefore, has several radii and, yes, can pass through the nucleus and also ‘envelops’ a proton when forming a neutron with it.

I cannot offer much in terms of analytical skills here. All of quantum physics – the new model of a hydrogen atom – grew out of the intuition of a young genius (Louis de Broglie) and a seasoned mathematical physicist (Erwin Schroedinger) finding a mathematical equation for it. That model is valid still – we just need to add spin from the outset (cf. the plus/minus sign of the imaginary unit) and acknowledge the indeterminacy in it is just statistical, but these are minor things.

I have not looked at your analysis of a neutron as a (hyper-)excited state of the hydrogen atom yet, but it must be correct: what else can it be? It is what Rutherford said it should be when he first hypothesized the existence of a neutron.

I do not know how much time I want to devote to this (to be honest, I am totally sick of academic physics) but – whatever time I have – I want to contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.

JL

Hope

Those who read this blog, or my papers, know that the King of Science, physics, is in deep trouble. [In case you wonder, the Queen of Science is math.]

The problem is rather serious: a lack of credibility. It would kill any other business, but things work differently in academia. The question is this: how many professional physicists would admit this? An even more important question is: how many of those who admit this would try to do something about it?

We hope the proportion of both is increasing – so we can trust that at least the dynamics of all of this are OK. I am hopeful – but I would not bet on it.

Post scriptum: A researcher started a discussion on ResearchGate earlier this year. The question for discussion is this: “In September 2019, the New York Times printed an opinion piece by Sean Carroll titled “Even Physicists Don’t Understand Quantum Mechanics. Worse, they don’t seem to want to understand it.” (https://www.nytimes.com/2019/09/07/opinion/sunday/quantum-physics.html) Is it true that physicists don’t want to understand QM? And if so, then why?” I replied this to it:

“Sean Carroll is one of the Gurus that is part of the problem rather than the solution: he keeps peddling approaches that have not worked in the past, and can never be made to work in the future. I am an amateur physicist only, but I have not come across a problem that cannot be solved by ‘old’ quantum physics, i.e. a combination of Maxwell’s equations and the Planck-Einstein relation: the Lamb shift, the anomalous magnetic moment, electron-positron pair creation/annihilation (a nuclear process), the behavior of electrons in semiconductors, superconduction, etc. There is a (neo-)classical solution for everything: no quantum field and/or perturbation theories are needed. Protons and electrons as elementary particles (and neutrons as the bound state of a proton and a nuclear electron), and photons and neutrinos as lightlike particles, carrying electromagnetic and strong field energy respectively. That’s it. Nothing more. Nothing less. Everyone who thinks otherwise is ‘lost in math’, IMNSHO.”

Brutal? Yes. Very much so. The more important question is this: is it true? I cannot know for sure, but it comes across as being truthful to me.

Quantum field theory and pair creation/annihilation

The creation and annihilation of matter-antimatter pairs is usually taken as proof that, somehow, fields can condense into matter-particles or, conversely, that matter-particles can somehow turn into light-particles (photons), which are nothing but traveling electromagnetic fields. However, pair creation always requires the presence of another particle and one may, therefore, legitimately wonder whether the electron and positron were not already present, somehow.

Carl Anderson’s original discovery of the positron involved cosmic rays hitting atmospheric molecules, a process which involves the creation of unstable particles, including pions. Cosmic rays themselves are, unlike what the name suggests, not rays – not like gamma rays, at least – but highly energetic protons and atomic nuclei. Hence, they consist of matter-particles, not of photons. The creation of electron-positron pairs from cosmic rays also involves pions as intermediate particles:

1. The π+ and π− particles carry a net positive and negative charge of +1 e and −1 e respectively. According to mainstream theory, this is because they combine a u and an anti-d quark (or a d and an anti-u quark) but – abandoning the quark hypothesis[1] – we may want to think their charge could be explained, perhaps, by the presence of a positron or an electron![2]

2. The neutral pion, in turn, might, perhaps, consist of an electron and a positron, which should annihilate but take some time to do so!

Neutral pions have a much shorter lifetime – in the order of 10⁻¹⁷ s only – than π+ and π− particles, whose lifetime is a much more respectable 2.6×10⁻⁸ s. Something you can actually measure, in other words.[3] In short, despite similar energies, neutral pions do not seem to have a lot in common with π+ and π− particles. Even the energy difference is quite substantial when measured in terms of the electron mass: the neutral pion has an energy of about 135 MeV, while π+ and π− particles have an energy of almost 140 MeV. To be precise, the difference is about 4.6 MeV. That is quite a lot: the electron rest energy is only 0.511 MeV.[4] So it is not stupid to think that π+ and π− particles might carry an extra positron or electron, somehow. In our not-so-humble view, this is as legitimate as thinking – like Rutherford did – that a neutron should, somehow, combine a proton and an electron.[5]
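Just to make the arithmetic explicit, here is a trivial check of the numbers quoted above (the pion values are rounded PDG values; the snippet is an illustration only):

```python
# Back-of-the-envelope check of the pion energy difference quoted above (rounded PDG values).
m_pi_charged = 139.57  # MeV, rest energy of the charged pions
m_pi_neutral = 134.98  # MeV, rest energy of the neutral pion
m_electron   = 0.511   # MeV, rest energy of the electron

delta = m_pi_charged - m_pi_neutral
print(f"Energy difference: {delta:.2f} MeV")                   # about 4.6 MeV
print(f"In electron rest energies: {delta / m_electron:.1f}")  # roughly 9
```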

The whole analysis – in the QED as well as in the QCD sector of quantum physics – would change radically if we think of neutral particles – such as neutrons and π0 particles – not as consisting of quarks but of protons/antiprotons and/or electrons/positrons cancelling each other’s charge out. We have not seen much – if anything – which convinces us this cannot be correct. We, therefore, believe a more realist interpretation of quantum physics should be possible for high-energy phenomena as well. By a more realist theory, we mean one that does not involve quantum field and/or renormalization theories.

Such a theory would be consistent with the principle that, in Nature, the number of charged particles is not conserved, while total (net) charge always is. Hence, charged particles could appear and disappear, but they would be part of neutral particles. All particles in such processes are very short-lived anyway, so what is a particle here? We should probably think of these things as unstable combinations of various bits and bobs, shouldn’t we? 😊

So, yes, we did a paper on this. And we like it. Have a look: it is on ResearchGate, academia.edu, and – as usual – Phil Gibbs’ site (which has all of our papers, including our very early ones, which you might want to take with a pinch of salt). 🙂


[1] You may be so familiar with quarks that you do not want to question this hypothesis anymore. If so, let me ask you: where do the quarks go when a π± particle disintegrates into a muon (μ±)?

[2] They disintegrate into muons (muon-electrons or muon-positrons), which themselves then decay into an electron or a positron respectively.

[3] The Particle Data Group (PDG) point estimate of the lifetime of a neutral pion is about 8.5×10⁻¹⁷ s. Such short lifetimes cannot be measured in a classical sense: such particles are usually referred to as resonances (rather than particles), and the lifetime is calculated from a so-called resonance width. We may discuss this approach in more detail later.
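For the reader who wants to see how that works, here is a minimal sketch of the lifetime-from-width calculation (τ = ħ/Γ); the width value we plug in – roughly 7.8 eV for the neutral pion – is meant to be illustrative only:

```python
# Minimal sketch: lifetime from resonance width, tau = hbar / Gamma.
hbar_eV_s = 6.582119569e-16  # eV·s, reduced Planck constant
Gamma_eV  = 7.8              # eV, (illustrative) resonance width of the neutral pion

tau = hbar_eV_s / Gamma_eV
print(f"tau ≈ {tau:.2e} s")  # of the order of 8.4e-17 s, consistent with the PDG estimate
```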

[4] Of course, it is much smaller when compared to the proton (rest) energy, which is about 938 MeV.

[5] See our short history of quantum-mechanical ideas or our paper on protons and neutrons.

The true mystery of quantum physics

In many of our papers, we presented the orbital motion of an electron around a nucleus or inside of a more complicated molecular structure[1], as well as the motion of the pointlike charge inside of an electron itself, as a fundamental oscillation. You will say: what is fundamental and, conversely, what is not? These oscillations are fundamental in the sense that these motions are (1) perpetual or stable and (2) also imply a quantization of space resulting from the Planck-Einstein relation.

Needless to say, this quantization of space looks very different depending on the situation: the order of magnitude of the radius of orbital motion around a nucleus is about 150 times the electron’s Compton radius[2] so, yes, that is very different. However, the basic idea is always the same: a pointlike charge going round and round in a rather regular fashion (otherwise our idea of a cycle time (T = 1/f) and an orbital would make no sense whatsoever), and that oscillation then packs a certain amount of energy as well as Planck’s quantum of action (h). In fact, that is just what the Planck-Einstein relation embodies: E = h·f. Frequencies and, therefore, radii and velocities are very different: we think of the pointlike charge inside of an electron as whizzing around at lightspeed, while the order of magnitude of the velocity of an electron in an atomic or molecular orbital is given by the fine-structure constant (v = α·c/n, where n is the principal quantum number, or the shell in the gross structure of an atom). However, the underlying equations of motion – as Dirac referred to them – are not fundamentally different.
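To make those orders of magnitude tangible, here is a minimal numerical sketch (rounded CODATA values; the variable names are ours and the snippet is an illustration, not part of any formal derivation):

```python
# Orders of magnitude of the oscillations discussed above (rounded CODATA values).
import math

h     = 6.62607015e-34    # J·s, Planck's quantum of action
hbar  = h / (2 * math.pi)
c     = 299792458.0       # m/s, lightspeed
m_e   = 9.1093837015e-31  # kg, electron rest mass
alpha = 0.0072973525693   # fine-structure constant

E_e = m_e * c**2            # electron rest energy
f   = E_e / h               # Planck-Einstein frequency: E = h·f
r_C = hbar / (m_e * c)      # Compton radius (reduced Compton wavelength)
r_B = r_C / alpha           # Bohr radius
v_1 = alpha * c             # orbital velocity for n = 1 (v = α·c/n)

print(f"f = {f:.3e} Hz")                # ~1.24e20 Hz
print(f"Compton radius = {r_C:.3e} m")  # ~3.86e-13 m
print(f"Bohr radius    = {r_B:.3e} m")  # ~5.29e-11 m, i.e. about 137 times larger
print(f"v = {v_1:.3e} m/s")             # ~2.19e6 m/s, i.e. α·c
```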

We can look at these oscillations in two very different ways. Most Zitterbewegung theorists (or realist thinkers, I might say) think of it as a self-perpetuating current in an electromagnetic field. David Hestenes is probably the best-known theorist in this class. However, we feel such a view does not satisfactorily answer the quintessential question: what keeps the charge in its orbit? We, therefore, prefer to stick with an alternative model, which we loosely refer to as the oscillator model.

However, truth be told, we are aware this model comes with its own interpretational issues. Indeed, our interpretation of this oscillator model has oscillated between the metaphor of a classical (non-relativistic) two-dimensional oscillator (think of a Ducati V2 engine, with the two pistons working in tandem at a 90-degree angle) and the mathematically correct analysis of a (one-dimensional) relativistic oscillator, which we may sum up in the following relativistically correct energy conservation law:

dE/dt = d[kx²/2 + mc²]/dt = 0

More recently, we noted the number of dimensions (think of the number of pistons of an engine) should actually not matter at all: an old-fashioned radial airplane engine has 3, 5, 7, or more cylinders (the odd number has to do with the firing order of four-stroke engines), but the interplay between those pistons can be analyzed just as well as a ‘sloshing back and forth’ of kinetic and potential energy in a dynamic system (see our paper on the meaning of uncertainty and the geometry of the wavefunction). Hence, it seems any number of springs or pistons working together would do the trick: somehow, linear motion becomes circular motion, and vice versa. So what number of dimensions should we use for our metaphor, really?

We now think the ‘one-dimensional’ relativistic oscillator is the correct mathematical analysis, but we should interpret it more carefully. Look at the dE/dt = d[kx²/2 + mc²]/dt = d(PE + KE)/dt = 0 equation once more.

For the potential energy, one gets the same kx²/2 formula one gets for the non-relativistic oscillator. That is no surprise: potential energy depends on position only, not on velocity, and there is nothing relative about position. However, the ½·m0·v² term that we would get when using the non-relativistic formulation of Newton’s law is now replaced by the mc² = γ·m0·c² term. Both energies vary – with position and with velocity respectively – but the equation above tells us their sum is some constant. Equating x to 0 (which is when the velocity v equals c) gives us the total energy of the system: E = mc². Just as it should be. 🙂 So how can we now reconcile these two models? One is two-dimensional but non-relativistic, and the other is relativistically correct but one-dimensional only. We always get this weird 1/2 factor! And we cannot think it away, so what is it, really?

We still don’t have a definite answer, but we think we may be closer to the conceptual locus where these two models might meet: the key is to interpret x and v in the equation for the relativistic oscillator as (1) the distance traveled along an orbital and (2) the tangential velocity of the pointlike charge along that orbital, respectively.

Huh? Yes. Read everything slowly and you might see the point. [If not, don’t worry about it too much. This is really a minor (but important) point in my so-called realist interpretation of quantum mechanics.]

If you get the point, you’ll immediately cry wolf and say such an interpretation of x as a distance measured along some orbital (as opposed to the linear concept we are used to) and, consequently, thinking of v as some kind of tangential velocity along such an orbital, looks pretty random. However, keep thinking about it, and you will have to admit it is a rather logical way out of the paradox. The formula for the relativistic oscillator assumes a pointlike charge with zero rest mass oscillating between v = 0 and v = c. However, something with zero rest mass will always be associated with some velocity: it cannot be zero! Think of a photon here: how would you slow it down? And you may think we could, perhaps, slow down a pointlike electric charge with zero rest mass in some electromagnetic field but, no! The slightest force on it would give it infinite acceleration according to Newton’s force law. [Admittedly, we would need to distinguish here between its relativistic expression (F = dp/dt) and its non-relativistic expression (F = m0·a) when further dissecting this statement, but you get the idea. Also note that we are discussing our electron here, in which we do have a zero-rest-mass charge. In an atomic or molecular orbital, we are talking about an electron with a non-zero rest mass: just the mass of the electron whizzing around at a (significant) fraction (α) of lightspeed.]

Hence, it is actually quite rational to argue that the relativistic oscillator cannot be linear: the velocity must be some tangential velocity, always, and – for a pointlike charge with zero rest mass – it must equal lightspeed, always. So, yes, we think this line of reasoning might well be the conceptual locus where the one-dimensional relativistic oscillator (E = m·a²·ω²) and the two-dimensional non-relativistic oscillator (E = 2·m·a²·ω²/2 = m·a²·ω²) could meet. Of course, we welcome the views of any reader here! In fact, if there is a true mystery in quantum physics (we do not think so, but we know people – academics included – like mysterious things), then it is here!
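For what it is worth, here is a small numerical sketch of the energy bookkeeping behind the two-dimensional (V2) metaphor: two identical linear oscillators, 90 degrees out of phase, together pack a constant energy E = m·a²·ω² – twice the m·a²·ω²/2 of a single linear oscillator. The values are arbitrary and purely illustrative:

```python
# Two linear oscillators, 90 degrees out of phase: the total energy is constant and
# equal to m·a²·ω², i.e. twice the m·a²·ω²/2 of a single (one-dimensional) oscillator.
import math

m, a, omega = 1.0, 1.0, 2.0  # arbitrary illustrative values
k = m * omega**2             # spring constant, from ω² = k/m

for t in [0.0, 0.3, 0.7, 1.1]:
    x1, v1 = a * math.cos(omega * t), -a * omega * math.sin(omega * t)  # piston 1
    x2, v2 = a * math.sin(omega * t),  a * omega * math.cos(omega * t)  # piston 2 (90° ahead)
    E1 = 0.5 * k * x1**2 + 0.5 * m * v1**2  # PE + KE sloshing back and forth in piston 1
    E2 = 0.5 * k * x2**2 + 0.5 * m * v2**2  # PE + KE sloshing back and forth in piston 2
    print(f"t = {t:.1f}: E1 + E2 = {E1 + E2:.6f}")  # always m·a²·ω² = 4.0 here
```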

Post scriptum: This is, perhaps, a good place to answer a question I sometimes get: what is so natural about relativity and a constant speed of light? It is not so easy, perhaps, to show why and how Lorentz’ transformation formulas make sense but, in contrast, it is fairly easy to think of the absolute speed of light like this: infinite speeds do not make sense, physically or mathematically. From a physics point of view, the issue is this: something that moves about at an infinite speed is everywhere and, therefore, nowhere. So it does not make sense. Mathematically speaking, you should not think of v reaching infinity but of the limit of a ratio in which the distance interval goes to infinity while the time interval goes to zero. So, in the limit, we get a division of an infinite quantity by 0. That is not infinity but an indeterminacy: it is totally undefined! Indeed, mathematicians can easily deal with infinity and zero, but divisions like zero divided by zero, or infinity divided by zero, are meaningless. [Of course, we may have different mathematical functions in the numerator and denominator whose limits yield those values. There is then a reasonable chance we will be able to factor stuff out so as to get something else. We refer to such situations as indeterminate forms, but these are not what we refer to here. The informed reader will, perhaps, also note that the division of infinity by zero does not figure in the list of indeterminate forms, but any division by zero is generally considered to be undefined.]
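For those who like to see that point spelled out, here is a tiny SymPy sketch: a 0/0 form has no value of its own – the limit depends entirely on the functions that produce the zeros. The example is ours and purely illustrative:

```python
# Illustration of indeterminate forms: the 'value' of a 0/0 limit depends on the functions.
import sympy as sp

t = sp.symbols('t', positive=True)
print(sp.limit((3 * t) / t, t, 0, dir='+'))  # 3  : numerator and denominator vanish at the same rate
print(sp.limit(t**2 / t, t, 0, dir='+'))     # 0  : numerator vanishes faster
print(sp.limit(t / t**2, t, 0, dir='+'))     # oo : denominator vanishes faster
```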


[1] It may be an extra electron, such as, for example, the electron which jumps from place to place in a semiconductor (see our quantum-mechanical analysis of electric currents). Also, as Dirac first noted, the analysis is also valid for electron holes, in which case our atom or molecule will be positively ionized instead of being neutral or negatively charged.

[2] We say 150 because that is close enough to the 1/α ≈ 137 factor that relates the Bohr radius to the Compton radius of an electron. The reader may not be familiar with the idea of a Compton radius (as opposed to the Compton wavelength), but we refer him or her to our Zitterbewegung (ring current) model of an electron.

Electron propagation in a lattice

It is done! My last paper on the topic mentioned in the title (available on Phil Gibbs’ site, my ResearchGate page, or academia.edu) should conclude my work on the QED sector. It is a thorough exploration of the hitherto mysterious concept of the effective mass and all that.

The result I got is actually very nice: my calculation of the order of magnitude of the k·b factor in the formula for the energy band (the conduction band, as you may know it) shows that the usual small-angle approximation of the formula does not make all that much sense. This shows that some ‘realist’ thinking about what is what in these quantum-mechanical models does constrain the options: we cannot just multiply wave numbers by some random multiple of π or 2π. These things have a physical meaning!
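To illustrate the point about the small-angle approximation, here is a rough sketch assuming the usual tight-binding form of the energy band, E(k) = E0 − 2A·cos(k·b), as in Feynman’s lecture on propagation in a crystal lattice. The numbers are purely illustrative and are not the actual calculation in the paper:

```python
# The parabolic (effective-mass) approximation of E(k) = E0 - 2A·cos(k·b) is only good
# for k·b << 1; near the edge of the band (k·b ~ π) it is way off.
import math

E0, A = 0.0, 1.0  # illustrative values (energies in units of A)
for kb in [0.1, 0.5, 1.0, 2.0, math.pi]:
    exact  = E0 - 2 * A * math.cos(kb)
    approx = E0 - 2 * A + A * kb**2  # from cos(x) ≈ 1 - x²/2
    print(f"k·b = {kb:.2f}: exact = {exact:+.3f}, small-angle approximation = {approx:+.3f}")
```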

So no multiverses or many worlds, please! One world is enough, and it is nice we can map it to a unique mathematical description.

I should now move on and think about the fun stuff: what is going on in the nucleus and all that? Let’s see where we go from here. Downloads on ResearchGate have been going through the roof lately (a thousand reads on ResearchGate is better than ten thousand on viXra.org, I guess), so it is all very promising. 🙂

Understanding semiconductors, lasers and other technical stuff

I wrote a lot of papers but most of them – if not all – deal with very basic stuff: the meaning of uncertainty (just statistical indeterminacy because we have no information on the initial condition of the system), the Planck-Einstein relation (how Planck’s quantum of action models an elementary cycle or an oscillation), and Schrödinger’s wavefunctions (the solutions to his equation) as the equations of motion for a pointlike charge. If anything, I hope I managed to restore a feeling that quantum electrodynamics is not essentially different from classical physics: it just adds the element of a quantization – of energy, momentum, magnetic flux, etcetera.

Importantly, we also talked about what photons and electrons actually are, and that electrons are pointlike but not dimensionless: their magnetic moment results from an internal current and, hence, spin is something real – something we can explain in terms of a two-dimensional perpetual current. In the process, we also explained why electrons take up some space: they have a radius (the Compton radius). So that explains the quantization of space, if you want.
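As a quick sanity check on that claim, here is a one-liner-style sketch: a pointlike charge whizzing around at lightspeed at the Compton radius produces a magnetic moment equal to q·ħ/2m, i.e. the Bohr magneton. Rounded CODATA values; the snippet only illustrates the ring-current idea, it is not a full derivation:

```python
# Ring-current sketch: a charge circling at lightspeed at the Compton radius gives a
# magnetic moment mu = I·(loop area) = q·c·r/2 = q·hbar/(2m), i.e. the Bohr magneton.
import math

q    = 1.602176634e-19   # C, elementary charge
c    = 299792458.0       # m/s, lightspeed
m_e  = 9.1093837015e-31  # kg, electron rest mass
hbar = 1.054571817e-34   # J·s

r_C = hbar / (m_e * c)             # Compton radius
I   = q * c / (2 * math.pi * r_C)  # current = charge × orbital frequency
mu  = I * math.pi * r_C**2         # current × loop area
print(f"mu ≈ {mu:.4e} J/T")        # ≈ 9.274e-24 J/T, the Bohr magneton
```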

We also talked about fields and told you – because matter-particles do have a structure – that we should have a dynamic view of the fields surrounding them. Potential barriers – or their corollary, potential wells – should, therefore, not be thought of as static fields: they result from one or more charges moving around, and these fields, therefore, vary in time. Hence, a particle breaking through a ‘potential wall’ or coming out of a potential ‘well’ is just using an opening, so to speak, which corresponds to a classical trajectory.

We, therefore, have the guts to say that some of what you will read in a standard textbook is plain nonsense. Richard Feynman, for example, starts his lecture on a current in a crystal lattice by writing this: “You would think that a low-energy electron would have great difficulty passing through a solid crystal. The atoms are packed together with their centers only a few angstroms apart, and the effective diameter of the atom for electron scattering is roughly an angstrom or so. That is, the atoms are large, relative to their spacing, so that you would expect the mean free path between collisions to be of the order of a few angstroms—which is practically nothing. You would expect the electron to bump into one atom or another almost immediately. Nevertheless, it is a ubiquitous phenomenon of nature that if the lattice is perfect, the electrons are able to travel through the crystal smoothly and easily—almost as if they were in a vacuum. This strange fact is what lets metals conduct electricity so easily; it has also permitted the development of many practical devices. It is, for instance, what makes it possible for a transistor to imitate the radio tube. In a radio tube electrons move freely through a vacuum, while in the transistor they move freely through a crystal lattice.” [The italics are mine.]

It is nonsense because it is not the electron that is traveling smoothly, easily or freely: it is the electrical signal, and – no! – that is not to be equated with the quantum-mechanical amplitude. The quantum-mechanical amplitude is just a mathematical concept: it does not travel through the lattice in any physical sense! In fact, it does not even travel through the lattice in a logical sense: the quantum-mechanical amplitudes are to be associated with the atoms in the crystal lattice, and describe their state – i.e. whether or not they have an extra electron or (if we are analyzing electron holes in the lattice) whether they are lacking one. So the drift velocity of the electron is actually very low, and the way the signal moves through the lattice is just like in the game of musical chairs – but with the chairs in a line: all players kindly agree to move to the next chair for the new arrival, so the last person on the last chair can leave the game to get a beer. It is the same here: one extra electron causes all the other electrons to move. [For more detail, we refer to our paper on matter-waves, amplitudes and signals.]
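A quick order-of-magnitude check on that low drift velocity, using textbook values for copper (the current, wire cross-section and electron density below are assumptions, for illustration only):

```python
# Order-of-magnitude check: the drift velocity of electrons in an ordinary copper wire is
# tiny, even though the electrical signal itself propagates at a sizeable fraction of c.
q = 1.602e-19  # C, elementary charge
n = 8.5e28     # electrons per m³, free-electron density of copper (textbook value)
I = 1.0        # A, a typical small current
A = 1.0e-6     # m², cross-section of a 1 mm² wire

v_drift = I / (n * q * A)
print(f"Drift velocity ≈ {v_drift * 1000:.3f} mm/s")  # a small fraction of a millimetre per second
```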

But so, yes, we have not said much about semiconductors, lasers and other technical stuff. Why not? Not because it would be difficult: we already cracked the more difficult stuff (think of an explanation of the anomalous magnetic moment, the Lamb shift, or one-photon Mach-Zehnder interference here). No. We are just lacking time! It is, effectively, going to be an awful lot of work to rewrite those basic lectures on semiconductors – or on lasers, or on other technical matters which attract students in physics – so as to show why and how the mechanics of these things actually work: not approximately, but exactly – and, more importantly, why and how these phenomena can be explained in terms of something real: actual electrons moving through the lattice at lower or higher drift speeds within a conduction band (and what that conduction band actually is).

The same goes for lasers: we talk about induced emission and all that, but we need to explain what that might actually represent – while avoiding the usual mumbo-jumbo about bosonic behavior and other useless generalizations of the properties of actual matter- and light-particles, properties that can reasonably be explained in terms of the structure of these particles instead of by invoking quantum-mechanical theorems or other dogmatic or canonical a priori assumptions.

So, yes, it is going to be hard work – and I am not quite sure if I have sufficient time or energy for it. I will try, and so I will probably be offline for quite some time while doing that. Be sure to have fun in the meanwhile! 🙂

Post scriptum: Perhaps I should also focus on converting some of my papers into journal articles, but I don’t feel it is worth going through all of the trouble that takes. Academic publishing is a weird thing. Either the editorial line of the journal is very strong – in which case they do not want to publish non-mainstream theory, and also insist on introductions and other credentials – or else it is very weak or even absent, and then it is nothing more than vanity or ego, right? So I think I am just fine with the viXra collection and the ‘preprint’ papers on ResearchGate now. It allows me to write what I want and – equally important – how I want to write it. In any case, I am writing for people like you and me, not so much for dogmatic academics or philosophers. The poor experience with the reviewers of my manuscript has taught me well, I guess. I should probably wait for an invitation to publish now.

Feynman’s Lectures: A Survivor’s Guide

A few days ago, I mentioned I felt like writing a new book: a sort of guidebook for amateur physicists like me. I realized that it is actually fairly easy to do. I have three very basic papers – one on particles (both light and matter), one on fields, and one on the quantum-mechanical toolbox (amplitude math and all of that). But then there is a lot of nitty-gritty to be written about the technical stuff, of course: self-interference, superconductors, the behavior of semiconductors (as used in transistors), lasers, and so many other things – and all of the math that comes with it. However, for that, I can refer you to Feynman’s three volumes of lectures, of course. In fact, I should: it’s all there. So… Well… That’s it, then. I am done with the QED sector. Here is my summary of it all (links to the papers on Phil Gibbs’ site):

Paper I: Quantum behavior (the abstract should enrage the dark forces)

Paper II: Probability amplitudes (quantum math)

Paper III: The concept of a field (why you should not bother about QFT)

Paper IV: Survivor’s guide to all of the rest (keep smiling)

Paper V: Uncertainty and the geometry of the wavefunction (the final!)

The last paper is interesting because it shows statistical indeterminism is the only real indeterminism. We can, therefore, use Bell’s Theorem to prove our theory is complete: there is no need for hidden variables, so why should we bother trying to prove or disprove that they can or cannot exist?

Jean Louis Van Belle, 21 October 2020

Note: As for the QCD sector, that is a mess. We might have to wait another hundred years or so to see the smoke clear up there. Or, who knows, perhaps some visiting alien(s) will come and give us a decent alternative for the quark hypothesis and quantum field theories. One of my friends thinks so. Perhaps I should trust him more. 🙂

As for Phil Gibbs, I should really thank him for being one of the smartest people on Earth – and for his site, of course. Brilliant forum. Does what Feynman wanted everyone to do: look at the facts, and think for yourself. 🙂