The proton model will be key. We cannot explain it in the typical ‘mass without mass’ model of zittering charges: we get a 1/4 factor in the explanation of the proton radius, which is impossible to get rid of unless we assume some ‘strong’ force comes into play. That is why I prioritize a ‘straight’ attack on the electron and the proton-electron bond in a primitive neutron model.
The calculation of forces inside a muon-electron and a proton is an interesting exercise: it is the only thing which explains why an electron annihilates a positron but electrons and protons can live together (the ‘anti-matter’ nature of charged particles only shows because of opposite spin directions of the fields – so it is only when the ‘structure’ of matter-antimatter pairs is different that they will not annihilate each other).
In short, 2021 will be an interesting year for me. The intent of my last two papers (on the deuteron model and the primitive neutron model) was to think of energy values: the energy value of the bond between electron and proton in the neutron, and the energy value of the bond between proton and neutron in a deuteron nucleus. But, yes, the more fundamental work remains to be done !
In my ‘signing off’ post, I wrote I had enough of physics but that my last(?) ambition was to “contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.” Well… The paper is there. And I am extremely pleased with the result. Thank you, Mr. Meulenberg. You sure have good intuition.
I took the opportunity to revisit Yukawa’s nuclear potential and demolish his modeling of a new nuclear force without a charge to act on. Looking back at the past 100 years of physics history, I now start to think that was the decisive destructive moment in physics: that 1935 paper, which started off all of the hype on virtual particles, quantum field theory, and a nuclear force that could not possibly be electromagnetic plus – totally not done, of course ! – utter disregard for physical dimensions and the physical geometry of fields in 3D space or – taking retardation effects into account – 4D spacetime. Fortunately, we have hope: the 2019 fixing of SI units puts physics firmly back onto the road to reality – or so we hope.
Paolo Di Sia’s and my paper shows one gets very reasonable energy and separation distances for nuclear bonds and inter-nucleon distances when assuming the presence of magnetic and/or electric dipole fields arising from deep electron orbitals. The model shows one of the protons pulling the ‘electron blanket’ from another proton (the neutron) towards its own side so as to create an electric dipole moment. So it is just like a valence electron in a chemical bond. So it is like water, then? Water is a polar molecule but we do not necessarily need to start with polar configurations when trying to expand this model so as to inject some dynamics into it (spherically symmetric orbitals are probably easier to model). Hmm… Perhaps I need to look at the thermodynamical equations for dry versus wet water once again… Phew ! Where to start?
I have no experience – I have very little math, actually – with modeling molecular orbitals. So I should, perhaps, contact a friend from a few years ago now – living in Hawaii and pursuing more spiritual matters too – who did just that a long time ago: orbitals using Schroedinger’s wave equation (I think Schroedinger’s equation is relativistically correct – just a misinterpretation of the concept of ‘effective mass’ by the naysayers). What kind of wave equation are we looking at? One that integrates inverse-square and inverse-cube force field laws arising from charges and the dipole moments they create while moving. [Hey! Perhaps we can relate these inverse-square and inverse-cube fields to the second- and third-order terms in the binomial development of the relativistic mass formula (see the section on kinetic energy in my paper on one of Feynman’s more original renderings of Maxwell’s equations) but… Well… Probably best to start by seeing how Feynman got those field equations out of Maxwell’s equations. It is a bit buried in his development of the Liénard and Wiechert equations, which are written in terms of the scalar and vector potentials φ and A instead of E and B vectors, but it should all work out.]
If the nuclear force is electromagnetic, then these ‘nuclear orbitals’ should respect the Planck-Einstein relation. So then we can calculate frequencies and radii of orbitals now, right? The use of natural units and imaginary units to represent rotations/orthogonality in space might make calculations easy (B = iE). Indeed, with the 2019 revision of SI units, I might need to re-evaluate the usefulness of natural units (I always stayed away from them because they ‘hide’ the physics in the math by abstracting away the physical dimensions).
Hey ! Perhaps we can model everything with quaternions, using imaginary units (i and j) to represent rotations in 3D space so as to ensure consistent application of the appropriate right-hand rules always (special relativity gets added to the mix, so we probably need to relate the (ds)² = (dx)² + (dy)² + (dz)² – (dct)² expression to the modified Hamilton’s q = a + ib + jc – kd expression then). Using vector equations throughout and thinking of f as a vector when using the E = hf and h = pλ Planck-Einstein relations (something with a magnitude and a direction) should do the trick, right? [In case you wonder how we can write f as a vector: angular frequency is a vector too. The Planck-Einstein relation is valid for both linear as well as circular oscillations: see our paper on the interpretation of the de Broglie wavelength.]
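For what it is worth, the quaternion algebra itself is easy to play with. A minimal sketch (plain Python, no physics claimed): the Hamilton product hard-wires i·j = k – the right-hand rule – and rotating a vector v is the sandwich product q·v·q*.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z) tuples
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(v, axis, angle):
    # Rotate 3D vector v about a unit axis: v' = q v q*
    s = math.sin(angle / 2)
    q = (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)
    _, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), qconj(q))
    return (x, y, z)

# i*j = k: the product encodes the right-hand rule once and for all
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
assert qmul(i, j) == (0, 0, 0, 1)

# A 90-degree rotation about the z-axis maps the x-axis onto the y-axis
rotated = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

Composing two such rotations is just one more quaternion multiplication, which is why the bookkeeping of right-hand rules stays consistent automatically.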
Oh – and while special relativity is there because of Maxwell’s equations, gravity (general relativity) should be left out of the picture. Why? Because we would like to explain gravity as a residual very-far-field force. And trying to integrate gravity inevitably leads one to analyze particles as ‘black holes.’ Not nice, philosophically speaking. In fact, any 1/rⁿ field inevitably leads one to think of some kind of black hole at the center, which is why thinking of fundamental particles in terms of ring currents and dipole moments makes so much sense! [We need nothingness and infinity as mathematical concepts (limits, really) but they cannot possibly represent anything real, right?]
The consistent use of the Planck-Einstein law to model these nuclear electron orbitals should probably involve multiples of h to explain their size and energy: E = nhf rather than E = hf. For example, when calculating the radius of an orbital of a pointlike charge with the energy of a proton, one gets a radius that is only 1/4 of the proton radius (0.21 fm instead of 0.84 fm, approximately). To make the radius fit that of a proton, one has to use the E = 4hf relation. Indeed, for the time being, we should probably continue to reject the idea of using fractions of h to model deep electron orbitals. I also think we should avoid superluminal velocity concepts.
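The numbers in this argument are easy to verify. A back-of-the-envelope check (CODATA values; the E = 4hf scaling is, of course, the hypothesis under discussion here, not established physics):

```python
# Radius a = hbar*c / E for a pointlike charge with the proton's rest energy,
# and the same radius scaled by the hypothesized E = 4*h*f relation.
hbar_c = 197.3269804     # hbar*c in MeV*fm (CODATA)
E_proton = 938.27208816  # proton rest energy in MeV
a = hbar_c / E_proton    # the radius carrying the 'missing' 1/4 factor
print(a, 4 * a)          # ~0.210 fm and ~0.841 fm, near the measured proton radius
```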
This post sounds like madness? Yes. And then, no! To be honest, I think of it as one of the better Aha! moments in my life. 🙂
Brussels, 30 December 2020
Post scriptum (1 January 2021): Lots of stuff coming together here ! 2021 will definitely see the Grand Unified Theory of Classical Physics becoming somewhat more real. It looks like Mills is going to make a major addition/correction to his electron orbital modeling work and, hopefully, manage to publish the gist of it in the eminent mainstream Nature journal. That makes a lot of sense: to move from an atom to an analysis of nuclei or complex three-particle systems, one should combine singlet and doublet energy states – if only to reduce three-body problems to two-body problems. 🙂 I still do not buy the fractional use of Planck’s quantum of action, though. Especially now that we got rid of the concept of a separate ‘nuclear’ charge (there is only one charge: the electric charge, and it comes in two ‘colors’): if Planck’s quantum of action is electromagnetic, then it comes in wholes or multiples. No fractions. Fractional powers of distance functions in field or potential formulas are OK, however. 🙂
In 1995, W.E. Lamb Jr. wrote the following on the nature of the photon: “There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed. There are very good substitute words for “photon”, (e.g., “radiation” or “light”), and for “photonics” (e.g., “optics” or “quantum optics”). Similar objections are possible to use of the word “phonon”, which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits.”
The opinion of a Nobel Prize laureate carries some weight, of course, but we think the concept of a photon makes sense. As the electron moves from one (potential) energy state to another – from one atomic or molecular orbital to another – it builds an oscillating electromagnetic field which has an integrity of its own and, therefore, is not only wave-like but also particle-like.
We, therefore, dedicated the fifth chapter of our re-write of Feynman’s Lectures to a dual analysis of EM radiation (and, yes, this post is just an announcement of the paper so you are supposed to click the link to read it). It is, basically, an overview of a rather particular expression of Maxwell’s equations which Feynman uses to discuss the laws of radiation. I wonder how to – possibly – ‘transform’ or ‘transpose’ this framework so it might apply to deep electron orbitals and – possibly – proton-neutron oscillations.
The presentations were very good (especially those on the experimental results and the recent involvement of some very respectable institutions in addition to the usual suspects and, sadly, some fly-by-night operators too), and the follow-on conversation with one of the co-organizers convinced me that the researchers are serious, open-minded and – while not quite being able to provide all of the answers we are all seeking – very ready to discuss them seriously. Most, if not all, experiments involve transmutations of nuclei triggered by low-energy inputs such as low-energy radiation (irradiation and transmutation of palladium by, say, a now-household 5 mW laser beam is just one of the examples). One experiment even triggered a current just by adding plain heat which, as you know, is nothing but very low-energy (infrared) radiation, although I must admit this was one I would like to see replicated en masse before believing it to be real (the equipment was small and simple, and so the experimenters could have shared it easily with other labs).
When looking at these experiments, the comparison that comes to mind is that of an opera singer shattering crystal with his or her voice: some frequency in the sound causes the material to resonate at, yes, its resonant frequency (most probably an enormous but integer multiple of the sound frequency), and then the energy builds up – like when you give a child on a swing an extra push at just the right moment every time – as the amplitude becomes larger and larger – till the breaking point is reached. Another comparison is the failing of a suspension bridge when external vibrations (think of the rather proverbial soldier regiment here) cause similar resonance phenomena. So, yes, it is not unreasonable to believe that one could be able to induce neutron decay and, thereby, release the binding energy between the proton and the electron in the process by some low-energy stimulation, provided the frequencies are harmonic.
The problem with the comparison – and for the LENR idea to be truly useful – is this: one cannot see any net production of energy here. The strain or stress that builds up in the crystal glass is a strain induced by the energy in the sound wave (which is why the singing demos usually include amplifiers to attain the required power/amplitude ratio, i.e. the required decibels). In addition, the breaking of crystal or a suspension bridge typically involves a weaker link somewhere, or some directional aspect (so that would be the equivalent of an impurity in a crystal structure, I guess), but that is a minor point, and a point that is probably easier to tackle than the question on the energy equation.
LENR research has probably advanced far enough now (the first series of experiments started in 1989) to slowly start focusing on the whole chain of these successful experiments: what is the equivalent, in these low-energy reactions, of the nuclear fuel in high-energy fission or fusion experiments? And, if it can be clearly identified, the researchers need to show that the energy that goes into the production of this fuel is much less than the energy you get out of it by burning it (and, of course, with ‘burning’ I mean the decay reaction here). [In case you have heard about Randell Mills’ hydrino experiments, he should show the emission spectrum of these hydrinos. Otherwise, one might think he is literally burning hydrogen. Attracting venture capital and providing scientific proof are not mutually exclusive, are they? In the meanwhile, I hope that what he is showing is real, in the way all LENR researchers hope it is real.]
LENR research may also usefully focus on getting the fundamental theory right. The observed anomalous heat and/or transmutation reactions cannot be explained by mainstream quantum physics (I am talking QCD here, so that’s QFT, basically). That should not surprise us: one does not need quarks or gluons to explain high-energy nuclear processes such as fission or fusion, either! My theory is, of course, typically simplistically simple: the energy that is being unlocked is just the binding energy between the nuclear electron and the protons, in the neutron itself or in a composite nucleus, the simplest of which is the deuteron nucleus. I talk about that in my paper on matter-antimatter pair creation/annihilation as a nuclear process but you do not need to be an adept of classical or realist interpretations of quantum mechanics to understand this point. To quote a motivational writer here: it is OK for things to be easy. 🙂
So LENR theorists just need to accept they are not mainstream – yet, that is – and come out with a more clearly articulated theory on why their stuff works the way it does. For some reason I do not quite understand, they come across as somewhat hesitant to do so. Fears of being frozen out even more by the mainstream? Come on guys ! You are coming out of the cold anyway, so why not be bold and go all the way? It is a time of opportunities now, and the field of LENR is one of them, both theoretically as well as practically speaking. I honestly think it is one of those rare moments in the history of physics where experimental research may be well ahead of theoretical physics, so they should feel like proud trailblazers!
Personally, I do not think it will replace big classical nuclear energy plants anytime soon but, in a not-so-distant future, it might yield many very useful small devices: lower energy and, therefore, lower risk also. I also look forward to LENR research dealing the fatal blow to standard theory by confirming we do not need perturbation and renormalization theories to explain reality. 🙂
Post scriptum: If low-energy nuclear reactions are real, mainstream (astro)physicists will also have to rework their stories on cosmogenesis and the (future) evolution of the Universe. The standard story may well be summed up in the brief commentary of the HyperPhysics entry on the deuteron nucleus:
“The stability of the deuteron is an important part of the story of the universe. In the Big Bang model it is presumed that in early stages there were equal numbers of neutrons and protons since the available energies were much higher than the 0.78 MeV required to convert a proton and electron to a neutron. When the temperature dropped to the point where neutrons could no longer be produced from protons, the decay of free neutrons began to diminish their population. Those which combined with protons to form deuterons were protected from further decay. This is fortunate for us because if all the neutrons had decayed, there would be no universe as we know it, and we wouldn’t be here!”
If low-energy nuclear reactions are real – and I think they are – then the standard story about the Big Bang is obviously bogus too. I am not necessarily doubting the reality of the Big Bang itself (the ongoing expansion of the Universe is a scientific fact so, yes, the Universe must have been much smaller and (much) more energy-dense a long time ago), but the standard calculations on proton-neutron reactions taking place, or not, at cut-off temperatures/energies above/below 0.78 MeV do not make sense anymore. One should, perhaps, think more in terms of how matter-antimatter ratios might or might not have evolved (and, of course, one should keep an eye on the electron-proton ratio, but that should work itself out because of charge conservation) to correctly calculate the early evolution of the Universe, rather than focusing so much on proton-neutron ratios.
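The 0.78 MeV figure in the HyperPhysics quote is, by the way, just the neutron-proton-electron rest energy difference, which is easy to check against the PDG rest energies:

```python
# Energy needed to turn a proton plus an electron into a neutron:
# the neutron's rest energy minus those of the proton and the electron.
m_n = 939.56542   # neutron rest energy, MeV
m_p = 938.27209   # proton rest energy, MeV
m_e = 0.51100     # electron rest energy, MeV
threshold = m_n - m_p - m_e
print(threshold)  # ~0.782 MeV: the quoted 0.78 MeV
```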
Why do I say that? Because neutrons do appear to consist of a proton and an electron – rather than of quarks and gluons – and they continue to decay and then recombine again, so these proton-neutron reactions must not be thought of as some historic (discontinuous) process.
[…] Hmm… The more I look at the standard stories, the more holes I see… This one, however, is very serious. If LENR and/or cold fusion is real, then it will also revolutionize the theories on cosmogenesis (the evolution of the Universe). I instinctively like that, of course, because – just like quantization – I had the impression the discontinuities are there, but not quite in the way mainstream physicists – thinking more in terms of quarks and gluons rather than in terms of stuff that we can actually measure – portray the whole show.
I have been exploring the weird wonderland of physics for over seven years now. On several occasions, I thought I should just stop. It was rewarding, but terribly exhausting at times as well! I am happy I did not give up, if only because I finally managed to come up with a more realist interpretation of the ‘mystery’ of matter-antimatter pair production/annihilation. So, yes, I think I can confidently state I finally understand physics the way I want to understand it. It was an extraordinary journey, and I am happy I could share it with many fellow searchers: 300 posts and 300,000 hits on my first website now, 10,000+ downloads of papers (including the downloads from Phil Gibb’s site and academia.edu) and, better still, lots of interesting conversations.
One of these conversations was with a fine nuclear physicist, Andrew Meulenberg. We were in touch on the idea of a neutron (some kind of combination of a proton and a ‘nuclear’ electron—following up on Rutherford’s original idea, basically). More importantly, we chatted about, perhaps, developing a model for the deuterium nucleus (deuteron)—the hydrogen isotope which consists of a proton and a neutron. However, I feel I need to let go here, if only because I do not think I have the required mathematical skills for a venture like this. I feel somewhat guilty of letting him down. Hence, just in case someone out there feels he could contribute to this, I am copying my last email to him below. It sort of sums up my basic intuitions in terms of how one could possibly approach this.
Can it be done? Maybe. Maybe not. All I know is that not many have been trying since Bohr’s young wolves hijacked scientific discourse after the 1927 Solvay Conference and elevated a mathematical technique – perturbation theory – to the scientific dogma which is now referred to as quantum field theory.
So, yes, now I am really signing off. Thanks for reading me, now or in the past—I wrote my first post here about seven years ago! I hope it was not only useful but enjoyable as well. Oh—And please check out my YouTube channel on Physics ! 🙂
From: Jean Louis Van Belle Sent: 14 November 2020 17:59 To: Andrew Meulenberg Subject: Time and energy…
These things are hard… You are definitely much smarter with these things than I can aspire to… But I do have ideas. We must analyze the proton in terms of a collection of infinitesimally small charges – just like Feynman’s failed assembly of the electron (https://www.feynmanlectures.caltech.edu/II_28.html#Ch28-S3): it must be possible to do this, and it will give us the equivalent of electromagnetic mass for the strong force. The assembly of the proton out of infinitesimally small charge bits will work because the proton is, effectively, massive. Not like an electron, which effectively appears as a ‘cloud’ of charge and, therefore, has several radii and, yes, can pass through the nucleus and also ‘envelops’ a proton when forming a neutron with it.
I cannot offer much in terms of analytical skills here. All of quantum physics – the new model of a hydrogen atom – grew out of the intuition of a young genius (Louis de Broglie) and a seasoned mathematical physicist (Erwin Schroedinger) finding a mathematical equation for it. That model is valid still – we just need to add spin from the outset (cf. the plus/minus sign of the imaginary unit) and acknowledge the indeterminacy in it is just statistical, but these are minor things.
I have not looked at your analysis of a neutron as a (hyper-)excited state of the hydrogen atom yet, but it must be correct: what else can it be? It is what Rutherford said it should be when he first hypothesized the existence of a neutron.
I do not know how much time I want to devote to this (to be honest, I am totally sick of academic physics) but – whatever time I have – I want to contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.
Those who read this blog, or my papers, know that the King of Science, physics, is in deep trouble. [In case you wonder, the Queen of Science is math.]
The problem is rather serious: a lack of credibility. It would kill any other business, but things work differently in academics. The question is this: how many professional physicists would admit this? An even more important question is: how many of those who admit this, would try to do something about it?
We hope the proportion of both is increasing – so we can trust that at least the dynamics of all of this are OK. I am hopeful – but I would not bet on it.
“Sean Carroll is one of the Gurus that is part of the problem rather than the solution: he keeps peddling approaches that have not worked in the past, and can never be made to work in the future. I am an amateur physicist only, but I have not come across a problem that cannot be solved by ‘old’ quantum physics, i.e. a combination of Maxwell’s equations and the Planck-Einstein relation. Lamb shift, anomalous magnetic moment, electron-positron pair creation/annihilation (a nuclear process), behavior of electrons in semiconductors, superconduction etc. There is a (neo-)classical solution for everything: no quantum field and/or perturbation theories are needed. Protons and electrons as elementary particles (and neutrons as the bound state of a proton and a nuclear electron), and photons and neutrinos as lightlike particles, carrying electromagnetic and strong field energy respectively. That’s it. Nothing more. Nothing less. Everyone who thinks otherwise is ‘lost in math’, IMNSHO.”
Brutal? Yes. Very much so. The more important question is this: is it true? I cannot know for sure, but it comes across as being truthful to me.
The creation and annihilation of matter-antimatter pairs is usually taken as proof that, somehow, fields can condense into matter-particles or, conversely, that matter-particles can somehow turn into light-particles (photons), which are nothing but traveling electromagnetic fields. However, pair creation always requires the presence of another particle and one may, therefore, legitimately wonder whether the electron and positron were not already present, somehow.
Carl Anderson’s original discovery of the positron involved cosmic rays hitting atmospheric molecules, a process which involves the creation of unstable particles including pions. Cosmic rays themselves are, unlike what the name suggests, no rays – not like gamma rays, at least – but highly energetic protons and atomic nuclei. Hence, they consist of matter-particles, not of photons. The creation of electron-positron pairs from cosmic rays also involves pions as intermediate particles:
1. The π+ and π– particles have net positive and negative charge of 1 e+ and 1 e– respectively. According to mainstream theory, this is because they combine a u and d quark but – abandoning the quark hypothesis – we may want to think their charge could be explained, perhaps, by the presence of an electron!
2. The neutral pion, in turn, might, perhaps, consist of an electron and a positron, which should annihilate but take some time to do so!
Neutral pions have a much shorter lifetime – in the order of 10⁻¹⁷ s only – than π+ and π– particles, whose lifetime is a much more respectable 2.6 × 10⁻⁸ s. Something you can effectively measure, in other words. In short, despite similar energies, neutral pions do not seem to have a lot in common with π+ and π– particles. Even the energy difference is quite substantial when measured in terms of the electron mass: the neutral pion has an energy of about 135 MeV, while π+ and π– particles have an energy of almost 140 MeV. To be precise, the difference is about 4.6 MeV. That is quite a lot: the electron rest energy is 0.511 MeV only. So it is not stupid to think that π+ and π– particles might carry an extra positron or electron, somehow. In our not-so-humble view, this is as legitimate as thinking – like Rutherford did – that a neutron should, somehow, combine a proton and an electron.
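The quoted numbers are easy to check against the PDG values (the reading of the gap as an ‘extra’ electron or positron remains, of course, the hypothesis here):

```python
# Charged versus neutral pion rest energies (PDG, MeV) and the gap
# expressed in units of the electron rest energy.
m_pi_charged = 139.57039
m_pi_neutral = 134.9768
m_e = 0.51100
gap = m_pi_charged - m_pi_neutral
print(gap, gap / m_e)  # ~4.59 MeV, i.e. roughly nine electron rest energies
```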
The whole analysis – both in the QED as well as in the QCD sector of quantum physics – would radically alter when thinking of neutral particles – such as neutrons and π0 particles – not as consisting of quarks but of protons/antiprotons and/or electrons/positrons cancelling each other’s charges out. We have not seen much – if anything – which convinces us this cannot be correct. We, therefore, believe a more realist interpretation of quantum physics should be possible for high-energy phenomena as well. With a more realist theory, we mean one that does not involve quantum field and/or renormalization theory.
Such a new theory would not contradict first principles: the number of charged particles need not be conserved, but total (net) charge is always conserved. Hence, charged particles could appear and disappear, but they would be part of neutral particles. All particles in such processes are very short-lived anyway, so what is a particle here? We should probably think of these things as an unstable combination of various bits and bobs, shouldn’t we? 😊
So, yes, we did a paper on this. And we like it. Have a look: it’s on ResearchGate, academia.edu, and – as usual – Phil Gibb’s site (which has all of our papers, including our very early ones, which you might want to take with a pinch of salt). 🙂
 You may be so familiar with quarks that you do not want to question this hypothesis anymore. If so, let me ask you: where do the quarks go when a π± particle disintegrates into a muon-e±?
 They disintegrate into muons (muon-electrons or muon-positrons), which themselves then decay into an electron or a positron respectively.
 The point estimate of the lifetime of a neutral pion of the Particle Data Group (PDG) is about 8.5 × 10⁻¹⁷ s. Such short lifetimes cannot be measured in a classical sense: such particles are usually referred to as resonances (rather than particles), and the lifetime is calculated from a so-called resonance width. We may discuss this approach in more detail later.
 Of course, it is much smaller when compared to the proton (rest) energy, which is about 938 MeV.
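The resonance-width calculation mentioned in the footnotes above is a one-liner: the lifetime is τ = ħ/Γ. A quick check with the PDG width of the neutral pion (about 7.8 eV):

```python
# Lifetime from resonance width: tau = hbar / Gamma.
hbar = 6.582119569e-16   # hbar in eV*s
Gamma_pi0 = 7.8          # neutral pion width in eV (PDG, approximate)
tau = hbar / Gamma_pi0
print(tau)               # ~8.4e-17 s, matching the quoted point estimate
```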
Needless to say, this quantization of space looks very different depending on the situation: the order of magnitude of the radius of orbital motion around a nucleus is about 150 times the electron’s Compton radius so, yes, that is very different. However, the basic idea is always the same: a pointlike charge going round and round in a rather regular fashion (otherwise our idea of a cycle time (T = 1/f) and an orbital would make no sense whatsoever), and that oscillation then packs a certain amount of energy as well as Planck’s quantum of action (h). In fact, that is just what the Planck-Einstein relation embodies: E = h·f. Frequencies and, therefore, radii and velocities are very different (we think of the pointlike charge inside of an electron as whizzing around at lightspeed, while the order of magnitude of the velocity of the electron in an atomic or molecular orbital is also given by that fine-structure constant: v = α·c/n, where n is the principal quantum number, or the shell in the gross structure of an atom), but the underlying equations of motion – as Dirac referred to them – are not fundamentally different.
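As a sanity check on these orders of magnitude: the ratio between the Bohr radius and the electron’s (reduced) Compton radius is exactly 1/α ≈ 137 – the same ballpark as the ‘about 150 times’ above – and v = α·c/n gives the orbital velocity:

```python
# Bohr radius / Compton radius = 1/alpha, and the n = 1 orbital velocity v = alpha*c.
alpha = 7.2973525693e-3   # fine-structure constant (CODATA)
c = 299792458.0           # lightspeed, m/s
ratio = 1 / alpha         # ~137: orbital radius vs. Compton radius
v1 = alpha * c / 1        # ~2.19e6 m/s for the n = 1 shell
print(ratio, v1)
```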
We can look at these oscillations in two very different ways. Most Zitterbewegung theorists (or realist thinkers, I might say) think of it as a self-perpetuating current in an electromagnetic field. David Hestenes is probably the best-known theorist in this class. However, we feel such a view does not satisfactorily answer the quintessential question: what keeps the charge in its orbit? We, therefore, preferred to stick with an alternative model, which we loosely refer to as the oscillator model.
However, truth be told, we are aware this model comes with its own interpretational issues. Indeed, our interpretation of this oscillator model oscillated between the metaphor of a classical (non-relativistic) two-dimensional oscillator (think of a Ducati V2 engine, with the two pistons working in tandem at a 90-degree angle) and the mathematically correct analysis of a (one-dimensional) relativistic oscillator, which we may sum up in the following relativistically correct energy conservation law:
dE/dt = d[kx²/2 + mc²]/dt = 0
More recently, we actually noted the number of dimensions (think of the number of pistons of an engine) should actually not matter at all: an old-fashioned radial airplane engine has 3, 5, 7, or more cylinders (the non-even number has to do with the firing mechanism for four-stroke engines), but the interplay between those pistons can be analyzed just as well as the ‘sloshing back and forth’ of kinetic and potential energy in a dynamic system (see our paper on the meaning of uncertainty and the geometry of the wavefunction). Hence, it seems any number of springs or pistons working together would do the trick: somehow, linear becomes circular motion, and vice versa. But so what number of dimensions should we use for our metaphor, really?
We now think the ‘one-dimensional’ relativistic oscillator is the correct mathematical analysis, but we should interpret it more carefully. Look at the dE/dt = d[kx²/2 + mc²]/dt = d(PE + KE)/dt = 0 equation once more.
For the potential energy, one gets the same kx²/2 formula one gets for the non-relativistic oscillator. That is no surprise: potential energy depends on position only, not on velocity, and there is nothing relative about position. However, the (½)m₀v² term that we would get when using the non-relativistic formulation of Newton’s law is now replaced by the mc² = γm₀c² term. Both energies vary – with position and with velocity respectively – but the equation above tells us their sum is some constant. Equating x to 0 (when the velocity v = c) gives us the total energy of the system: E = mc². Just as it should be. 🙂 So how can we now reconcile these two models? One is two-dimensional but non-relativistic, and the other is relativistically correct but one-dimensional only. We always get this weird 1/2 factor! And we cannot think it away, so what is it, really?
We still don’t have a definite answer, but we think we may be closer to the conceptual locus where these two models might meet: the key is to interpret x and v in the equation for the relativistic oscillator as (1) x being the distance along an orbital, and (2) v being the tangential velocity of the pointlike charge along this orbital.
If you get the point, you’ll immediately cry wolf and say such an interpretation of x as a distance measured along some orbital (as opposed to the linear concept we are used to) and, consequently, thinking of v as some kind of tangential velocity along such an orbital, looks pretty random. However, keep thinking about it, and you will have to admit it is a rather logical way out of the logical paradox. The formula for the relativistic oscillator assumes a pointlike charge with zero rest mass oscillating between v = 0 and v = c. However, something with zero rest mass will always be associated with some velocity: it cannot be zero! Think of a photon here: how would you slow it down? And you may think we could, perhaps, slow down a pointlike electric charge with zero rest mass in some electromagnetic field but, no! The slightest force on it will give it infinite acceleration according to Newton’s force law. [Admittedly, we would need to distinguish here between its relativistic expression (F = dp/dt) and its non-relativistic expression (F = m₀·a) when further dissecting this statement, but you get the idea. Also note that we are discussing our electron here, in which we do have a zero-rest-mass charge. In an atomic or molecular orbital, we are talking about an electron with a non-zero rest mass: just the mass of the electron whizzing around at a (significant) fraction (α) of lightspeed.]
Hence, it is actually quite rational to argue that the relativistic oscillator cannot be linear: the velocity must be some tangential velocity, always, and – for a pointlike charge with zero rest mass – it must equal lightspeed, always. So, yes, we think this line of reasoning might well be the conceptual locus where the one-dimensional relativistic oscillator (E = m·a²·ω²) and the two-dimensional non-relativistic oscillator (E = 2·m·a²·ω²/2 = m·a²·ω²) could meet. Of course, we welcome the view of any reader here! In fact, if there is a true mystery in quantum physics (we do not think so, but we know people – academics included – like mysterious things), then it is here!
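For readers who like to see numbers, here is a minimal numerical sketch of the bookkeeping behind that factor of two (illustrative values only, nothing physical about them): each one-dimensional oscillator carries an energy m·a²·ω²/2, and two of them working in tandem – 90 degrees out of phase, like the V2 pistons – add up to m·a²·ω².

```python
import math

# Illustrative values only - nothing physical about them.
m, a, w = 1.0, 2.0, 3.0   # mass, amplitude, angular frequency
k = m * w * w             # spring constant: k = m*omega^2

def energy_1d(t):
    # One piston: x = a*cos(wt), v = -a*w*sin(wt).
    x = a * math.cos(w * t)
    v = -a * w * math.sin(w * t)
    return 0.5 * k * x * x + 0.5 * m * v * v   # PE + KE

def energy_2d(t):
    # Second piston 90 degrees out of phase: y = a*sin(wt).
    return energy_1d(t) + energy_1d(t - math.pi / (2 * w))

# Each 1D oscillator carries m*a^2*w^2/2; the tandem carries m*a^2*w^2.
for t in [0.0, 0.1, 0.7, 2.3]:
    assert abs(energy_1d(t) - 0.5 * m * a * a * w * w) < 1e-9
    assert abs(energy_2d(t) - m * a * a * w * w) < 1e-9
```

The check is trivial, of course, but it makes the ½-versus-1 bookkeeping explicit.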
Post scriptum: This is, perhaps, a good place to answer a question I sometimes get: what is so natural about relativity and a constant speed of light? It is not so easy, perhaps, to show why and how Lorentz’ transformation formulas make sense but, in contrast, it is fairly easy to see why infinite speeds do not make sense, both physically and mathematically. From a physics point of view, the issue is this: something that moves about at an infinite speed is everywhere and, therefore, nowhere. So it doesn’t make sense. Mathematically speaking, you should not think of v reaching infinity but of the limit of a ratio of a distance interval that goes to infinity, while the time interval goes to zero. So, in the limit, we get a division of an infinite quantity by 0. That’s not infinity but an indeterminacy: it is totally undefined! Indeed, mathematicians can easily deal with infinity and zero, but divisions like zero divided by zero, or infinity divided by zero, are meaningless. [Of course, we may have different mathematical functions in the numerator and denominator whose limits yield those values. There is then a reasonable chance we will be able to factor stuff out so as to get something else. We refer to such situations as indeterminate forms, but these are not what we refer to here. The informed reader will, perhaps, also note the division of infinity by zero does not figure in the list of indeterminacies, but any division by zero is generally considered to be undefined.]
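A quick numerical illustration of that remark on indeterminate forms (a sketch only): sin(t)/t looks like 0/0 at t = 0, yet its limit is perfectly well defined because the structure factors out, while a bare division by zero has no such rescue.

```python
import math

# sin(0)/0 looks like the meaningless 0/0, but the *limit* of sin(t)/t
# as t -> 0 is perfectly well defined: sin(t) ~ t for small t.
for t in [1e-2, 1e-4, 1e-6]:
    assert abs(math.sin(t) / t - 1.0) < t   # converges to 1

# A bare division by zero, by contrast, has no such rescue:
try:
    1.0 / 0.0
except ZeroDivisionError:
    print("undefined, as it should be")
```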
It may be an extra electron, such as, for example, the electron which jumps from place to place in a semiconductor (see our quantum-mechanical analysis of electric currents). Also, as Dirac first noted, the analysis is actually also valid for electron holes, in which case our atom or molecule will be positively ionized instead of being neutral or negatively charged.
We say 150 because that is close enough to the 1/α ≈ 137 factor that relates the Bohr radius to the Compton radius of an electron. The reader may not be familiar with the idea of a Compton radius (as opposed to the Compton wavelength), but we refer him or her to our Zitterbewegung (ring current) model of an electron.
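As a hedged numerical aside (CODATA-style constants hard-coded for illustration), one can check that the ratio of the Bohr radius to the (reduced) Compton radius is indeed 1/α ≈ 137:

```python
import math

# Illustrative check only: compute the Bohr radius from first principles
# and compare it to the Compton radius r_C = hbar/(m_e*c).
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

r_compton = hbar / (m_e * c)                          # ~0.386 pm
a_bohr = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)  # ~52.9 pm

ratio = a_bohr / r_compton
assert 136.9 < ratio < 137.1   # = 1/alpha = 137.036...
```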
It is done! My last paper on the mentioned topic (available on Phil Gibbs’s site, my ResearchGate page or academia.edu) should conclude my work on the QED sector. It is a thorough exploration of the hitherto mysterious concept of the effective mass and all that.
The result I got is actually very nice: my calculation of the order of magnitude of the k·b factor in the formula for the energy band (the conduction band, as you may know it) shows that the usual small-angle approximation of the formula does not make all that much sense. This shows that some ‘realist’ thinking about what is what in these quantum-mechanical models does constrain the options: we cannot just multiply wave numbers with some random multiple of π or 2π. These things have a physical meaning!
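Here is a quick numerical sketch of that point, using the tight-binding dispersion E(k) = E₀ − 2A·cos(kb) from Feynman's lecture on electrons in a lattice (with illustrative units E₀ = 0 and A = 1): the small-angle (parabolic) approximation is excellent near kb = 0 but fails badly once kb is of order 1 or larger.

```python
import math

# Tight-binding energy band (illustrative units: E0 = 0, A = 1):
#   exact:                     E(k) = -2*cos(k*b)
#   small-angle approximation: E(k) ~ -2 + (k*b)^2
def exact(kb):
    return -2.0 * math.cos(kb)

def approx(kb):
    return -2.0 + kb * kb

for kb in [0.1, 0.5, 1.0, 2.0]:
    err = abs(exact(kb) - approx(kb))
    print(f"kb = {kb:3.1f}: error of small-angle approximation = {err:.5f}")

# Near kb = 0 the parabola is fine; toward the zone edge it is not.
assert abs(exact(0.1) - approx(0.1)) < 1e-4
assert abs(exact(2.0) - approx(2.0)) > 1.0
```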
So no multiverses or many worlds, please! One world is enough, and it is nice we can map it to a unique mathematical description.
I should now move on and think about the fun stuff: what is going on in the nucleus and all that? Let’s see where we go from here. Downloads on ResearchGate have been going through the roof lately (a thousand reads on ResearchGate is better than ten thousand on viXra.org, I guess), so it is all very promising. 🙂
I wrote a lot of papers but most of them – if not all – deal with very basic stuff: the meaning of uncertainty (just statistical indeterminacy because we have no information on the initial condition of the system), the Planck-Einstein relation (how Planck’s quantum of action models an elementary cycle or an oscillation), and Schrödinger’s wavefunctions (the solutions to his equation) as the equations of motion for a pointlike charge. If anything, I hope I managed to restore a feeling that quantum electrodynamics is not essentially different from classical physics: it just adds the element of a quantization – of energy, momentum, magnetic flux, etcetera.
Importantly, we also talked about what photons and electrons actually are, and that electrons are pointlike but not dimensionless: their magnetic moment results from an internal current and, hence, spin is something real – something we can explain in terms of a two-dimensional perpetual current. In the process, we also explained why electrons take up some space: they have a radius (the Compton radius). So that explains the quantization of space, if you want.
We also talked about fields and told you – because matter-particles do have a structure – that we should have a dynamic view of the fields surrounding them. Potential barriers – or their corollary: potential wells – should, therefore, not be thought of as static fields. They result from one or more charges moving around, and these fields, therefore, vary in time. Hence, a particle breaking through a ‘potential wall’ or coming out of a potential ‘well’ is just using an opening, so to speak, which corresponds to a classical trajectory.
We, therefore, have the guts to say that some of what you will read in a standard textbook is plain nonsense. Richard Feynman, for example, starts his lecture on a current in a crystal lattice by writing this: “You would think that a low-energy electron would have great difficulty passing through a solid crystal. The atoms are packed together with their centers only a few angstroms apart, and the effective diameter of the atom for electron scattering is roughly an angstrom or so. That is, the atoms are large, relative to their spacing, so that you would expect the mean free path between collisions to be of the order of a few angstroms—which is practically nothing. You would expect the electron to bump into one atom or another almost immediately. Nevertheless, it is a ubiquitous phenomenon of nature that if the lattice is perfect, the electrons are able to travel through the crystal smoothly and easily—almost as if they were in a vacuum. This strange fact is what lets metals conduct electricity so easily; it has also permitted the development of many practical devices. It is, for instance, what makes it possible for a transistor to imitate the radio tube. In a radio tube electrons move freely through a vacuum, while in the transistor they move freely through a crystal lattice.” [The italics are mine.]
It is nonsense because it is not the electron that is traveling smoothly, easily or freely: it is the electrical signal, and – no ! – that is not to be equated with the quantum-mechanical amplitude. The quantum-mechanical amplitude is just a mathematical concept: it does not travel through the lattice in any physical sense ! In fact, it does not even travel through the lattice in a logical sense: the quantum-mechanical amplitudes are to be associated with the atoms in the crystal lattice, and describe their state – i.e. whether or not they have an extra electron or (if we are analyzing electron holes in the lattice) if they are lacking one. So the drift velocity of the electron is actually very low, and the way the signal moves through the lattice is just like in the game of musical chairs – but with the chairs on a line: all players agree to kindly move to the next chair for the new arrival so the last person on the last chair can leave the game to get a beer. So here it is the same: one extra electron causes all other electrons to move. [For more detail, we refer to our paper on matter-waves, amplitudes and signals.]
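The claim that the drift velocity is very low is easy to verify with a back-of-the-envelope calculation, v_d = I/(n·q·A), using textbook values for a copper wire (illustrative numbers, of course):

```python
# Back-of-the-envelope drift velocity, v_d = I/(n*q*A), with textbook
# values for a 1 mm^2 copper wire carrying 1 A (illustrative numbers).
I = 1.0          # current, in amperes
A = 1.0e-6       # cross-section, in m^2 (1 mm^2)
n = 8.5e28       # free-electron density of copper, per m^3
q = 1.602e-19    # elementary charge, in coulombs

v_drift = I / (n * q * A)
print(f"drift velocity ~ {v_drift:.1e} m/s")   # on the order of 0.1 mm/s

assert v_drift < 1e-3   # far slower than the signal, which propagates near lightspeed
```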
But so, yes, we have not said much about semiconductors, lasers and other technical stuff. Why not? Not because it should be difficult: we already cracked the more difficult stuff (think of an explanation of the anomalous magnetic moment, the Lamb shift, or one-photon Mach-Zehnder interference here). No. We are just lacking time ! It is, effectively, going to be an awful lot of work to rewrite those basic lectures on semiconductors – or on lasers or other technical matters which attract students in physics – so as to show why and how the mechanics of these things actually work: not approximately, but how exactly – and, more importantly, why and how these phenomena can be explained in terms of something real: actual electrons moving through the lattice at lower or higher drift speeds within a conduction band (and then what that conduction band actually is).
The same goes for lasers: we talk about induced emission and all that, but we need to explain what that might actually represent – while avoiding the usual mumbo-jumbo about bosonic behavior and other useless generalizations of the properties of actual matter- and light-particles, which can be reasonably explained in terms of the structure of those particles – instead of invoking quantum-mechanical theorems or other dogmatic or canonical a priori assumptions.
So, yes, it is going to be hard work – and I am not quite sure if I have sufficient time or energy for it. I will try, and so I will probably be offline for quite some time while doing that. Be sure to have fun in the meanwhile ! 🙂
Post scriptum: Perhaps I should also focus on converting some of my papers into journal articles, but then I don’t feel like it’s worth going through all of the trouble that takes. Academic publishing is a weird thing. Either the editorial line of the journal is very strong, in which case they do not want to publish non-mainstream theory, and also insist on introductions and other credentials, or, else, it is very weak or even absent – and then it is nothing more than vanity or ego, right? So I think I am just fine with the viXra collection and the ‘preprint’ papers on ResearchGate now. I’ve been thinking it allows me to write what I want and – equally important – how I want to write it. In any case, I am writing for people like you and me. Not so much for dogmatic academics or philosophers. The poor experience with reviewers of my manuscript has taught me well, I guess. I should probably wait to get an invitation to publish now.
A few days ago, I mentioned I felt like writing a new book: a sort of guidebook for amateur physicists like me. I realized that it is actually fairly easy to do. I have three very basic papers – one on particles (both light and matter), one on fields, and one on the quantum-mechanical toolbox (amplitude math and all of that). But then there is a lot of nitty-gritty to be written about the technical stuff, of course: self-interference, superconductors, the behavior of semiconductors (as used in transistors), lasers, and so many other things – and all of the math that comes with it. However, for that, I can refer you to Feynman’s three volumes of lectures, of course. In fact, I should: it’s all there. So… Well… That’s it, then. I am done with the QED sector. Here is my summary of it all (links to the papers on Phil Gibbs’ site):
The last paper is interesting because it shows statistical indeterminism is the only real indeterminism. We can, therefore, use Bell’s Theorem to prove our theory is complete: there is no need for hidden variables, so why should we bother about trying to prove or disprove they can or cannot exist?
Jean Louis Van Belle, 21 October 2020
Note: As for the QCD sector, that is a mess. We might have to wait another hundred years or so to see the smoke clear up there. Or, who knows, perhaps some visiting alien(s) will come and give us a decent alternative for the quark hypothesis and quantum field theories. One of my friends thinks so. Perhaps I should trust him more. 🙂
As for Phil Gibbs, I should really thank him for being one of the smartest people on Earth – and for his site, of course. Brilliant forum. Does what Feynman wanted everyone to do: look at the facts, and think for yourself. 🙂
I ended my post on particles as spacetime oscillations saying I should probably write something about the concept of a field too, and why and how many academic physicists abuse it so often. So I did that, but it became a rather lengthy paper, and so I will refer you to Phil Gibbs’ site, where I post such stuff. Here is the link. Let me know what you think of it.
As for how it fits in with the rest of my writing, I already jokingly rewrote two of Feynman’s introductory Lectures on quantum mechanics (see: Quantum Behavior and Probability Amplitudes). I consider this paper to be the third. 🙂
Post scriptum: Now that I am talking about Richard Feynman – again ! – I should add that I really think of him as a weird character. I think he himself got caught in that image of the ‘Great Teacher’ while, at the same time (and, surely, as a Nobel laureate), he also had to be seen as a ‘Great Guru.’ Read: a Great Promoter of the ‘Grand Mystery of Quantum Mechanics’ – while he probably knew classical electromagnetism combined with the Planck-Einstein relation can explain it all… Indeed, his lecture on superconductivity starts off as an incoherent ensemble of ‘rocket science’ pieces, to then – in the very last paragraphs – manipulate Schrödinger’s equation (and a few others) to show superconducting currents are just what you would expect in a superconducting fluid. Let me quote him:
“Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields.”
So… Well… Looks like he too is all about impressing people with ‘rocket science models’ first, and then he simplifies it all to… Well… Something simple. 😊
Having said that, I still like Feynman more than modern science gurus, because the latter usually don’t get to the simplifying part.
My very first publication on Phil Gibbs’ site – The Quantum-Mechanical Wavefunction as a Gravitational Wave – reached 500+ downloads. I find that weird, because I warn the reader in the comments section that some of these early ideas do not make sense. Indeed, while my idea of modelling an electron as a two-dimensional oscillation has not changed, the essence of the model did. My theory of matter is based on the idea of a naked charge – with zero rest mass – orbiting around some center, and the energy in its motion – a perpetual current ring, really – is what gives matter its (equivalent) mass. Wheeler’s idea of ‘mass without mass’. The force is, therefore, definitely not gravitational.
It cannot be: the force has to grab onto something, and all it can grab onto is the naked charge. The force must, therefore, be electromagnetic. So I now look at that very first paper as an immature essay. However, I leave it there because that paper does ask all of the right questions, and I should probably revisit it – because the questions I get on my last paper on the subject – De Broglie’s Matter-Wave: Concept and Issues, which gets much more attention on ResearchGate than on Phil Gibbs’ site (so it is more serious, perhaps) – are quite similar to the ones I try to answer in that very first paper: what is the true nature of the matter-wave? What is that fundamental oscillation?
I have been thinking about this for many years now, and I may never be able to give a definite answer to the question, but yesterday night some thoughts came to me that may or may not make sense. And so to be able to determine whether they might, I thought I should write them down. So that is what I am going to do here, and you should not take it very seriously. If anything, they may help you to find some answers for yourself. So if you feel like switching off because I am getting too philosophical, please do: I myself wonder how useful it is to try to interpret equations and, hence, to write about what I am going to write about here – so I do not mind at all if you do too!
That is too much already as an introduction, so let us get started. One of my more obvious reflections yesterday was this: the nature of the matter-wave is not gravitational, but it is an oscillation in space and in time. As such, we may think of it as a spacetime oscillation. In any case, physicists often talk about spacetime oscillations without any clear idea of what they actually mean by it, so we may as well try to clarify it in this very particular context here: the explanation of matter in terms of an oscillating pointlike charge. Indeed, the first obvious point to make is that any such perpetual motion may effectively be said to be a spacetime oscillation: it is an oscillation in space – and in time, right?
As such, a planet orbiting some star – think of the Earth orbiting our Sun – may be thought of as a spacetime oscillation too ! Am I joking? No, I am not. Let me elaborate this idea. The concept of a spacetime oscillation implies we think of space as something physical, as having an essence of sorts. We talk of a spacetime fabric, a (relativistic) aether or whatever other term comes to mind. The Wikipedia article on aether theories quotes Robert B. Laughlin as follows in this regard: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed [..] The word ‘ether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum.”
I disagree with that. I do not think about the vacuum in such terms: the vacuum is the Cartesian mathematical 3D space in which we imagine stuff to exist. We should not endow this mathematical space with any physical qualities – with some essence. Mathematical concepts are mathematical concepts only. It is the difference between size and distance. Size is physical: an electron – any physical object, really – has a size. But the distance between two points is a mathematical concept only.
The confusion arises from us expressing both in terms of the physical distance unit: a meter, or a pico- or femtometer – whatever is appropriate for the scale of the things that we are looking at. So it is the same thing when we talk about a point: we need to distinguish a physical point – think of our pointlike charge here – and a mathematical point. That should be the key to understanding matter-particles as spacetime oscillations – if we would want to understand them as such, that is – which is what we are trying to do here. So how should we think of this? Let us start with matter-particles. In our realist interpretation of physics, we think of matter-particles as consisting of charge – in contrast to, say, photons, the particles of light, which (also) carry energy but no charge. Let us consider the electron, because the structure of the proton is very different and may involve a different force: a strong force – as opposed to the electromagnetic force that we are so familiar with. Let me use an animated gif from the Wikipedia Commons repository to recapture the idea of such (two-dimensional) oscillation.
Think of the green dot as the pointlike charge: it is a physical point moving in a mathematical space – a simple 2D plane, in this case. So it goes from here to there, and here and there are two mathematical points only: points in the 3D Cartesian space which – as H.A. Lorentz pointed out when criticizing the new theories – is a notion without which we cannot imagine any idea in physics. So we have a spacetime oscillation here alright: an oscillation in space, and in time. Oscillations in space are always oscillations in time, obviously – because the idea of an oscillation implies the idea of motion, and the idea of motion always involves the notion of space as well as the notion of time. So what makes this spacetime oscillation different from, say, the Earth orbiting around the Sun?
Perhaps we should answer this question by pointing out the similarities first. A planet orbiting around the sun involves perpetual motion too: there is an interplay between kinetic and potential energy, both of which depend on the distance from the center. Indeed, Earth falls into the Sun, so to speak, and its kinetic energy gets converted into potential energy and vice versa. However, the centripetal force is gravitational, of course. The centripetal force on the pointlike charge is not: there is nothing at the center pulling it. But – Hey ! – what is pulling our planet, exactly? We do not believe in virtual gravitons traveling up and down between the Sun and the Earth, do we? So the analogy may not be so bad, after all ! It is just a very different force: its structure is different, and it acts on something different: a charge versus mass. That’s it. Nothing more. Nothing less.
Or… Well… Velocities are very different, of course, but even there distinctions are, perhaps, less clear-cut than they appear to be at first. The pointlike charge in our electron has no mass and, therefore, moves at lightspeed. The electron itself, however, acquires mass and, therefore, moves at a fraction of lightspeed only in an atomic or molecular orbital. And much slower in a perpetual current in superconducting material. [Yes. When thinking of electrons in the context of superconduction, we have an added complication: we should think of electron pairs (Cooper pairs) rather than individual electrons, it seems. We are not quite sure what to make of this – except to note electrons will also want to lower their energy by pairing up in atomic or molecular orbitals, and we think the nature of this pairing must, therefore, be the same.]
Did we clarify anything? Maybe. Maybe not. Saying that an electron is a pointlike charge and a two-dimensional oscillation, or saying that it’s a spacetime oscillation itself, appears to be a tautology here, right? Yes. You are right. So what’s the point, then?
We are not sure, except for one thing: when defining particles as spacetime oscillations, we definitely do not need the idea of virtual particles. That’s rubbish: an unnecessary multiplication of concepts. So I think that is some kind of progress we got out of these rather difficult philosophical reflections, and that is useful, I think. To illustrate this point, you may want to think of the concept of heat. When there is heat, there is no empty space. There is no vacuum anymore. When we heat a space, we fill it with photons. They bounce around and get absorbed and re-emitted all of the time. In fact, we, therefore, also need matter to imagine a heated space. Hence, space here is no longer the vacuum: it is full of energy, but this energy is always somewhere – and somewhere specifically: it is carried by a photon, or (temporarily) stored as an electron orbits around a nucleus in an excited state (which amounts to the same as saying it is being stored by an atom or some molecular structure consisting of atoms). In short, heat is energy, but it is being ‘transmitted’ or ‘transported’ through space by photons. Again, the point is that the vacuum itself should not be associated with energy: it is empty. It is a mathematical construct only.
We should try to think this through – even further than we already did – by thinking how photons – or radiation of heat – would disturb perpetual currents: in an atom, obviously (the electron orbitals), but also perpetual superconducting currents at the macro-scale: unless the added heat from the photons is continuously taken away by the supercooling helium or whatever is used, radiation or heat will literally bounce the electrons into a different physical trajectory, so we should effectively associate excited energy states with different patterns of motion: a different oscillation, in other words. So it looks like electrons – or electrons in atomic/molecular orbitals – do go from one state into another (excited) state and back again but, in whatever state they are, we should think of them as being in their own space (and time). So that is the nature of particles as spacetime oscillations then, I guess. Can we say anything more about it?
I am not sure. At this moment, I surely have nothing more to say about it. Some more thinking about how superconduction – at the macro-scale – might actually work could, perhaps, shed more light on it: is there an energy transfer between the two electrons in a Cooper pair? An interplay between kinetic and potential energy? Perhaps the two electrons behave like coupled pendulums? If they do, then we need to answer the question: how, exactly? Is there an exchange of (real) photons, or is the magic of the force the same: some weird interaction in spacetime which we can no further meaningfully analyze, but which not only gives space some physicality but also causes us to think of it as being discrete, somehow. Indeed, an electron is an electron: it is a whole. Thinking of it as a pointlike charge in perpetual motion does not make it less of a whole. Likewise, an electron in an atomic orbital is a whole as well: it just occupies more space. But both are particles: they have a size. They are no longer pointlike: they occupy a measurable space: the Cartesian (continuous) mathematical space becomes (discrete) physical space.
I need to add another idea here – or another question for you, if I may. If superconduction can only occur when electrons pair up, then we should probably think of the pairs as some unit too – and a unit that may take up a rather large space. Hence, the idea of a discrete, pointlike, particle becomes somewhat blurred, right? Or, at the very least, it becomes somewhat less absolute, doesn’t it? 🙂
I guess I am getting lost in words here, which is probably worse than getting ‘lost in math‘ (I am just paraphrasing Sabine Hossenfelder here) but, yes, that is why I am writing a blog post rather than a paper here. If you want equations, read my papers. 🙂 Oh – And don’t forget: fields are real as well. They may be relative, but they are real. And it’s not because they are quantized (think of (magnetic) flux quantization in the context of superconductivity, for example) that they are necessarily discrete – that we have field packets, so to speak. I should do a blog post on that. I will. Give me some time. 🙂
Post scriptum: What I wrote above on there not being any exchange of gravitons between an orbiting planet and its central star (or between double stars or whatever gravitational trajectories out there), does not imply I am ruling out their existence. I am a firm believer in the existence of gravitational waves, in fact. We should all be firm believers because – apart from some marginal critics still wondering what was actually being measured – the LIGO detections are real. However, whether or not these waves involve discrete lightlike particles – like photons and, in the case of the strong force, neutrinos – is a very different question. Do I have an opinion on it? I sure do. It is this: when matter gets destroyed or created (remember the LIGO detections involved the creation and/or destruction of matter as black holes merge), gravitational waves must carry some of the energy, and there is no reason to assume that the Planck-Einstein relation would not apply. Hence, we will have energy packets in the gravitational wave as well: the equivalent of photons (and, most probably, of neutrinos), in other words. All of this is, obviously, very speculative. Again, just think of this whole blog post as me freewheeling: the objective is, quite simply, to make you think as hard as I do about these matters. 🙂
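To put a (very speculative) number on that last thought: if the Planck-Einstein relation E = h·f did apply to gravitational radiation, a quantum at a typical LIGO-band frequency of roughly 100 Hz (an illustrative figure) would be absurdly soft compared to a visible-light photon.

```python
# Speculative sketch: applying the Planck-Einstein relation E = h*f to
# gravitational radiation at a LIGO-band frequency (~100 Hz, illustrative).
h = 6.62607015e-34    # Planck's constant, J*s

E_gw_quantum = h * 100.0          # ~6.6e-32 J
E_visible_photon = h * 5.0e14     # a visible-light photon, for comparison

assert 6e-32 < E_gw_quantum < 7e-32
assert E_visible_photon / E_gw_quantum > 1e12   # absurdly soft energy packets
```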
As for my remark on the Cooper pairs being a unit or not, that question may be answered by thinking about what happens if Cooper pairs are broken, which is a topic I am not familiar with, so I cannot say anything about it.
I’ve been asked a couple of times: “What about Bell’s No-Go Theorem, which tells us there are no hidden variables that can explain quantum-mechanical interference in some kind of classical way?” My answer to that question is quite arrogant, because it’s the answer Albert Einstein would give when younger physicists would point out that his objections to quantum mechanics (which he usually expressed as some new thought experiment) violated this or that axiom or theorem in quantum mechanics: “Das ist mir wur(sch)t.”
In English: I don’t care. Einstein never lost the discussions with Heisenberg or Bohr: he just got tired of them. Like Einstein, I don’t care either – because Bell’s Theorem is what it is: a mathematical theorem. Hence, it respects the GIGO principle: garbage in, garbage out. In fact, John Stewart Bell himself – one of the third-generation physicists, we may say – had always hoped that some “radical conceptual renewal” might disprove his conclusions. We should also remember Bell kept exploring alternative theories – including Bohm’s pilot wave theory, which is a hidden variables theory – until his death at a relatively young age. [J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics. He was just 62 years old then.]
So I never really explored Bell’s Theorem. I was, therefore, very happy to get an email from Gerard van der Ham, who seems to have the necessary courage and perseverance to research this question in much more depth and, yes, relate it to a (local) realist interpretation of quantum mechanics. I actually still need to study his papers, and analyze the YouTube video he made (which looks much more professional than my videos), but this is promising.
To be frank, I got tired of all of these discussions – just like Einstein, I guess. The difference between realist interpretations of quantum mechanics and the Copenhagen dogmas often amounts to a factor of 2 or π in the formulas, and Richard Feynman famously said we should not care about such factors (Feynman’s Lectures, III-2-4). Modern physicists fudge them away consistently. They have done much worse than that, actually: they are not interested in truth. Convention, dogma, indoctrination – non-scientific, historical stuff – seem to keep them from it. And modern science gurus – the likes of Sean Carroll or Sabine Hossenfelder – play the age-old game of being interesting: they pretend to know something you do not know or – if they don’t – that they are close to getting the answers. They are not: the answers are there already. They just don’t want to tell you that because, yes, it would be the end of physics.
See: John Stewart Bell, Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, 1987, pp. 169–172.
There is an army of physicists out there – still – trying to convince you there is some mystery that needs explaining. They are wrong: quantum-mechanical weirdness is weird, but it is no mystery. We have a decent interpretation of what quantum-mechanical equations – such as Schrödinger’s equation, for example – actually mean. We can also understand what photons, electrons, or protons – light and matter – actually are, and such understanding can be expressed in terms of 3D space, time, force, and charge: elementary concepts that feel familiar to us. There is no mystery left.
Unfortunately, physicists have completely lost it: they have multiplied concepts and produced a confusing and utterly unconvincing picture of the essence of the Universe. They promoted weird mathematical concepts – the quark hypothesis is just one example among others – and gave them some kind of reality status. The Nobel Prize Committee then played the role of the Vatican by canonizing the newfound religion.
It is a sad state of affairs, because we are surrounded by too many lies already: the ads and political slogans that shout in our face as soon as we log on to Facebook to see what our friends are up to, or to YouTube to watch something or – what I often do – listen to the healing sounds of music.
The language and vocabulary of physics are complete. Does it make us happier beings? It should, shouldn’t it? I am happy I understand. I find consciousness fascinating – self-consciousness even more – but not because I think it is rooted in mystery. No. Consciousness arises from the self-organization of matter: order arising from chaos. It is a most remarkable thing – and it happens at all levels: atoms in molecules, molecules forming cellular systems, cellular systems forming biological systems. We are a biological system which, in turn, is part of much larger systems: biological, ecological – material systems. There is no God talking to us. We are on our own, and we must make the best out of it. We have everything, and we know everything.
Sadly, most people do not realize this.
Post scriptum: With the end of physics comes the end of technology as well, doesn’t it? All of the advanced technologies in use today are effectively already described in Feynman’s Lectures on Physics, which were written and published in the first half of the 1960s.
So it is all there. I was born in 1969, when Man first walked on the Moon. CERN and other spectacular research projects have since been established, but, when one is brutally honest, one has to admit these experiments have not added anything significant – neither to the knowledge nor to the technology base of humankind (and, yes, I know your first instinct is to disagree with that, but that is because study or the media indoctrinated you that way). It is a rather strange thought, but I think it is essentially correct. Most scientists, experts and commentators are trying to uphold a totally fake illusion of progress.
Pre-scriptum: For those who do not like to read, I produced a very short YouTube presentation/video on this topic. About 15 minutes – same time as it will take you to read this post, probably. Check it out: https://www.youtube.com/watch?v=sJxAh_uCNjs.
We think of space and time as fundamental categories of the mind. And they are, but only in the sense that the famous Dutch physicist H.A. Lorentz conveyed to us: we do not seem to be able to conceive of any idea in physics without these two notions. However, relativity theory tells us these two concepts are not absolute and we may, therefore, say they cannot be truly fundamental. Only Nature’s constants – the speed of light, or Planck’s quantum of action – are absolute: these constants seem to mix space and time into something that is, apparently, more fundamental.
The speed of light (c) combines the physical dimensions of space and time, and Planck’s quantum of action (h) adds the idea of a force. But time, distance, and force are all relative. Energy (force times distance) and momentum (force times time) are, therefore, also relative. In contrast, the speed of light and Planck’s quantum of action are absolute. So we should think of distance and time as some kind of projection of a deeper reality: the reality of light or – in the case of Planck’s quantum of action – the reality of an electron or a proton. In contrast, time, distance, force, energy, momentum and whatever other concept we derive from them exist in our mind only.
We should add another point here. To imagine the reality of an electron or a proton (or the idea of an elementary particle, you might say), we need an additional concept: the concept of charge. The elementary charge (e) is, effectively, a third idea (or category of the mind, one might say) without which we cannot imagine Nature. The ideas of charge and force are, of course, closely related: a force acts on a charge, and a charge is that upon which a force acts. So we cannot think of charge without thinking of force, and vice versa. But, as mentioned above, the concept of force is relative: it incorporates the ideas of time and distance (a force is what accelerates a charge). In contrast, the idea of the elementary charge is absolute again: it does not depend on our frame of reference.
So we have three fundamental concepts: (1) velocity (or motion, you might say: a ratio of distance and time); (2) (physical) action (force times distance times time); and (3) charge. We measure them in three fundamental units: c, h, and e. Che. 🙂 So that’s reality, then: all of the metaphysics of physics is here, in three letters. We need three concepts: three things that we think of as being real, somehow – real in the sense that we do not think they exist in our mind only. Light is real, and elementary particles are equally real. All other concepts exist in our mind only.
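Those dimensional claims are easy to verify with a little bookkeeping. The sketch below (the helper name is mine, not standard) tracks physical dimensions as exponent tuples over the SI base units kg, m, s and A:

```python
# Track physical dimensions as (kg, m, s, A) exponent tuples.
# Multiplying two quantities means adding their exponents position-wise.
def dim_mul(*dims):
    return tuple(sum(exps) for exps in zip(*dims))

FORCE    = (1, 1, -2, 0)   # newton = kg·m/s²
DISTANCE = (0, 1,  0, 0)   # metre
TIME     = (0, 0,  1, 0)   # second

# Action (the dimension of h) = force × distance × time = kg·m²/s = J·s
assert dim_mul(FORCE, DISTANCE, TIME) == (1, 2, -1, 0)

# Likewise: energy = force × distance (joule), momentum = force × time (kg·m/s)
assert dim_mul(FORCE, DISTANCE) == (1, 2, -2, 0)
assert dim_mul(FORCE, TIME)     == (1, 1, -1, 0)
```

The assertions all pass: h does indeed carry the dimension of force times distance times time, which is why fixing h (together with c and e) in the 2019 SI revision pins down so much of the rest.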
So were Kant’s ideas about space and time wrong? Maybe. Maybe not. If they are wrong, then that’s quite OK: Immanuel Kant lived in the 18th century, and had not ventured much beyond the place where he was born. Less exciting times. I think he was basically right in saying that space and time exist in our mind only. But he had no answer(s) to the question as to what is real: if some things exist in our mind only, something must exist in what is not our mind, right? So that is what we refer to as reality then: that which does not exist in our mind only.
Modern physics has the answers. The philosophy curriculum at universities should, therefore, adapt to modern times: Maxwell first derived the (absolute) speed of light in 1862, and Einstein published the (special) theory of relativity back in 1905. Hence, philosophers are 100-150 years behind the curve. They are probably even behind the general public. Philosophers should learn about modern physics as part of their studies so they can (also) think about real things rather than mental constructs only.
Our alternative realist interpretation of quantum physics is pretty complete, but one thing that has been puzzling us is the mass density of a proton: why is it so massive as compared to an electron? We simplified things by adding a factor in the Planck-Einstein relation. To be precise, we wrote it as E = 4·h·f. This allowed us to derive the proton radius from the ring current model: r = 4·ħ/(m·c) ≈ 0.84 fm.
This felt a bit artificial. Writing the Planck-Einstein relation using an integer multiple of h or ħ (E = n·h·f = n·ħ·ω) is not uncommon: you should have encountered this relation when studying the black-body problem, for example, and it is also commonly used in the context of Bohr orbitals of electrons. But why would n be equal to 4 here? Why not 2, or 3, or 5, or some other integer? We do not know: all we know is that the proton is very different. A proton is, effectively, not the antimatter counterpart of an electron – a positron. While the proton is much smaller – 459 times smaller, to be precise – its mass is 1,836 times that of the electron. Note that we have the same 1/4 factor here because the mass and Compton radius are inversely proportional: r_e/r_p = (ħ/m_e·c)/(4·ħ/m_p·c) = (1/4)·(m_p/m_e) ≈ 1,836/4 ≈ 459.
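The numbers are easy to check. The sketch below uses CODATA/SI constants; the E = 4·h·f factor is, of course, our own assumption:

```python
import math

h    = 6.62607015e-34       # Planck constant (J·s, exact since the 2019 SI revision)
hbar = h / (2 * math.pi)
c    = 299792458.0          # speed of light (m/s, exact)
m_e  = 9.1093837015e-31     # electron mass (kg, CODATA)
m_p  = 1.67262192369e-27    # proton mass (kg, CODATA)

r_e = hbar / (m_e * c)      # electron Compton radius: about 0.386 pm
r_p = 4 * hbar / (m_p * c)  # proton radius under E = 4·h·f: about 0.84 fm

print(m_p / m_e)            # ≈ 1836.15
print(r_e / r_p)            # ≈ 459
```

The computed r_p of about 0.841 fm sits right on the measured proton charge radius, and the 459 radius ratio is just the 1,836 mass ratio divided by 4.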
This doesn’t look all that bad, but it feels artificial. In addition, our reasoning involved an unexplained difference – a mysterious but exact SQRT(2) factor, to be precise – between the theoretical and the experimentally measured magnetic moment of a proton. In short, we assumed some form factor must explain both the extraordinary mass density as well as this SQRT(2) factor, but we were not quite able to pin it down exactly. A remark on a video on our YouTube channel inspired us to think some more – thank you for that, Andy! – and we think we may have the answer now.
We now think the mass – or energy – of a proton combines two oscillations: one is the Zitterbewegung oscillation of the pointlike charge (which is a circular oscillation in a plane), while the other is an oscillation of the plane itself. The illustration below is a bit horrendous (I am not so good at drawings) but might help you to get the point. The plane of the Zitterbewegung (the plane of the proton ring current, in other words) may itself oscillate between +90 and −90 degrees. If so, the effective magnetic moment will differ from the theoretical magnetic moment we calculated, and it will differ by that SQRT(2) factor.
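To see where that factor sits numerically, here is a quick sketch. The ring-current magnetic moment is μ = I·A = (q·c/2π·a)·(π·a²) = q·c·a/2; the radius a = 4·ħ/(m·c) is our own assumption, and identifying the resulting ratio (which comes out near 1.43) with SQRT(2) ≈ 1.414 is our reading of it:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant (J·s)
c    = 299792458.0          # speed of light (m/s)
q    = 1.602176634e-19      # elementary charge (C, exact)
m_p  = 1.67262192369e-27    # proton mass (kg, CODATA)
mu_p = 1.41060679736e-26    # measured proton magnetic moment (J/T, CODATA)

a = 4 * hbar / (m_p * c)    # presumed ring radius (about 0.84 fm)
mu_theory = q * c * a / 2   # μ = I·A with I = q·c/(2π·a) and A = π·a²

print(mu_theory / mu_p)     # ≈ 1.43, in the neighborhood of √2 ≈ 1.414
```

So the theoretical moment overshoots the measured one by roughly that factor, which is what the oscillating-plane idea is meant to absorb.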
Hence, we should rewrite our paper, but the logic remains the same: we just have a much better explanation now of why we should apply the energy equipartition theorem.
Mystery solved! 🙂
Post scriptum (9 August 2020): The solution is not as simple as you may imagine. When adding some other motion to the ring current, we must remember that the speed of light – the presumed tangential speed of our pointlike charge – cannot change. Hence, the radius must become smaller. We also need to think about distinguishing two different frequencies, and things quickly become quite complicated.
Perhaps I should have titled this post differently: the physicist’s worldview. We may, effectively, assume that Richard Feynman’s Lectures on Physics represent mainstream sentiment, and he does get into philosophy – more or less liberally, depending on the topic. Hence, yes, Feynman’s worldview is pretty much that of most physicists, I would think. So what is it? One of his more succinct statements is this:
“Often, people in some unjustified fear of physics say you cannot write an equation for life. Well, perhaps we can. As a matter of fact, we very possibly already have an equation to a sufficient approximation when we write the equation of quantum mechanics.” (Feynman’s Lectures, p. II-41-11)
He then jots down that equation which Schrödinger has on his grave (shown below). It is a differential equation: it relates the wavefunction (ψ) to its time derivative through the Hamiltonian coefficients that describe how physical states change with time (Hij), the imaginary unit (i) and Planck’s quantum of action (ħ).
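For reference, the equation in question – with the Hamiltonian coefficients H_ij just mentioned – reads:

```latex
i\hbar \frac{\partial \psi}{\partial t} = H\psi
\qquad \text{or, written out per base state:} \qquad
i\hbar \frac{dC_i}{dt} = \sum_j H_{ij}\, C_j
```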
Feynman, and all modern academic physicists in his wake, claim this equation cannot be understood. I don’t agree: the explanation is not easy, and requires quite some prerequisites, but it is not any more difficult than, say, trying to understand Maxwell’s equations, or the Planck-Einstein relation (E = ħ·ω = h·f).
In fact, a good understanding of both allows you to not only understand Schrödinger’s equation but all of quantum physics. The basics are this: the presence of the imaginary unit tells us the wavefunction is cyclical, and that it is an oscillation in two dimensions. The presence of Planck’s quantum of action in this equation tells us that such oscillation comes in units of ħ. Schrödinger’s wave equation as a whole is, therefore, nothing but a succinct representation of the energy conservation principle. Hence, we can understand it.
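One quick way to see the energy-conservation reading: substitute a state of definite energy E into the equation, and the Hamiltonian is seen to return just that (conserved) energy, cycle after cycle:

```latex
\psi = a\, e^{-i(E/\hbar)t}
\quad \Longrightarrow \quad
i\hbar \frac{\partial \psi}{\partial t} = E\,\psi = H\,\psi
```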
At the same time, we cannot, of course. We can only grasp it to some extent. Indeed, Feynman concludes his philosophical remarks as follows:
“The next great era of awakening of human intellect may well produce a method of understanding the qualitative content of equations. Today we cannot. Today we cannot see that the water flow equations contain such things as the barber pole structure of turbulence that one sees between rotating cylinders. We cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality—or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.” (Feynman’s Lectures, p. II-41-12)
I think that puts the matter to rest—for the time being, at least. 🙂
I thought I would no longer post stuff here, but I see this site still gets a lot more traffic than the new one, so I will make an exception and cross-post an announcement of a new video on my YouTube channel. Indeed, yesterday I was scheduled to talk for about 30 minutes to some students who are looking at classical electron models as part of an attempt to model what might be happening to an electron moving through a magnetic field. Of course, I only had time to discuss the ring current model, and even then it inadvertently turned into a two-hour presentation. Fortunately, they were polite and no one dropped out – although it was an online Google Meet. In fact, they reacted quite enthusiastically, and so we all enjoyed it a lot. So much so that I adjusted the presentation a bit the next morning (which, unfortunately, added even more time to it) and published it online. So this is the link to it, and I hope you enjoy it. If so, please like it – and share it! 🙂
Oh! Forgot to mention: in case you wonder why this video is different from the others, see my Tweet on Sean Carroll’s latest series of videos below. That should explain it. 🙂
Post scriptum: I got the usual question from one of the students, of course: if an electron is a ring current, then why doesn’t it radiate its energy away? The easy answer is: an electron is an electron and so it doesn’t—for the same reason that an electron in an atomic orbital or a Cooper pair in a superconducting loop of current does not radiate energy away. The more difficult answer is a bit mysterious: it has got to do with flux quantization and, most importantly, with the Planck-Einstein relation. I will not be too explicit here (it is just a footnote) but the following elements should be noted:
1. The Planck-Einstein law embodies a (stable) wavicle: a wavicle respects the Planck-Einstein relation (E = h·f) as well as Einstein’s mass-energy equivalence relation (E = m·c²). A wavicle will, therefore, carry energy, but it will also pack one or more units of Planck’s quantum of action. Both the energy as well as this finite amount of physical action (Wirkung in German) will be conserved – cycle after cycle.
2. Hence, equilibrium states should be thought of as electromagnetic oscillations without friction. Indeed, it is the frictional element that explains the radiation of, say, an electron going up and down in an antenna and radiating some electromagnetic signal out. To add to this rather intuitive explanation, I should also remind you that it is the accelerations and decelerations of the electric charge in an antenna that generate the radio wave – not the motion as such. So one should, perhaps, think of a charge going round and round as if it were moving in a straight line – along some geodesic in its own space. That’s the metaphor, at least.
3. Technically, one needs to think in terms of quantized fluxes and Poynting vectors, and of energy transfers from kinetic to potential (and back), and from ‘electric’ to ‘magnetic’ (and back). In short, the electron really is an electromagnetic perpetuum mobile! I know that sounds mystical (too), but then I never said I would take all of the mystery away from quantum physics! 🙂 If there were no mystery left, I would not be interested in physics. On the quantization of flux in superconducting loops, see, for example: http://electron6.phys.utk.edu/qm2/modules/m5-6/flux.htm. There is other stuff you may want to dig into too, like my alternative Principles of Physics, of course! 🙂
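As a numerical footnote to point 1: plugging the electron’s rest energy into the Planck-Einstein relation – under the ring-current assumptions of this post – yields a frequency, a radius, a current and a magnetic moment, and the last of these lands right on the Bohr magneton:

```python
import math

h    = 6.62607015e-34       # Planck constant (J·s, exact)
hbar = h / (2 * math.pi)
c    = 299792458.0          # speed of light (m/s, exact)
q    = 1.602176634e-19      # elementary charge (C, exact)
m_e  = 9.1093837015e-31     # electron mass (kg, CODATA)

E  = m_e * c**2             # rest energy: about 8.19e-14 J (511 keV)
f  = E / h                  # Zitterbewegung frequency: about 1.236e20 Hz
a  = hbar / (m_e * c)       # Compton radius: about 0.386 pm
I  = q * f                  # the ring current: about 19.8 A (a macroscopic current!)
mu = I * math.pi * a**2     # magnetic moment: about 9.274e-24 J/T – the Bohr magneton
```

That the current through the loop works out to almost 20 ampère is, perhaps, the most startling number in the whole model.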
I am done with reading Feynman and commenting on it—especially because this site just got mutilated by the third DMCA takedown of material (see below). Follow me to my new blog. No Richard Feynman, Mr. Gottlieb or DMCA there! Pure logic only. This site has served its purpose, and that is to highlight the Rotten State of QED. 🙂
A long time ago – in 1996, to be precise – I studied Wittgenstein’s TLP as part of a part-time BPhil degree program. At the time, I did not like it. The lecture notes were two or three times the volume of the work itself, and I got pretty poor marks for it. I guess one has to go through life to get an idea of what he was writing about. With all of the nonsense lately, I thought about one of the lines in that little book: “One must, so to speak, throw away the ladder after he has climbed up it. One must transcend the propositions, and then he will see the world aright.” (TLP, 6.54)
For Mr. Gottlieb and other narrow-minded zealots and mystery wallahs – who would not be interested in Wittgenstein anyway – I’ll just quote Wittgenstein’s quote of Ferdinand Kürnberger:
“. . . und alles, was man weiss, nicht bloss rauschen und brausen gehört hat, lässt sich in drei Worten sagen.“
I will let you google-translate that and, yes, sign off here—in the spirit of Ludwig Boltzmann and Paul Ehrenfest. [Sorry for being too lengthy or verbose here.]
“Bring forward what is true. Write it so that it is clear. Defend it to your last breath.” (Boltzmann)
Jun 20, 2020, 4:30 PM UTC
We’ve received the DMCA takedown notice below regarding material published on your WordPress.com site, which means the complainant is asserting ownership of this material and claiming that your use of it is not permitted by them or the law. As required by the DMCA, we have disabled public access to the material.
Repeated incidents of copyright infringement will also lead to the permanent suspension of your WordPress.com site. We certainly don’t want that to happen, so please delete any other material you may have uploaded for which you don’t have the necessary rights and refrain from uploading additional material that you do not have permission to upload. Although we can’t provide legal advice, these resources might help you make this determination:
If you believe that this DMCA takedown notice was received in error, or if you believe your usage of this material would be considered fair use, it’s important that you submit a formal DMCA counter notice to ensure that your WordPress.com site remains operational. If you submit a valid counter notice, we will return the material to your site in 10 business days if the complainant does not reply with legal action.
Please refer to the following pages for more information:
Please note that republishing the material yourself, without permission from the copyright holder (even after you have submitted a counter notice) will result in the permanent suspension of your WordPress.com site and/or account.
Well… Thank you, WordPress. I guess you’ll first suspend the site and then the account? I hope you’ll give me some time to create another account, at least? If not, this spacetime rebel will have to find another host for his site. 🙂