When I wrote my first Post Scriptum in November last year, I thought it would be my last blog post here – but the stats keep going up. They are good enough here on WordPress, and even better on ResearchGate: a 160+ score now and still rising fast. I am still a 'top 1% climber' – despite having published nothing for a year now – which got me into the top 30% bracket of RG researchers in less than two years. And, while it is far from going viral, a further rise now looks somewhat inevitable.
It clearly shows that I am not mad and that you are reading serious physics here – but without the usual hocus-pocus and 'mystery' that leaves so many young and not-so-young people disgusted. I repeat: there are no serious puzzles left in physics. All that is being done now is to further work out the consequences of the fundamental laws of physics that were written down about a hundred years ago (de Broglie wrote his thesis in 1924, so that centenary is almost upon us). For those who seek to simplify further by resorting to some kind of 'meta-symbolism' or an even more 'holistic' perspective (whatever that might mean), I think the exchange below (from my ResearchGate account) might be useful. For the rest, I have nothing to add anymore. It is all there! 🙂
M (7 days ago): Dear JL – I was amazed to find your piece on the jitter-bugging phenomena [sic] (not hypothesis). I think you may find my more holistic perspective useful in fine-tuning your work. I hope you agree, and I would love to collaborate. After all, as far as I know, your work is the first substantive effort in nearly 60 years+ (in this very fertile direction). Cheers, etc. ~ M
M (7 days ago): Dear JL – Bravo!!! I just saw the abstract of your paper on conserving the enthusiasm of young people afflicted by modern SM-QM nonsense, dogma, etc. I am now even more motivated to have your help reviewing, editing, and developing my next-gen ontology of the cosmos. Cheers ~ M
My rapid-fire answers (yesterday and today):
Txs man! This developed partly because (1) I had too much time on my hands (a difficult past five years as I came back from abroad, my mom and bro died from cancer, and I had to go through cancer surgery myself) and (2) I was helping my son get through his exams on quantum physics as part of his engineering studies (he is just as much of a rebel as me and (also) wanted more 'common-sense' explanations). The 'orbital' or 'circular' motion concept for interpreting de Broglie's wavefunction (orbital frequencies instead of linear ones) is the key to everything. 🙂 No magic. 🙂 Charge and motion are the only concepts that are real. 🙂 There is no copyright on what I produced (a lot of it just builds further on strands the Old Greats (including Schroedinger himself) had in mind), so feel free to use it and develop it further. My blog post on Paul Ehrenfest's suicide is probably still the most 'accessible' introduction to it all. It is also tragic – as tragic (or more so, probably) as Dirac's depression when he sort of 'turned his back' on the young wolves he used to support – but still… https://readingfeynman.org/2020/05/27/ehrenfest-and-other-tragedies-in-physics/
I also did some YouTube videos to 'market' it all – but there is only so much one can do. It is a weird situation. APS, WSP and even Springer Verlag wanted to do something with me, but they all backed off in the end. Fortunately, I do not suffer from much ego (one advantage of my experience in war-torn countries such as Afghanistan and Ukraine (March)) – so I take everything lightly. My "Post Scriptum" to my papers – https://www.researchgate.net/publication/356556508_Post_Scriptum – takes only 15 minutes to read and guides the reader through all of the material. Have fun with it! Life is short. I know – having come out clean from cancer (unlike my mom and my bro), so every day is a perfect day now. As for my day job: https://www.linkedin.com/in/jean-louis-van-belle-85b74b7a/
As for the formalism you are introducing, I would recommend close(r) study of: (1) https://en.wikipedia.org/wiki/Geometrodynamics : my physics is a 'mass without mass' approach – but I do not believe charge can be further reduced (we need the concept to distinguish between matter and antimatter, for example – geometry does not suffice to explain all degrees of freedom there); (2) the failure of Wittgenstein's formalism – as he admitted himself in what is commonly referred to as 'Wittgenstein II' (nothing more than some of his comments, in letters, on his little booklet). I studied Wittgenstein as part of my philosophy studies and I am not too impressed. I feel we need a bit of 'common' language to add nuance and meaning to the mathematical symbols. Without the ambiguity in them, they do not mean all that much to me. Also see: https://en.wikipedia.org/wiki/Ordinary_language_philosophy
To add: I also believe step (3) of the geometrodynamics program is not possible. We can do without the mass concept (though it remains useful in higher-level physics), but not without charge or fields. Charge and field are not further reducible. The last slide of my 'philosophy and physics' presentation on YouTube shows the fundamental 'categories' I believe in (categories in an Aristotelian sense). These concepts can be either 'relative' or 'absolute' (not relative, that is, in the sense of (special/general) relativity theory). https://www.youtube.com/watch?v=sJxAh_uCNjs&t=16s
One more thing: despite my criticism of 'Wittgenstein-like' formalism, the first statement of his Tractatus should obviously be the point of departure of any 'metaphysics' or epistemology: 1.1 Die Welt ist die Gesamtheit der Tatsachen, nicht der Dinge ("The world is the totality of facts, not of things"). Perhaps it is the only thing we can seriously say about 'the world' or 'reality'. It serves as a 'good enough' definition for me, in any case. 🙂
I made a start with annotating all of my papers. I will arrange the annotations in a paper of their own: working paper no. 30 on ResearchGate. I will date it 6 December when finished, in honor of one of my brothers who died on that day, from a cancer that visited me too. Jean-Claude was his name. He was a great guy. I miss him, and sometimes feel guilty for having survived. Hereunder follows the first draft – a sort of preview for those who like this blog and have encouraged me to go on.
The 29 papers which I published on ResearchGate end a long period of personal research, which started in earnest when I sent my very first paper, as a young student in applied economics and philosophy, to the 1995 'Einstein meets Magritte' conference in Brussels. I no longer have that paper, but I remember it vehemently defended the point of view that the 'uncertainty' as modeled in the Uncertainty Principle must be some kind of statistical determinism: what else can it be? Paraphrasing the words of H.A. Lorentz at the 1927 Solvay Conference, a few months before his death: there is, effectively, no need to elevate indeterminism to a philosophical principle; determinism has to be kept as 'an object of faith'. That is what science is all about. All that is needed is to replace our notion of predictability by the notion of statistical determinism: we can no longer predict what is going to happen, because we cannot or do not know the initial conditions, or because our measurement disturbs the phenomenon we are analyzing – but that is it. There is nothing more to it. That is what Heisenberg's now rather infamous Uncertainty Principle is all about: it is just what he originally thought about it himself.
I found the metaphor of a fast-rotating airplane propeller a very apt one, and several people who wrote to me also said it made them see what it was all about. One cannot say where the blades are, exactly, and if you were to shoot bullets through it, those bullets would either hit a blade and be deflected or, quite simply, go straight through. There is no third possibility. We can only describe the moving propeller in terms of some density in space. This is why the probabilities in quantum physics are proportional to mass densities or, what amounts to the same because of Einstein's mass-energy equivalence relation, energy densities.
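The density argument can be made quantitative with a toy simulation (my own sketch here, not something from the papers): a fast, uniformly spinning propeller is equivalent to a random blade phase at the moment each bullet arrives, so the chance of a hit is simply the fraction of the circle the blades cover – a 'density', and nothing more.

```python
import math
import random

def hit_probability(n_blades, blade_width_rad, n_bullets=100_000, seed=42):
    """Monte Carlo estimate of the chance that a bullet fired through a
    fast, uniformly spinning propeller hits a blade. Because the rotation
    is fast and uniform, the blade phase at the moment of arrival is
    effectively a uniform random angle."""
    rng = random.Random(seed)
    sector = 2.0 * math.pi / n_blades  # each sector holds one blade
    hits = 0
    for _ in range(n_bullets):
        # bullet's angular position relative to the random propeller phase
        angle = rng.uniform(0.0, 2.0 * math.pi)
        if (angle % sector) < blade_width_rad:
            hits += 1
    return hits / n_bullets

# A two-blade propeller, each blade covering 0.3 rad: the 'density'
# prediction is 2 * 0.3 / (2*pi), roughly 0.095.
p = hit_probability(n_blades=2, blade_width_rad=0.3)
```

The simulated frequency converges on the analytic value, which is the whole point of the metaphor: the deflection statistics only reflect how much of space the moving blades occupy on average.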
The propeller metaphor is useful in other contexts too. It explains quantum-mechanical tunneling, for example. If one thinks of matter-particles as pointlike charges in motion – which is what we do – then the fields that surround them will be dynamic and, therefore, be like a propeller too: at one particular point in space and time, the field will have a magnitude and a direction that will not allow another particle (think of it as a bullet) to get through, because the field acts as a force on the charge. However, 'holes appear in the wall', so to speak, and they do so in a regular fashion, and then the incoming particle's kinetic energy – while lower than the average potential energy of the barrier – will carry it through. There is, therefore, nothing weird or mysterious about tunneling.
Many more examples could be mentioned, but then I would be rewriting my papers, and that is not the purpose of this one, which is to conclude my research by revisiting and commenting on the rather vast mass of paper I produced previously: 29 papers in just one year (April 2020 – April 2021). These papers did not bring me fame, but they did generate enough of a readership to produce a decent RG score – as evidenced below (sorry if this looks egotistical: it is not meant that way).
I have effectively been ridiculed by family, friends and – sadly – quite a few fellow searchers for truth. But I have also been encouraged, and I prefer to remember the encouragements. One of my blog posts is about the suicide of Paul Ehrenfest and other personal tragedies in the history of physics. It notes a remark from a former diplomat-friend of mine: "It is good you are studying physics only as a pastime. Professional physicists are often troubled people—miserable."
I found it an interesting observation from a highly intelligent outsider who, as a diplomat, meets many people with very different backgrounds. I do understand this strange need to probe things at the deepest level—to be able to explain what might or might not be the case (I am using Wittgenstein's definition of reality here). I also note that all of the founding fathers of quantum mechanics ended up becoming pretty skeptical about the theory they had created. Even John Stewart Bell – one of the more famous figures in what may be referred to as the third generation of quantum physicists – did not like his own 'No Go Theorem' and thought that some "radical conceptual renewal" might disprove his conclusions.
It sounds arrogant, but I think my papers are representative of such renewal. It is, as great thinkers in the past would have said, an idea whose time has come. Einstein's 'unfinished revolution' – as Lee Smolin calls it – was finished quite a while ago, but mainstream researchers just refuse to accept that. And those researchers who think quantum physicists are 'lost in math' are right but, unfortunately, usually make no effort to speak up and show the rather obvious way out. Sabine Hossenfelder uses as much guru-like talk as a Sean Carroll.
In May this year, after finishing what I thought of as my last paper on quantum physics, I went to hospital for surgery. Last year, one of my brothers died from prostate cancer at a rather young age: 56, my age bracket. He had been diagnosed but opted for a more experimental treatment instead of the usual surgery, because the consequences of that surgery are effectively very unpleasant and take a lot of joy out of life. I spent a week in a hospital bed, and then a month in my bed at home. I stopped writing. I gave up other things too: I stopped doing sports, and picked up smoking instead. It is a bad habit: Einstein was a smoker and – like me – did not drink, but smoking is bad for one's health. I feel it. I will quit smoking too, one day – but not now.
The point is: after a long break (more than six months), I did start to engage again in a few conversations, and I also looked at my 29 papers on my ResearchGate page again, and I realized some of them should really be rewritten or repackaged so as to ensure a good flow. I also see now that some of the approaches were more productive than others (some did not lead anywhere at all, actually), and I felt I should point those out. There are some errors in logic here and there too (small ones, I think, but errors nevertheless), and quite a few typos. Hence, I thought I should, perhaps, produce an annotated version of these papers, with comments and corrections as mark-ups. Rewriting or restructuring all of them would require too much work, so I do not want to go there.
So that is what this paper is about: I printed all of the papers, and I will quickly jot down some remarks so as to guide the reader through the package, and alert them to things I thought of as good stuff at the time (otherwise I would not have written about them) but that I do not think of as so great now.
Before I do so, I should probably make a few general remarks. Let me separate those out in yet another introductory section of this paper.
1. The first remark is that I repeat a few things quite a lot – across and within these papers. Too much, perhaps. However, there is one thing I just cannot repeat enough: one should not think of the matter-wave as something linear. It is an orbital oscillation. This is really where the Old Great Men went wrong. The paper that has been downloaded the most is, effectively, the one on what I refer to as de Broglie's mistake: the intuition of the young Louis de Broglie that an electron has a frequency was a stroke of genius (and, fortunately, Einstein immediately saw this, so he could bring this young scientist to everyone else's attention), but this frequency is an orbital frequency. That, I repeat a lot – because only a few people seem to get it (by 'a few', I mean the few thousand people who download that paper).
Having said that, I did not do a good job of pointing out the issues with Dirac's wave equation: in my brief history of quantum-mechanical ideas, I sort of dismiss it out of hand by referring to the discussion between Oppenheimer and Dirac at the first post-WW II Solvay Conference, during which they both agree it does not work but fail to provide a consistent alternative. However, I never elaborated on why the equation does not work, so let me do that now.
The reason that it does not work is, basically, the same as the reason why de Broglie’s wave-packet idea does not work: Dirac’s equation is based on the relativistic energy-momentum relation. Just look at Dirac’s 1933 Nobel Prize lecture, in which he gives us the basic equation he used to derive his (in)famous wave equation:
W²/c² – pᵣ² – m²c² = 0
Dirac does not bother to tell us, but this is, basically, just the relativistic energy-momentum relation: m₀²c⁴ = E² – p²c² (see, for example, Feynman-I-16, formula 16.13). Indeed: just divide this formula by c² and rearrange, and you get Dirac's equation. That is why Dirac's wave equation is essentially useless: it incorporates linear momentum only. As such, it repeats de Broglie's mistake, which is to interpret the 'de Broglie' wavelength as something linear. It is not: the frequencies and wavelengths are orbital frequencies and orbital circumferences. So anything you would want to do with energy equations based on that leads nowhere: one has to incorporate the reality of spin from the start. Spin-zero particles do not exist, and any modeling that starts off from spin-zero particles therefore fails: you cannot put spin back in through the back door once you are done with the basic model, so to speak. It just does not work. It is what gives us, for example, those nonsensical 720-degree symmetries, which prevent us from understanding what is actually happening.
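For readers who want to check the rearrangement themselves, it really is a one-liner:

```latex
% Start from the relativistic energy-momentum relation
% (Feynman, Vol. I, formula 16.13):
%   m_0^2 c^4 = E^2 - p^2 c^2
% Divide both sides by c^2 and move everything to one side:
\[
\frac{E^2}{c^2} - p^2 - m_0^2 c^2 = 0
\]
% Writing W for the energy E and p_r for the momentum, this is
% exactly the equation Dirac quotes in his 1933 Nobel lecture:
\[
\frac{W^2}{c^2} - p_r^2 - m^2 c^2 = 0
\]
```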
2. The second remark I should make is that I did not pay enough attention to the analysis of light-particles: photons and neutrinos and, possibly, their antiforce or antimatter counterparts. Huh? Their antiforce counterparts? Yes. Remember: energy is measured as a force over a distance, and a force acts on a charge. And Einstein's mass-energy equivalence relation tells us we should think of mass in terms of energy. Hence, if we know the force, we have got everything. Electrons and protons have a very different charge/mass ratio (q/m) and, therefore, involve two very different forces, even if we think of these two very different forces – which we could refer to as 'weak' and 'strong' respectively, but that would generate too much confusion because those terms have already been used – as acting on the same charge.
I refer to my paper(s) on this: the hypothesis is, basically, that we have two different forces, indeed! One that keeps, say, the electron together, which is nothing but the electromagnetic force, and one that is much stronger and seems to have a somewhat different structure. That is the force that keeps a muon-electron or a proton together. The structure of this much stronger force is the same in that it also acts on a charge, and we also have two field vectors: think of the magnetic field vector lagging the electric field vector by 90 degrees. However, it is also not the same, because the form factor differs: orbital oscillations can be either planar or spherical (2D or 3D).
I will not go into the details here – again, I would be rewriting the papers, which is not what I want to do – but the point is that antimatter is defined by an antiforce, which sees the magnetic field vector preceding the electric field vector by the same phase difference (90 degrees). It is just an application of Occam's Razor: the very same principle which made Dirac predict the existence of the positron. If the math shows there is some possibility of something else existing – a positively charged 'electron', at the time – then that possibility must be real, and we must find 'that thing'. The history of science has shown that scientists always did.
That is all clear enough (or not), but the point here is this: the lightlike particles (photons and neutrinos) that carry the electromagnetic and nuclear force respectively (I refer to that strong(er) force as 'nuclear' for rather obvious reasons) must have anti-counterparts: antiphotons and antineutrinos. I regret that I did not do more analysis on that. I am pretty sure, for example, that antiphotons must play a role in the creation of electron-positron pairs in experiments such as SLAC's E144 experiment (pair production out of light-on-light (photonic) interaction).
In short, I regret I did not have enough time and/or inspiration to analyze such things in much more detail than I did in my paper on matter-antimatter pair production/annihilation, especially because that is a paper that gets a lot of downloads too, so I feel I should rework it to present more material and better analysis. It is unfortunate that energy and time are limited in a man's life. The question is, effectively, very interesting, because the 'world view' that emerges from my papers is a rather dualistic one: we have the concept of charge on the one hand, and the concept of a field on the other. Matter-antimatter pair creation/annihilation from/into photons suggests that charge may, after all, be reducible to something even more fundamental. That is why I bought a rather difficult book on chiral field theory (Lähde and Meißner, Nuclear Lattice Effective Field Theory, 2019), but an analysis of that will probably be a retirement project or something.
3. The remark above directly relates to something else I think I did not do so well, and that is to explain Mach-Zehnder interference by a model in which we think of circularly polarized photons (or elliptically polarized, I should say, to be somewhat more general) as consisting of two linear components, which we may actually split from each other with a beam splitter. That takes the mystery out of Mach-Zehnder interference, but I acknowledge that my analysis in a paper like my 'K-12 level paper' on quantum behavior (which gives a one-page overview of the logic) may be too short to convince skeptical readers. The Annex to my rather philosophical paper on the difference between a theory, a calculation and an explanation is better, but even there I should have gone much further than I did.
4. I wrote quite a few papers that aim to develop a credible neutron and/or deuteron model. I think of the neutron in very much the same way as Ernest Rutherford, the intellectual giant who first hypothesized its existence based on cosmological research: a positively charged proton or other nuclear particle attached to some kind of deep electron. It is worth quoting his intuition on this, as expressed at the 1921 Solvay Conference in response to a question during the discussion of his paper on the possibility of nuclear synthesis in stars or nebulae. The question came from the French physicist Jean Baptiste Perrin who, independently of the American chemist William Draper Harkins, had proposed the possibility of hydrogen fusion shortly before (1919):
“We can, in fact, think of enormous energies being released from hydrogen nuclei merging to form helium—much larger energies than what can come from the Kelvin-Helmholtz mechanism. I have been thinking that the hydrogen in the nebulae might come from particles which we may refer to as ‘neutrons’: these would consist of a positive nucleus with an electron at an exceedingly small distance (“un noyau positif avec un électron à toute petite distance“). These would mediate the assembly of the nuclei of more massive elements. It is, otherwise, difficult to understand how the positively charged particles could come together against the repulsive force that pushes them apart—unless we would envisage they are driven by enormous velocities.”
We may add that, as if to make sure we get this right, Rutherford was immediately asked to elaborate his point by the Danish physicist Martin Knudsen: "What's the difference between a hydrogen atom and this neutron?" Rutherford simply answered: "In a neutron, the electron would be very much closer to the nucleus."
In light of the fact that it was only in 1932 that James Chadwick would experimentally prove the existence of the neutron, we should be deeply impressed by the foresight of Rutherford and the other pioneers here: the predictive power of their theories and ideas is truly amazing by any standard—including today's. It may have something to do with the fact that the distinction between theoretical and experimental physicists was not so clear then. The point is this: we fully subscribe to Rutherford's intuition that a neutron should, somehow, be a composite particle consisting of a proton and an electron, but we did not succeed in modeling that convincingly. We explored two ways to go about it:
One is to think of a free neutron which, we should remind ourselves, is a semi-stable particle only (its lifetime is a bit less than 15 minutes, which is an eternity in comparison to other non-stable particles). The challenge is then to build a credible n0 = p+ + e– model.
The other option is to try to build a neutron model based on its stability inside of the deuteron nucleus. Such a model should probably be based on Schrödinger's D+ = p+ + e– + p+ Platzwechsel model, which thinks of the electron as a sort of glue holding the two positive charges together.
The first model is based on the assumption that we have two forces, very much like the centripetal and centrifugal forces inside a double star. The difference – with a double-star model, that is – is that the charges have no rest mass. The nature of those two forces is, therefore, very different from (1) the centripetal gravitational force that keeps the two stars together and (2) the centrifugal force that results from their kinetic energy and/or orbital momentum. We assumed the attractive force between the p+ and e– is the usual electromagnetic force between two opposite charges (so that keeps them together). However, because the two charges clearly do not just go and sit on top of each other, we also assumed a 'nuclear' force acting at very close distances, and we tried to model this by introducing a Yukawa-like nuclear potential.
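For reference – and noting that the exact parametrization in the papers may differ, so take this as the textbook form only – a Yukawa-like potential looks as follows:

```latex
% Standard Yukawa-like potential: Coulomb-like at short range,
% exponentially screened beyond the range parameter a.
\[
U(r) = -\,\frac{g^2}{4\pi}\,\frac{e^{-r/a}}{r}
\]
% For r << a it behaves like an (attractive) Coulomb potential;
% for r >> a it dies off exponentially, which is what makes the
% force 'nuclear', i.e. short-range.
```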
We will discuss this in more detail when commenting on our papers in the next section, but the truth is that we feel we have not been able to develop a fully consistent model: it is not like our electron or proton model, which yields fully consistent calculations of the experimentally measured mass, radius, magnetic moment and other so-called intrinsic properties (e.g. the anomaly in the magnetic moment of the electron) of these two elementary particles. We could not do the same for the neutron. However, we hope some smart PhD student will try his or her hand at improving on our models and succeed where we did not.
As for the second model (the deuteron nucleus model), we did not work that out in much detail because it is, basically, an even more complicated problem than the classical three-body problem which, as you know, has no analytical solution. So we inevitably have to lump two bodies together – the two protons might make for a nice massive pair, for example – but then we lose the idea of the neutron. In other words, it may give you a deuteron model, but nothing much in terms of a neutron model.
5. Those were the main frustrations, I think. We will probably point out others too in the more detailed paper-by-paper comments in the next section, but I would like to make one or two more remarks regarding style and conversation culture in physics now.
The main remark is this: I did some research in economics (various sub-disciplines ranging from micro-economics to the history of economic thought) and I found the conversational style of fellow researchers in those fields much more congenial and friendly than in physics. It may have something to do with the fact that I did that research while I was young (so that was almost 30 years ago and people were, quite simply, friendlier then, perhaps), but I also think there might be a different reason. I was (and still am) interested in quantum physics because I wanted to know: this search for truth in modeling (or whatever you want to call it) is rooted in a deep need or desire to understand reality. Personally, I think the Uncertainty Principle got elevated to some kind of metaphysical principle because some of the scientists wanted to reserve a space for God there. I am not religious at all, and if God exists, I am sure he would not be hiding there but inside our minds.
In any case, my point here is this: I think there is an emotional or religious aspect to discussions on fundamentals that is absent in the social sciences, and which, in most cases, quickly turns these discussions personal or even aggressive. As an example, I would refer to all these 'relativity doubters' who pop up in the more popular or general ResearchGate discussion threads on the 'consistency' of quantum physics, or on the pros and cons of modern cosmological theories. I vented my frustration about that on my blog a few times (here is an example of my issues with SRT/GRT doubters), and then I just stop arguing or contributing to these threads. I do find it sad, because a lot of people like me probably do the same: they stop engaging, and that probably makes the ignorance even worse, and then there is no progress at all, of course!
However, having said this, I also note that unfriendliness is inversely proportional to expertise, knowledge and experience. In other words: never be put off by anyone. I did go through the trouble of contacting the PRad Research Lab and people like Dr. Randolf Pohl (Max Planck Institute), and I got curt but useful answers from them: answers that challenged me, but those challenges have helped me to think through my models and have contributed to solidifying my initial intuitions, which I would sum up as follows: there is a logical interpretation of everything. I refer to it as a realist interpretation of quantum physics and, as far as I am concerned, it is pretty much the end of physics as a science. We do know it all now. There is no God throwing dice or tossing coins. Statistical determinism, yes, but it is all rooted in formulas and closed mathematical models representing real stuff in three-dimensional space and one-dimensional time.
Note: I briefly tried to hyperlink the titles (of the papers) to the papers themselves, but the blog editor (WordPress) returned an error. I guess this blog post is quite long and has too many links already. In any case, the titles do refer to the papers on my RG site, and the reader can consult them there.
No comments. We think this paper gives a rather nice overview of what made sense to us. We also like the two annexes because they talk about quantum-mechanical operators and show why and how the argument of the wavefunction incorporates (special) relativity (SRT/GRT naysayers should definitely read this).
There is a remnant of one of the things we tried that did not yield much: a series expansion of kinetic and/or potential energy from Einstein's energy-mass equivalence relation. That resulted from a discussion with researchers trying to model other deep electron orbitals (other than the 'deep' electron in a neutron or a deuteron nucleus): they were thinking of potentials in terms of first-, second-, third-, etc.-order terms, so as to simplify things. I went along with it for a while because I thought it might yield something. But it did not. Hence, I would leave that out now, because the reader probably wonders what it is that I am trying to do – and rightly so!
This is one in a series of what I jokingly thought of as a better or more concise version of Feynman's Lectures on Physics. I wrote six of these. Feynman once selected ten 'easy pieces' and ten 'not-so-easy pieces' from his own lectures, if I am not mistaken – but these should qualify as relatively 'easy' pieces (in comparison with the other papers, that is).
It downplays the concept of the gyromagnetic ratio in quantum mechanics somewhat by focusing only on the very different charge/mass ratios (q/m) of the electron and the proton. For the rest, there is not much to say about it: if you are a student in physics, this is math you surely need to master!
This paper is one of those attempts to be as short as I can be. I guess I wanted it to be some kind of memorandum or something. It still grew into five pages, and it does not add anything to the longer papers. Because it is short and has no real purpose besides providing some summary of everything, I now think its value is rather limited. I should probably take it down.
This is one of the papers on a neutron or deuteron model. I think the approach is not bad. The use of orbital energy equations to try to model the orbital trajectories of (zero rest-mass) charges instead of the usual massive objects in gravitational models is promising. However, it is difficult to define what the equivalent of the center of mass would be in such models. One might think it should be the center of ‘energy’, but the energy concepts are dynamic (potential and kinetic energy vary all the time). Hence, it is difficult to precisely define the reference point for the velocity vector(s) and all that. We refer to our general remarks for what we think these papers might have yielded, and what not. For the rest, we let the reader go through them and, hopefully, try to do better.
We like this paper very much because it shows why quaternion math should be used far more often than it actually is in physics: it captures the geometry of the proton and neutron models so nicely. We will probably want to delve into this more as yet another retirement project. We also like this paper because it is short and crisp.
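For readers who have never touched quaternions, a minimal sketch (my own illustration, not code from the paper) shows why they encode 3D rotations so compactly: one Hamilton product rule, and the sandwich q v q* rotates any vector.

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions, each given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    )

def rotate(v, axis, angle):
    """Rotate 3-vector v by `angle` radians about the unit vector `axis`,
    using the quaternion sandwich q v q*."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), s * axis[0], s * axis[1], s * axis[2])
    q_conj = (q[0], -q[1], -q[2], -q[3])
    # embed v as a 'pure' quaternion (zero scalar part) and sandwich it
    p = qmul(qmul(q, (0.0, v[0], v[1], v[2])), q_conj)
    return p[1:]

# A quarter turn about the z-axis sends the x-axis onto the y-axis:
x_rot = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

Note the half-angle in the definition of q: a vector comes back to itself only after the quaternion has gone through 720 degrees, which is exactly the spin-related symmetry discussed under remark 1 above.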
Probably not our best paper, and one that should or could be merged with others covering the same topics. However, the philosophical reflections in this paper – on the arrow of time and on what is absolute and relative in physics – are nice and can be readily understood. They would probably come first if we ever wanted to write a textbook or something. We also recommend the up-front dimensional analysis of the basic equations of physics: modern-day papers usually do not bother to check or comment on dimensions.
This is one of those papers which shows the shortcomings of our approach to modeling anything ‘nuclear’. The idea of two or three charges holding each other together while also pushing each other apart – with two opposite forces acting, just like the centripetal and centrifugal forces in any gravitational model – is nice, and we think the substitution of mass by some combination of charge and mass in the orbital energy equation is brilliant (sorry if this sounds egotistical again) but, as mentioned above, it is difficult to define what the equivalent of the center of mass would be in such models.
Also, because of the distance functions involved (the ‘nuclear’ force in such a model varies with the square of the distance and is, therefore, non-linear), one does not get any definite solution to the system: we derived a lower limit for a ‘range’ factor for the nuclear force, for example (and its magnitude corresponds more or less to what mainstream physicists – rather randomly – use when using Yukawa-like potentials).
It would be an interesting area for modeling if and when I would have more time and energy for these things, so I do hope others pick up on it and, hopefully, do better.
Same remarks as above: I like this paper because it is short. I also allow myself to blast away at quark-gluon theories (‘smoking-gun physics’, as I call it). The paper also explains some useful derivatives of the wavefunction, which show why and how our geometric interpretation of the wavefunction makes sense.
We also quickly demonstrate the limitations of the scattering-matrix approach to modeling unstable particles and particle-system processes, even though we do love that approach: the problem is that one loses track of directions and, therefore, cannot explain even very simple things such as the scattering angles in Compton scattering. Here too, we hope some clever people might ‘augment’ the approach.
We like this paper. It deserves a lot more downloads than it gets, we think. It is the proper alternative to all kinds of new ‘conservation laws’ – and the associated new ‘strange’ properties of particles – that were invented to make sense of the growing ‘particle zoo’. The catalogue of the Particle Data Group should be rewritten, we feel. 😊
Of course, any physicist should be interested in cosmology – if only because any Big Bang theory uses pair creation/annihilation theories rather extensively. As mentioned in our general remarks, we still struggle with these theories and, yes, they are definitely on our list as a retirement project.
The main value of the paper is that it offers a consistent explanation of ‘dark matter’ in terms of antimatter, and that it does not present the apparently accelerating expansion of the Universe as something that is necessarily incongruent: there may be other Universes around, beyond what we can observe. The paper also offers some other ‘common-sense’ explanations, none of which involves serious doubts about standard theory (we do not doubt anything like SRT and/or GRT). We, therefore, think this paper shows that I am much more ‘mainstream’ and far less ‘crackpot’ than my ‘enemies’ pretend I am. 😊
This is definitely my worst paper in terms of structure. It has no flow and jumps from this to that. Even when I read it myself, I wonder what it is trying to say. I must have been in a rather weird mood when I wrote it; it got too long, and I probably suddenly had enough of it. The conclusions do sound like I had gone mad: had my kids or someone else read it before I published it, they might have prevented me from doing so. In any case, it is there now. I will probably take it down one day.
Of course, I note the month of writing: my specialist had just confirmed that my prostate cancer was very aggressive, and that I had to do the surgery sooner rather than later if I wanted to avoid what had killed my brother just months before: metastasis to the kidneys and other organs. And my long-term girlfriend had just broken up with me – again. And I had just come back from yet another terrible consultancy job in Afghanistan. Looking into my diary of those days, I had probably relapsed into a bit of drinking, and too many parties with the ghosts of Oppenheimer and Ehrenfest. In short, I should take that paper off the web, but I will leave it there just for the record.
This paper is better than the one mentioned above but – at the same time – suffers from the same defects: no clear flow in the argument, ‘jumpy’, and lots of ‘deus ex machina’-like additions and asides. Its only advantage is that it does offer a rather clear explanation of what works and what probably cannot work in Wheeler’s geometrodynamics programme: mass-without-mass models are fine. The way to go: forces act on charges, energy is force over a distance, and mass relates to energy through Einstein’s mass-energy equivalence relation. No problem. But the concept of charge is difficult to reduce further. Chiral field theories may yet prove to do that, but I am rather skeptical. I bought the most recent book(s) on the topic, but I need to find the time and energy to work my way through them.
This is a much more focused paper. However, I cannot believe I inserted remarks on the ‘elasticity’ of spacetime there: that smacks of what physicist and Nobel Prize winner Robert B. Laughlin wrote:
“It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed [..] The word ‘ether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. . . . Relativity actually says nothing about the existence or nonexistence of matter pervading the universe, only that any such matter must have relativistic symmetry. [..] It turns out that such matter exists. About the time relativity was becoming accepted, studies of radioactivity began showing that the empty vacuum of space had spectroscopic structure similar to that of ordinary quantum solids and fluids. Subsequent studies with large particle accelerators have now led us to understand that space is more like a piece of window glass than ideal Newtonian emptiness. It is filled with ‘stuff’ that is normally transparent but can be made visible by hitting it sufficiently hard to knock out a part. The modern concept of the vacuum of space, confirmed every day by experiment, is a relativistic ether. But we do not call it this because it is taboo.”
I was intrigued by that, because I was still struggling somewhat with the meaning of various ratios in my ‘oscillator’ model of elementary particles, but I now think any reference to an ‘aether-like’ quality of spacetime is not productive. Space and time are, effectively, categories of our mind – as Immanuel Kant pointed out about 240 years ago (it is interesting that the Wikipedia article on Einstein notes that Albert Einstein had digested all of Kant’s philosophy by the age of twelve) – and space and time are relativistically related (there is no ‘absolute’ time that ‘pervades’ all of 3D space) – but there is no reason whatsoever to think of relativistic spacetime as being aether-like. It is just the vacuum in which Maxwell’s electromagnetic waves propagate. There is nothing more to it.
See the general remarks on my attempts to develop a decent model of the neutron and the deuteron nucleus. They were triggered by interesting discussions with a Canadian astrophysicist (Andrew Meulenberg), a retired American SLAC researcher (Jerry Va’vra) and a French ‘cold fusion’ researcher (Jean-Luc Paillet). I was originally not very interested, because these efforts are aimed at proving that a smaller version of the hydrogen atom (usually referred to as the ‘hydrino’) must exist, and that this ‘hydrino’ would offer endless possibilities in terms of ‘new energy’ production. The whole enterprise is driven by one of the many crooks who give the field of ‘cold fusion’ a bad name, but who has nevertheless managed to attract lots of private funding: Randell L. Mills, the promotor of the Brilliant Light Power company in New Jersey. The above-mentioned researchers are serious. I do not think as highly of Randell Mills, although I note he impresses people with his books on ‘classical quantum physics’. I note a lot of ‘hocus-pocus’ in those books.
This is one of those ‘Feynman-like’ lectures I wrote. I think of all of them as rather nice. I do not go into speculative things, and I take the trouble of writing everything out, so the reader does not have to do all that much thinking and just can ‘digest’ everything rather easily.
This is definitely one of the papers I wanted to further develop if ever I would have more time and energy. See my general remarks: SLAC’s E144 experiment (and similar experiments) are very intriguing because they do seem to indicate the quintessential concept of charge may be further reducible to ‘field-like’ oscillations. I must thank André Michaud here for kindly pointing that out to me.
I think of this paper as highly relevant and practical. It points out why the common view that Schrödinger’s wave equation would not be relativistically correct is erroneous: it is based on an erroneous simplification in the ‘heuristic’ derivation of this wave equation in the context of, yes, crystal lattices. Definitely one of the better papers when I look back at it now – just like the other ‘lecture-like’ papers. The history of these ‘lecture-like’ papers is simple: I realized I needed to write more ‘K-12 level’ papers (although they are obviously not really K-12 level) so as to be able to communicate better on the ‘basics’ of my realist interpretation of quantum physics and the ‘essentials’ of my elementary particle models.
The paper usefully distinguishes concepts that are often used interchangeably, but must be distinguished clearly: waves, fields, oscillations, amplitudes and signals.
This is an oft-downloaded paper, and the number of downloads reflects its value: it does offer a rather clear overview of all of my work on ‘interpreting’ the wavefunction, and shows its geometrical meaning. Hence, I will not comment on it: it speaks for itself.
I like this paper. I wanted it to present a sort of ‘short-cut’ for people who want to learn about physics fast and, therefore, will want to avoid all of the mistakes I made when trying to understand it.
This paper talks about where Feynman went wrong in his Lectures. Parvus error in principio magnus est in fine (as Aquinas and, before him, Aristotle said so eloquently), and the ‘small mistake at the beginning’ is surely not a ‘happy’ one! I consider the discovery of this ‘mistake’ to be my greatest personal ‘discovery’ in terms of making sense of it all, and so I do recommend any interested reader to go through the paper.
I appreciate this paper in the same vein: quite straightforward and to the point. It explains the basic ‘mysteries’ which are usually presented in the first course on quantum mechanics at any university in terms that are readily understandable, and shows these are not ‘mysteries’ after all!
Of all the papers, this is definitely the one I would recommend reading if you have time for only one. See my general remarks on why mainstream QED/QFT does not work. The only thing I should have added are remarks on Dirac’s equation (this paper has an Annex on wave equations, so I should have talked about Dirac’s too). But I did do that in the introductory section with general remarks on all of my papers above.
I like this paper too. It is not as technical as the others, so the ‘lay’ reader may want to go through this one. It traces a rather ‘bad’ history of ideas that led nowhere – but that is useful for seeing what should work, and does work, in the field of quantum physics!
I like this one too. It should probably be read in combination with the above-mentioned paper on the bad ideas in the history of quantum physics.
It is fifty (50!) pages, though. But it has some really interesting things, such as a much more consistent presentation of why Mach-Zehnder interference (‘one-photon’ diffraction, or the so-called ‘interference of a photon with itself’) is not as mysterious as it appears to be. It surely should not be explained in terms of nonsensical concepts such as non-locality, entanglement and what have you in modern-day gibberish.
This was my very first ‘entry’ on ResearchGate. It is based on the 60-odd papers and the hundreds of blog posts I had published in the years before, on sites such as viXra.org that are not considered mainstream and are, therefore, shunned by most. In fact, in the very beginning, I copied my papers to three sites: ResearchGate, viXra.org and academia.edu. I stopped doing that when things picked up on RG. I do think of it as the most serious site of the three. 😊
Well… That is it! If you got here, congratulations on your perseverance!
Jean Louis Van Belle, 6 December 2021
 I downloaded the image from a website selling Christmas presents a long time ago, and I have not been able to trace where I got it. If someone recognizes it as their picture, please let us know and we will acknowledge the source or remove it.
 Particles are small – very small – but not infinitesimally small: they have a non-zero spatial dimension, and structure! Only light-like particles – photons and neutrinos – are truly pointlike, but even they do have a structure as they propagate in relativistic spacetime.
 I got the label of ‘crackpot theorist’, or the reproach of ‘not understanding the basics’, a bit too often – and too often from people who do have better academic credentials in the field but a far less impressive publication record, or credentials in an unrelated field.
 See: John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169–172, Cambridge University Press, 1987 (quoted from Wikipedia). J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics and which he, therefore, did not receive (Nobel Prizes are not awarded posthumously). He was just 62 years old then.
 We think the latest revision of the SI units (2019) consecrates that: that revision completes physics. It fixes the values of a precise number of constants of Nature, and simplifies the system such that it is complete without redundancy. It, therefore, respects Occam’s Razor: the number of degrees of freedom in the description matches what we find in Nature. Besides prof. dr. Pohl’s contributions to solving the proton radius puzzle, his role in the relevant committees on this revision probably also makes him one of the truly great scientists of our era.
 We contacted both. Ms. Hossenfelder never reacted to our emails. Mr. Carroll quoted some lines from John Baez’ ‘crackpot index’. I had heard such jokes before so I did not find them so amusing anymore.
 Sometimes I find an error even in a formula. That is annoying, but then it is also good: it makes readers double-check and look at the material more carefully. It makes them think for themselves, which is what they should do.
 Dirac basically expands this basic energy-momentum relation into a series, but the mathematical conditions under which such an expansion is valid are, apparently, not met. The first-, second-, third-, fourth-, etc.-order terms do not converge, and one gets those ‘infinities’ which blow it all up – which is why Dirac, nearing the end of his life, got so critical of and annoyed by the very theory his wave equation led to: quantum field theory. Reading between the lines, a number of Nobel Prize winners in physics do seem to reject some of the theories for which they got the award. W.E. Lamb is one of them: he wrote a highly critical paper on the concept of a photon at a rather advanced age, despite the fact that his contributions to this field of study had yielded him a Nobel Prize! Richard Feynman is another example: he got a Nobel Prize for a number of modern contributions, but his analysis of ‘properties’ such as ‘strangeness’ in his 1963 Lectures on Physics can be read as highly critical of the ‘ontologizing’ of concepts such as quarks and gluons, which he seems to think of as mathematical concepts only. I talk a bit about that in my paper on the alternative to modern-day QED and QFT (a new S-matrix programme), so I will not say more about this here.
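The divergence issue referred to here can be made explicit (our reconstruction, for the reader’s convenience, not Dirac’s own notation): the relativistic energy-momentum relation expands as a binomial series,

```latex
E = \sqrt{m^2c^4 + p^2c^2}
  = mc^2\left(1 + \frac{p^2}{2m^2c^2} - \frac{p^4}{8m^4c^4} + \frac{p^6}{16m^6c^6} - \cdots\right)
```

and this series only converges for p < mc – which is the kind of mathematical validity condition the text alludes to.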
 I think I do a much better job at explaining interference and/or diffraction of electrons in the mentioned papers, although the reader may also be hungry for more detail there.
 The reader should note that, although the mass of an electron is only about 1/2000 of that of a proton, the radius of a (free) electron is actually much larger than the radius of a proton. That is a strange thing but it is what it is: a proton is very massive because of that very strong (nuclear) force inside. Hence, when trying to visualize these n = p + e models, one should think of something like an electron cloud with a massive positive charge whirling around in it – rather than the other way around.
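The size reversal mentioned in this note is easy to check with a back-of-the-envelope calculation (a sketch using the reduced Compton radius a = ħ/mc as the relevant ‘size’ – the radius that comes out of the ring-current picture; the proton’s Compton radius is not its measured charge radius, but the ordering is the same, and the CODATA mass values below are the only inputs):

```python
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 299792458.0        # speed of light, m/s (exact)
m_e  = 9.1093837015e-31   # electron mass, kg (CODATA)
m_p  = 1.67262192369e-27  # proton mass, kg (CODATA)

a_e = hbar / (m_e * c)    # electron Compton radius, ~386 fm
a_p = hbar / (m_p * c)    # proton Compton radius, ~0.21 fm

print(f"electron: {a_e*1e15:.1f} fm, proton: {a_p*1e15:.3f} fm, ratio: {a_e/a_p:.0f}")
```

The ratio of the two radii is just the inverse mass ratio m_p/m_e ≈ 1836: the lighter particle is the (much) bigger one.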
 The interested reader can google what this is about.
 It is a weird coincidence of history that the proceedings of the Solvay Conferences are publicly available in French, even if many papers must have been written in English. The young Louis de Broglie was one of those young secretaries tasked with translations in what was then a very prominent scientific language: French. It got him hooked, obviously.
 When reading modern-day articles in journals, one gets the impression a lot of people theorize an awful lot about very little empirical or experimental data.
 The idea is that the pointlike charge itself has no inertial mass. It, therefore, goes round and round at the speed of light. However, while doing so, it acquires an effective mass, which is (usually) half of the total mass of the particle as a whole. This ½ factor confuses many, but should not do so. It comes directly out of the energy equipartition principle, and can also be derived from rather straightforward relativistically correct oscillator energy calculations (see p. 9 of our paper on the meaning of the wavefunction).
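For the record, the logic sketched in this note can be condensed into a few lines (our restatement of the ring-current reasoning, not a quote from any particular paper): the Planck-Einstein relation fixes the orbital frequency, and the lightspeed condition then fixes the radius,

```latex
E = \hbar\omega = mc^2 \;\Rightarrow\; \omega = \frac{mc^2}{\hbar}
\qquad\text{and}\qquad
c = a\,\omega \;\Rightarrow\; a = \frac{c}{\omega} = \frac{\hbar}{mc}\ \text{(reduced Compton radius)}
```

Energy equipartition then splits E equally between the ‘kinetic’ and ‘potential’ energy of the oscillation, so each amounts to E/2 – which is where the effective mass m/2 comes from.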
 We get a value that is twice as large as the usual 2.8 fm range. By the way, we think of the latter value as ‘rather random’ because it is just the deuteron radius. Indeed, if, as a nuclear scientist, you do not have any idea about what range to use for a nuclear scale factor (which is pretty much the case), then that is surely a number that comes in handy, because it is empirical rather than theoretical. We honestly think there is nothing more to it, but academics will probably cry foul and say that their models are much more sophisticated than what I suggest here. I will be frank: can you show me why and how – not approximately, but exactly?
 If you click on the link, you will see my blog post on it, which also treats the Higgs particle – a ‘scalar’ particle, really? – as a figment of the mind. My criticism of these theories – which can never really be proven – goes back years, but it has not softened. On the contrary.
 This is also a paper with a fair number of typos. On page 36, I talk of the prediction of the proton, for example. Of course, I meant to say: the prediction of the existence of the positron. Such typos are bad. I am ashamed.
 Some of these ‘sidekicks’ do get more attention in later papers (e.g. this paper has the early thinking on using orbital energy equations to model orbitals of pointlike charges instead of masses), but they come across as rather chaotic and not well thought-through in this paper, because they were chaotic and not well thought-through at that point in time.
The good thing is: I expanded my paper which deals with more advanced questions on this realist interpretation of QM (based on the mass-without-mass models of elementary particles that I have been pursuing). I think I see everything clearly now: Maxwell’s equations only make sense as soon as the concepts of charge densities (expressed in coulomb per volume or area unit: C/m3 or C/m2) and currents (expressed in C/s) start making sense, which is only above the threshold of Planck’s quantum of action and within the quantization limits set by the Planck-Einstein relation. So, yes, we can, finally, confidently write this:
Quantum Mechanics = All of Physics = Maxwell’s equations + Planck-Einstein relation
To get my frustration out, I copy the exchange below – as it might be informative when you are confronted with weirdos on some scientific forum too! It starts with a rather nonsensical remark on the reality of infinities, and an equally nonsensical question on how we get quantization from classical equations (Maxwell’s equations and then the Gauss and Stokes theorems), to which the answer has to be: we do not, of course! For that, you need to combine them with the Planck-Einstein relation!
Start of the conversation: Jean Louis Van Belle, I found Maxwell quite consistent with, for instance Stokes aether model. Can you explain how he ‘threw it out‘. It was a firm paradigm until Einstein removed it’s power to ‘change‘ light speed, yet said “space without aether is unthinkable.” (Leiden ’21). He then mostly re-instated it in his ’52 paper correcting 1905 interpretations in bounded ‘spaces in motion within spaces) completed in the DFM. ‘QM’ then emerges.
My answer: Dear Peter – As you seem to believe zero-dimensional objects can have properties and, therefore, exist, and also seem to believe infinity is also real (not just a mathematical idealization), then we’re finished talking, because – for example – no sensible interpretation of the Planck-Einstein relation is possible in such circumstances. Also, all of physics revolves around conjugate variables, and these combine in products or product sums that have very small but finite values (think of typical canonic commutator relations, for example): products of infinity and zero are undefined – in mathematics too, by the way! I attach a ‘typically Feynman’ explanation of one of these commutator relations, which talks about the topic rather well. I could also refer to Dirac’s definition of the Dirac function (real probability functions do not collapse into an infinite probability density), or his comments on the infinities appearing in the perturbation theory he himself had developed, and which he then distanced himself from exactly because it generated infinities, which could not be ‘real’ according to him. I’ve got the feeling you’re stuck in 19th century classical physics. Perhaps you missed one or two other points from Einstein as well (apart from the references you give). To relate this discussion to the original question of this thread, I’d say: physicists who mistake mathematical idealizations for reality do obviously not understand quantum mechanics. Cheers – JL
PS: We may, of course, in our private lives believe that God ‘exists’ and that he is infinite and whatever, but that’s personal conviction or opinion: it is not science, nothing empirical that has been verified and can be verified again at any time. Oh – and to answer your specific question on Maxwell’s equations and vector algebra (Gauss and Stokes theorem), they do not incorporate the Planck-Einstein relation. That’s all. Planck-Einstein (quantization of reality) + Maxwell (classical EM) = quantum physics.
Immediate reply: Jean Louis Van Belle , I don’t invoke either zero dimensional objects, infinity or God! Neither the Planck length or Wolframs brilliant 10-93 is ‘zero’. Fermion pair scale is the smallest ‘Condensed Matter‘ but I suggest we must think beyond that to the condensate & ‘vacuum energy’ scales to advance understanding. More 22nd than 19th century! Einstein is easy to ‘cherry pick’ but his search for SR’s ‘physical’ state bore fruit in 1952!
[This Peter actually did refer to infinities and zeroes in math as being more than mathematical idealizations, but then edited out these specific stupidities.]
My answer: Dear Peter – I really cannot understand why you want to disprove SRT. SRT (or, at least, the absoluteness of lightspeed) comes out of Maxwell’s equations. Einstein just provided a rather heuristic argument to ‘prove’ it. Maxwell’s equations are the more ‘real thing’ – so to speak. And then GRT just comes from combining SRT and Mach’s principle. What problem are you trying to solve? I understand that, somehow, QM does NOT come across as ‘consistent’ to you (I do not suffer from that: all equations look good to me – I just have my own ‘interpretation’ of them, but I do not question their validity). You seem to suspect something is wrong with quantum physics somewhere, but I don’t see exactly where.
Also, can you explain in a few words what you find brilliant about Wolfram’s number? I find the f/m = c2/h = 1.35639248965213E50 number brilliant, because it gives us a frequency per unit mass which is valid for all kinds of mass (electron, proton, or whatever combination of charged and neutral matter you may think of), but so that comes out of the E = mc2 and E = hf, and so it is not some new ‘God-given’ number or something ‘very special’: it is just a straight combination of two fundamental constants of Nature that we already know. I also find the fine-structure constant (and the electric/magnetic constants) ‘brilliant numbers’ but, again, I don’t think they are something mysterious. So what is Wolfram’s number about? What kind of ratio or combination of functions or unexplained explanation or new undiscovered simplification of existing mainstream explanations does it bring? Is it a new proportionality constant – some elasticity of spacetime, perhaps? A combination of Planck-scale units? Does it connect g and the electric constant? An update of (the inverse of) Eddington’s estimate of the number of protons in the Universe based on latest measurements of the cosmological constant? Boltzmann’s number and Avogadro’s constant (or, in light of the negative exponent, their inverse) – through the golden ratio or a whole new ‘holographic’ theory? New numbers are usually easy to explain in terms of existing theory – or in terms of what they propose to change to existing theory, no?
Perhaps an easy start is to give us a physical dimension for Wolfram’s number. My 1.35639248965213E50 number is the (exact) number of oscillations per kg, for example – not oscillations of ‘aether’ or something, but of charge in motion. All numbers in physics have a physical dimension, except if they are scaling or coupling constants (such as the fine-structure constant): even if it is only a scalar, it is a plain number describing x units of something, or a density (then it is x per m3 or m2, per J, per kg, per coulomb, per ampere, etcetera – whatever SI unit or combination of SI units you want to choose).
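The f/m number quoted above is easy to reproduce (a quick check of the arithmetic, using the exact SI values of c and h – nothing more):

```python
c = 299792458        # m/s, exact by SI definition
h = 6.62607015e-34   # J*s, exact by SI definition (2019 revision)

# From E = m*c^2 and E = h*f: the frequency per unit mass is f/m = c^2/h.
f_per_kg = c**2 / h
print(f"{f_per_kg:.14e}")  # ~1.35639248965e+50 oscillations per second, per kg
```

Note the tie-in with the 2019 SI revision mentioned in an earlier footnote: because both c and h are now exact by definition, this ratio is exact too.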
On a very different note, I think that invoking some statement or a late paper of Einstein in an attempt to add ‘authority’ to some kind of disproof of SRT invokes the wrong kind of authority. 🙂 If you would say Heisenberg or Bohr or Dirac or Feynman or Oppenheimer started doubting SRT near the end of their lives, I’d look up and say: what? Now, no. Einstein had the intellectual honesty to speak up, and speak up rather loudly (cf. him persuading the US President to build the bomb).
Post scriptum (26 April): I added another five-pager on fundamental concepts on ResearchGate, which may or may not help to truly understand what might be the case (I am paraphrasing Wittgenstein’s definition of reality here). It is on potentials, and it explains why thinking in terms of neat 1/r or 1/r2 functions is not all that helpful: reality is fuzzier than that. Even a simple electrostatic potential may not be very simple. The fuzzy concept of near and far fields remains useful.
I am actually very happy with the paper, because it sort of ‘completes’ my thinking on elementary particles in terms of ring currents. It made me feel like it is the first time I truly understand the complementarity/uncertainty principle – and that I invoke it to make an argument.
I just wrapped up a discussion with some mainstream physicists, producing what I think of as a final paper on the nuclear force. I was struggling with the apparent non-conservative nature of the nuclear potential, but now I have the solution. It is just like an electric dipole field: not spherically symmetric. Nice and elegant.
I can’t help copying the last exchange with one of the researchers. He works at SLAC and seems to believe hydrinos might really exist. It is funny, and then it is not.
Me: “Dear X – That is why I am an amateur physicist and don’t care about publication. I do not believe in quarks and gluons. 😊 Do not worry: it does not prevent me from being happy. JL”
X: “Dear Jean Louis – The whole physics establishment believes that the neutron is composed of three quarks, gluons and a sea of quark-antiquark pairs. How does that fit into your picture? Best regards, X”
Me: “I see the neutron as a tight system between positive and negative electric charge – combining electromagnetic and nuclear force. The ‘proton + electron’ idea is vague. The idea of an elementary particle is confusing in discussions and must be defined clearly: stable, not-reducible, etcetera. Neutrons decay (outside of the nucleus), so they are reducible. I do not agree with Heisenberg on many fronts (especially not his ‘turnaround’ on the essence of the Uncertainty Principle) so I don’t care about who said what – except Schroedinger, who fell out with both Dirac and Heisenberg, I feel. His reason to not show up at the Nobel Prize occasion in 1933 (where Heisenberg received the prize of the year before, and Dirac/Schroedinger the prize of the year itself) was not only practical, I think – but that’s Hineininterpretierung which doesn’t matter in questions like this. JL”
X: “Dear Jean Louis – I want to make doubly sure. Do I understand you correctly that you are saying that the neutron is really a tight system of a proton and an electron? If that is so, it is interesting that Heisenberg, inventor of the uncertainty principle, believed the same thing until 1935 (I have it from Pais’ book). Then the idea died because Pauli’s argument won: that the neutron spin 1/2 follows the Fermi-Dirac statistics, and this decided that the neutron is indeed an elementary particle. This would be a very hard sell if you now, after so many years, agree with Heisenberg. By the way, I say in my Phys. Lett. B paper, which uses a k1/r + k2/r2 potential, that the radius of the small hydrogen is about 5.671 Fermi. But this is very sensitive to what potential one is using. Best regards, X.”
The notes must be somewhere in some unexplored archive. If there are Holy Grails to be found in the history of physics, then these notes are surely one of them. There is a book about a mysterious woman, who might have inspired Schrödinger, but I have not read it, yet: it is on my to-read list. I will prioritize it (read: order it right now). 🙂
Oh – as for the math and physics of the wave equation, you should also check the Annex to the paper: I think the nuclear oscillation can only be captured by a wave equation when using quaternion math (an extension to complex math).
A sympathetic researcher, Steve Langford, sent me some of his papers, as well as a link to his ResearchGate site, where you will find more. Optical mineralogy is his field. Fascinating work – or, at the very least, a rather refreshing view on the nitty-gritty of actually measuring stuff by gathering huge amounts of data and then analyzing it in meaningful ways. I have already learnt a lot of new things (e.g. kriging, also known as Gaussian process regression, and novel ways of applying GLM modelling).
Dr. Langford wrote me because he wants to connect his work to more theory – quantum math, and all that. That is not so easy. He finds interesting relations between temperature and refractive indices (RIs), as measured from a single rock sample in Hawaii. The equipment he used is shown below. I should buy that stuff too! I find it amazing one can measure light spectra with nanometer precision with these tools (the dial works with 0.1 nm increments, to be precise). He knows all about Bragg’s Law and crystal structures, toys with statistical and graphical software tools such as JMP and Surfer, and talks about equipping K-12 level students with dirt-cheap modular computer-connected optical devices and open software tools to automate the data gathering process. In short, I am slightly jealous of the practical value of his work, and the peace of mind he must have to do all of this! At the very least, he can say he actually did something in his life! 🙂
Having showered all that praise, I must admit I have no clue about how to connect all of this to quantum effects. All I know about temperature – about what it actually is (vibrational motion of molecules and atoms within molecules, with multiple degrees of freedom (n > 3) in that motion) – is based on Feynman’s Lectures (Chapters 40 to 45 of the first Volume). Would all that linear, orbital and vibrational motion generate discernible shifts of spectral lines? Moreover, would it do so in the visible light spectrum (X-rays are usually used – they increase measurement precision – but such equipment is more expensive)? I have no idea.
Or… Well, of course I do have some intuitions. Shifts in frequency spectra are well explained by combining statistics and the Planck-Einstein relation. But can we see quantum physics in the data? In the spectral lines themselves? No. Not really. And so that’s what’s got me hooked. Explaining a general shift of the frequency spectrum and discerning quantum effects in RIs in data sets (analyzing shifts of spectral lines) are two very different things. So how could we go about that?
Energy is surely quantized, and any small difference in energy should translate into small shifts of the frequencies of the spectral lines themselves (as opposed to the general shift of the spectrum as such, which, as mentioned above, is well explained by quantum physics), respecting the Planck-Einstein relation for photons (E = hf). I do not know if anyone has tried to come up with some kind of quantum-mechanical definition of the concept of entropy (I have not googled anything on that, so I expect there are valuable resources out there). Boltzmann’s constant was re-defined at the occasion of the 2019 revision of the SI system of units, and a careful examination of the rationale behind that re-definition should yield deeper insights in this regard, especially because I think that revision firmly anchors what I refer to as a realist interpretation of quantum physics. Thermal radiation at everyday temperatures peaks in the infrared (micrometer rather than nanometer wavelengths), so a 0.1 nm resolution should be more than enough to capture a shift in spectral lines – if it is there, that is.
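As a quick numeric sanity check on the wavelengths involved (a sketch using Wien's displacement law, λ_peak = b/T, which is not used in the text above), the thermal peak at everyday temperatures sits in the infrared:

```python
# Wien's displacement law: lambda_peak = b / T, with b the Wien displacement
# constant (CODATA value). A rough check of where thermal radiation peaks.
b = 2.897771955e-3  # m·K

def wien_peak_nm(T_kelvin):
    """Peak wavelength of blackbody radiation, in nanometers."""
    return b / T_kelvin * 1e9

print(round(wien_peak_nm(300)))   # room temperature: ~9659 nm (deep infrared)
print(round(wien_peak_nm(5800)))  # solar surface: ~500 nm (visible light)
```

So a dial with 0.1 nm increments is, resolution-wise, more than fine for anything thermal.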
I need to think on this. As for now, I look at Langford’s work as art, and one of his interests is, effectively, to connect art and science. Let me quote one of his commentaries on one of his images: “Light and matter dance at 30°C, upon what is essentially a Calcium-Silicate substrate through which light and various chemicals flow. Swirling Yin-Yang patterns reminiscent of Solar flares and magnetic lines of force also remind me of fractal patterns.” [My italics.]
He does phrase it very beautifully, doesn’t he? Maybe I will find some deeper meaning to it later. Dr. Langford’s suggestion to re-phrase quantum-mechanical models in terms of Poynting vectors is one that strikes a note, and there are other ideas there as well. It must be possible to find quantum-mechanical effects by further analyzing, for example, the relation between temperature and RIs, indeed – and to use the formal (consistent and complete!) language of quantum mechanics to (also) explain Dr. Langford’s findings. This would conclusively relate the micro-level of quantum physics to the macro-level of crystals (isotropic or anisotropic structures), and it would not require supercooled condensates or massive investments in new accelerator facilities.
It would also provide amateur physicists with a way to discover and verify all by themselves. That would be a great result in itself. 🙂
Post scriptum (27 March): Looking at the papers again, I do not see a shift in spectral lines. Spectral lines correspond to differences between quantized energies in electron orbitals. These are either atomic orbitals or molecular orbitals (valence electrons): shifts between atomic orbitals correspond to spectral lines in the visible spectrum (Rydberg-scale energies) while, in the case of molecular orbitals, microwave photons are being absorbed or emitted. Temperature just increases the intensity of the photon beams going in and out of the system (the rock sample, in this case), and so it causes a shift of the spectrum, but the lines are what they are: their energy is and remains what it is (E = hf). Of course, the superposition principle tells us the energies of microwave and visible-spectrum photons can combine in what resembles a normal distribution around a mean (which, yes, shifts with temperature alright).
As for the gist of the matter, yes, of course, what Dr. Langford is seeing, are quantum-mechanical effects alright.
Post scriptum (9 April 2021): In the preceding week, I found that Dr. Langford seems to find my math too difficult, and turns to pseudo-scientists such as Nassim Haramein, and contributes to Haramein’s Resonance Science Foundation. I dissociate completely from such references and like associations. Everyone is free to seek inspiration elsewhere, but Haramein’s mystical stories are definitely not my cup of tea.
Post scriptum (25 March 2021): Because this post is so extremely short and happy, I want to add a sad anecdote which illustrates what I have come to regard as the sorry state of physics as a science.
A few days ago, an honest researcher put me in cc of an email to a much higher-brow researcher. I won’t reveal names, but the latter – I will call him X – works at a prestigious accelerator lab in the US. The gist of the email was a question on an article of X: “I am still looking at the classical model for the deep orbits. But I have been having trouble trying to determine if the centrifugal and spin-orbit potentials have the same relativistic correction as the Coulomb potential. I have also been having trouble with the Ademko/Vysotski derivation of the Veff = V×E/mc² – V²/2mc² formula.”
I was greatly astonished to see X answer this: “Hello – What I know is that this term comes from the Bethe-Salpeter equation, which I am including (#1). The authors say in their book that this equation comes from the Pauli’s theory of spin. Reading from Bethe-Salpeter’s book [Quantum mechanics of one and two electron atoms]: “If we disregard all but the first three members of this equation, we obtain the ordinary Schroedinger equation. The next three terms are peculiar to the relativistic Schroedinger theory”. They say that they derived this equation from covariant Dirac equation, which I am also including (#2). They say that the last term in this equation is characteristic for the Dirac theory of spin ½ particles. I simplified the whole thing by choosing just the spin term, which is already used for hyperfine splitting of normal hydrogen lines. It is obviously approximation, but it gave me a hope to satisfy the virial theorem. Of course, now I know that using your Veff potential does that also. That is all I know.” [I added the italics/bold in the quote.]
So I see this answer while browsing through my emails on my mobile phone, and I am disgusted – thinking: Seriously? You get to publish in high-brow journals, but so you do not understand the equations, and you just drop terms and pick the ones that suit you to make your theory fit what you want to find? And so I immediately reply to all, politely but firmly: “All I can say, is that I would not use equations which I do not fully understand. Dirac’s wave equation itself does not make much sense to me. I think Schroedinger’s original wave equation is relativistically correct. The 1/2 factor in it has nothing to do with the non-relativistic kinetic energy, but with the concept of effective mass and the fact that it models electron pairs (two electrons – neglect of spin). Andre Michaud referred to a variant of Schroedinger’s equation including spin factors.”
Now X replies this, also from his iPhone: “For me the argument was simple. I was desperate trying to satisfy the virial theorem after I realized that ordinary Coulomb potential will not do it. I decided to try the spin potential, which is in every undergraduate quantum mechanical book, starting with Feynman or Tippler, to explain the hyperfine hydrogen splitting. They, however, evaluate it at large radius. I said, what happens if I evaluate it at small radius. And to my surprise, I could satisfy the virial theorem. None of this will be recognized as valid until one finds the small hydrogen experimentally. That is my main aim. To use theory only as a approximate guidance. After it is found, there will be an explosion of “correct” theories.” A few hours later, he makes things even worse by adding: “I forgot to mention another motivation for the spin potential. I was hoping that a spin flip will create an equivalent to the famous “21cm line” for normal hydrogen, which can then be used to detect the small hydrogen in astrophysics. Unfortunately, flipping spin makes it unstable in all potential configurations I tried so far.”
I have never come across a more blatant case of making a theory fit whatever you want to prove (apparently, X believes Mills’ hydrinos (hypothetical small hydrogen) are not a fraud), and it saddens me deeply. Of course, I do understand one will want to fiddle and modify equations when working on something, but you don’t do that when these things are going to get published by serious journals. Just goes to show how physicists effectively got lost in math, and how ‘peer reviews’ actually work: they don’t.
I added an Annex to a paper that talks about all of the fancy stuff quantum physicists like to talk about, like scattering matrices and high-energy particle events. The Annex, however, is probably my simplest and shortest summary of the ordinariness of wavefunction math, including a quick overview of what quantum-mechanical operators actually are. It does not make use of state vector algebra or the usual high-brow talk about Hilbert spaces and what have you: you only need to know what a derivative is, and combine it with our realist interpretation of what the wavefunction actually represents.
I think I should do a paper on the language of physics: to show how (i) rotations (i, j, k), (ii) scalars (constants or just numerical values), (iii) vectors (real vectors (e.g. position vectors) and pseudovectors (e.g. angular frequency or momentum)), and (iv) operators (derivatives of the wavefunction with respect to time and the spatial directions) form ‘words’ (e.g. the energy and momentum operators), and how these ‘words’ then combine into meaningful statements (e.g. Schroedinger’s equation).
All of physics can then be summed up in a half-page or so. All the rest is thermodynamics 🙂 JL
PS: You only get collapsing wavefunctions when adding uncertainty to the models (i.e. our own uncertainty about the energy and momentum). The ‘collapse’ of the wavefunction (let us be precise, the collapse of the (dissipating) wavepacket) thus corresponds to the ‘measurement’ operation. 🙂
PS2: Incidentally, the analysis also gives an even more intuitive explanation of Einstein’s mass-energy equivalence relation, which I summarize in a reply to one of the many ‘numerologist’ physicists on ResearchGate (copied below).
I quote: “Seen are Golgi apparatus, mitochondria, endoplasmic reticulum, cell wall, and hundreds of protein structures and membrane-bound organelles. The cell structure is of a Eukaryote cell i.e. a multicellular organism which means it can correspond to the cell structure of humans, dogs, or even fungi and plants.” These images were apparently put together from “X-ray, nuclear magnetic resonance (NMR) and cryoelectron microscopy datasets.”
I think it is one of those moments where it feels great to be human. 🙂
The electromagnetic force has an asymmetry: the magnetic field lags the electric field by a phase shift of 90 degrees. We can use complex notation to write the E and B vectors as functions of each other. Indeed, the Lorentz force on a charge is equal to F = qE + q(v×B). Hence, if we know the electric field E, then we know the magnetic field B: B is perpendicular to E, and its magnitude is 1/c times the magnitude of E. We may, therefore, write:
B = –iE/c
The minus sign in the B = –iE/c expression is there because we need to combine several conventions here. Of course, there is the classical (physical) right-hand rule for E and B, but we also need to combine the right-hand rule for the coordinate system with the convention that multiplication with the imaginary unit amounts to a counterclockwise rotation by 90 degrees. Hence, the minus sign is necessary for the consistency of the description. It ensures that we can associate the ae^(iEt/ħ) and ae^(−iEt/ħ) functions with left- and right-handed spin (angular momentum), respectively.
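To make the rotation convention concrete, here is a two-line numeric sketch (plain complex arithmetic; the identification of –i with the E-to-B rotation is, of course, the interpretation defended above):

```python
import cmath

E = 1 + 0j  # a unit 'E-field' phasor along the real (x) axis

# Multiplication by i is a counterclockwise rotation by 90 degrees (+pi/2)...
assert cmath.isclose(1j * E, cmath.exp(1j * cmath.pi / 2) * E)

# ...so multiplication by -i (as in B = -iE/c) is a clockwise rotation:
B_direction = -1j * E
print(cmath.phase(B_direction))  # -pi/2: B lags E by a quarter cycle
```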
Now, we can easily imagine an antiforce: an electromagnetic antiforce would have a magnetic field which precedes the electric field by 90 degrees, and we can do the same for the nuclear force (EM and nuclear oscillations are 2D and 3D oscillations, respectively). It is just an application of Occam’s Razor: the mathematical possibilities in the description (notations and equations) must correspond to physical realities, and vice versa (one-to-one). Hence, to describe antimatter, all we have to do is to put a minus sign in front of the wavefunction. [Of course, we should also take the opposite of the charge(s) of its matter counterpart, and please note we have a possible plural here (charges) because we think of neutral particles (e.g. neutrons, or neutral mesons) as consisting of opposite charges.] This is just the principle which we already applied when working out the equation for the neutral antikaon (see Annex IV and V of the above-referenced paper):
Don’t worry if you do not understand too much of the equations: we just put them there to impress the professionals. 🙂 The point is this: matter and antimatter are each other’s opposite, literally: the wavefunctions ae^(iEt/ħ) and –ae^(iEt/ħ) add up to zero, and they correspond to opposite forces too! Of course, we also have light-particles, so we have antiphotons and antineutrinos too.
We think this explains the rather enormous amount of so-called dark matter and dark energy in the Universe (the Wikipedia article on dark matter says it accounts for about 85% of the total mass/energy of the Universe, while the article on the observable Universe puts it at about 95%!). We did not say much about this in our YouTube talk about the Universe, but we think we understand things now. Dark matter is called dark because it does not appear to interact with the electromagnetic field: it does not seem to absorb, reflect or emit electromagnetic radiation, and is, therefore, difficult to detect. That should not be a surprise: antiphotons would not be absorbed or emitted by ordinary matter. Only anti-atoms (think of an antihydrogen atom as an antiproton and a positron here) would do so.
So did we explain the mystery? We think so. 🙂
We will conclude with a final remark/question. The opposite spacetime signature of antimatter is, obviously, equivalent to a swap of the real and imaginary axes. This begs the question: can we, perhaps, dispense with the concept of charge altogether? Is geometry enough to understand everything? We are not quite sure how to answer this question, but we do not think so: a positron is a positron, and an electron is an electron: the sign of the charge (positive and negative, respectively) is what distinguishes them! We also think charge is conserved, at the level of the charges themselves (see our paper on matter/antimatter pair production and annihilation).
We, therefore, think of charge as the essence of the Universe. But, yes, everything else is sheer geometry! 🙂
There are two branches of physics. The nicer branch studies equilibrium states: simple laws, stable particles (electrons and protons, basically), the expanding (oscillating?) Universe, etcetera. This branch includes the study of dynamical systems which we can only describe in terms of probabilities or approximations: think of kinetic gas theory (thermodynamics) or, much simpler, hydrodynamics (the flow of water: Feynman, Vol. II, chapters 40 and 41), about which Feynman writes this:
“The simplest form of the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. You will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.” (Feynman, I-3-7)
Still, we believe first principles do apply to the flow of water through a pipe. The second branch of physics, in contrast, studies non-stable particles: transients (charged kaons and pions, for example) or resonances (very short-lived intermediate energy states). The physicists who study these must be commended, but they resemble econometrists modeling input-output relations: if they are lucky, they will get some kind of mathematical description of what goes in and what goes out, but the math does not tell them how stuff actually happens. It leads one to think about the difference between a theory, a calculation and an explanation. Simplifying somewhat, we can represent such input-output relations by thinking of a process that will be operating on some state |ψ⟩ to produce some other state |ϕ⟩, which we write like this:
A is referred to as a Hermitian matrix if the process is reversible. Reversibility looks like time reversal, which can be represented by taking the complex conjugate: ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩. We put a minus sign in front of the imaginary unit, so we have –i instead of i in the wavefunctions (or i instead of –i with respect to the usual convention for denoting the direction of rotation). Processes may not be reversible, in which case we talk about symmetry-breaking: CPT-symmetry is always respected so, if T-symmetry (time) is broken, CP-symmetry is broken as well. There is nothing magical about that.
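The conjugation rule ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩ can be checked numerically for any matrix A whatsoever – a small sketch (the bra-ket amplitudes are just vector-matrix-vector products here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random complex state vectors |psi>, |phi> and a random complex matrix A.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

lhs = np.conj(phi.conj() @ A @ psi)   # <phi|A|psi>*
rhs = psi.conj() @ A.conj().T @ phi   # <psi|A-dagger|phi>
assert np.isclose(lhs, rhs)           # holds for any A

# For a Hermitian A (A = A-dagger), the amplitude and its time-reverse match:
H = A + A.conj().T
assert np.isclose(np.conj(phi.conj() @ H @ psi), psi.conj() @ H @ phi)
```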
Physicists found the description of these input-output relations can be simplified greatly by introducing quarks (see Annex II of our paper on ontology and physics). Quarks have partial charge and, more generally, mix physical dimensions (mass/energy, spin or (angular) momentum). They create some order – think of it as some kind of taxonomy – in the vast zoo of (unstable) particles, which is great. However, we do not think there was a need to give them some kind of ontological status: unlike plants or insects, partial charges do not exist.
We also think the association between forces and (virtual) particles is misguided. Of course, one might say forces are being mediated by particles (matter- or light-particles), because particles effectively pack energy and angular momentum (light-particles – photons and neutrinos – differ from matter-particles (electrons, protons) in that they carry no charge, but they do carry electromagnetic and/or nuclear energy) and force and energy are, therefore, being transferred through particle reactions, elastically or non-elastically. However, we think it is important to clearly separate the notion of fields and particles: they are governed by the same laws (conservation of charge, energy, and (linear and angular) momentum, and – last but not least – (physical) action) but their nature is very different.
W.E. Lamb (1995), nearing the end of his very distinguished scientific career, wrote about “a comedy of errors and historical accidents”, but we think the business is rather serious: we have reached the End of Science. We have solved Feynman’s U = 0 equation. All that is left, is engineering: solving practical problems and inventing new stuff. That should be exciting enough. 🙂
Post scriptum: I added an Annex (III) to my paper on ontology and physics, with what we think of as a complete description of the Universe. It is abstruse but fun (we hope!): we basically add a description of events to Feynman’s U = 0 (un)worldliness formula. 🙂
One sometimes wonders what keeps amateur physicists awake. Why is it that they want to understand quarks and wave equations, or delve into complicated math (perturbation theory, for example)? I believe it is driven by the same human curiosity that drives philosophy. Physics stands apart from other sciences because it examines the smallest of smallest – the essence of things, so to speak.
Unlike other sciences (the human sciences in particular, perhaps), physics also seeks to reduce the number of concepts, rather than multiply them – even if, sadly enough, physicists do not always do a good job at that. However, generally speaking, physics and math may, effectively, be considered to be the King and Queen of Science, respectively.
The Queen is an eternal beauty, of course, because Her Language may mean anything. Physics, in contrast, talks specifics: physical dimensions (force, distance, energy, etcetera), as opposed to mathematical dimensions – which are mere quantities (scalars and vectors).
Science differs from religion in that it seeks to experimentally verify its propositions. It measures rather than believes. These measurements are cross-checked by a global community and, thereby, establish a non-subjective reality. The question of whether reality exists outside of us, is irrelevant: it is a category mistake (Ryle, 1949). It is like asking why we are here: we just are.
All is in the fundamental equations. An equation relates a measurement to Nature’s constants. Measurements – energy/mass, or velocities – are relative. Nature’s constants do not depend on the frame of reference of the observer and we may, therefore, label them as being absolute. This corresponds to the difference between variables and parameters in equations. The speed of light (c) and Planck’s quantum of action (h) are parameters in the E/m = c² and E = hf relations, respectively.
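A trivial worked example of constants-as-parameters, using the values of h, c and e that the 2019 SI revision fixed exactly (the 550 nm photon is just an illustrative pick):

```python
# h, c and e are exact by definition since the 2019 SI revision.
h = 6.62607015e-34    # J·s (Planck constant)
c = 299792458.0       # m/s (speed of light)
eV = 1.602176634e-19  # J (elementary charge, i.e. one electronvolt in joules)

# E = hf: the frequency f is the variable, h is the parameter.
f = c / 550e-9            # frequency of a ~550 nm (green) photon, in Hz
E = h * f                 # photon energy, in J
print(round(E / eV, 2))   # ~2.25 eV
```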
Feynman (II-25-6) is right that the Great Law of Nature may be summarized as U = 0, but he also notes that this simple notation “just hides the complexity in the definitions of symbols” and is, therefore, just a trick. It is like talking of the night “in which all cows are equally black” (Hegel, Phänomenologie des Geistes, Vorrede, 1807). Hence, the U = 0 equation needs to be separated out. I would separate it out as:
We imagine things in 3D space and one-directional time (Lorentz, 1927, and Kant, 1781). The imaginary unit operator (i) represents a rotation in space. A rotation takes time. Its physical dimension is, therefore, s/m or -s/m, as per the mathematical convention in place (Minkowski’s metric signature and counter-clockwise evolution of the argument of complex numbers, which represent the (elementary) wavefunction).
Velocities can be linear or tangential, giving rise to the concepts of linear versus angular momentum. Tangential velocities imply orbitals: circular and elliptical orbitals are closed. Particles are pointlike charges in closed orbitals. We are not sure if non-closed orbitals might correspond to some reality: linear oscillations are field particles, but we do not think of lines as non-closed orbitals. The curvature of real space (the Universe we live in) suggests we should, perhaps, but we are not sure such thinking is productive (efforts to model gravity as a residual force have failed so far).
Space and time are innate or a priori categories (Kant, 1781). Elementary particles can be modeled as pointlike charges oscillating in space and in time. The concept of charge could be dispensed with if there were no lightlike particles: photons and neutrinos, which carry energy but no charge. The oscillating charge is pointlike but may have a finite (non-zero) physical dimension, which explains the anomalous magnetic moment of the free (Compton) electron. However, it only appears to have a non-zero dimension when the electromagnetic force is involved (the proton has no anomalous magnetic moment, and its radius is about 3.35 times smaller than the calculated radius of the pointlike charge inside of an electron). Why? We do not know: elementary particles are what they are.
We have two forces: electromagnetic and nuclear. One of the most remarkable things is that the E/m = c² relation holds for both electromagnetic and nuclear oscillations, or combinations thereof (superposition theorem). Combined with the oscillator model (E = ma²ω² = mc² and, therefore, c = aω), this makes us think of c² as modeling an elasticity or plasticity of space. Why two oscillatory modes only? In 3D space, we can only imagine oscillations in one, two and three dimensions (line, plane, and sphere). The idea of four-dimensional spacetime is not relevant in this context.
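As a consistency check of the oscillator model (assuming the ring-current values a = ħ/mc and ω = E/ħ = mc²/ħ for the electron, as used elsewhere on this blog), c = aω holds identically:

```python
# c = a·omega in the ring-current model, with a = hbar/(m·c) (the reduced
# Compton radius) and omega = E/hbar = m·c²/hbar.
hbar = 1.054571817e-34   # J·s (reduced Planck constant)
c = 299792458.0          # m/s
m_e = 9.1093837015e-31   # kg (electron mass, CODATA 2018)

a = hbar / (m_e * c)         # ~3.86e-13 m: the reduced Compton wavelength
omega = m_e * c**2 / hbar    # ~7.76e20 rad/s
print(a * omega / c)         # ~1.0: a·omega = c by construction
```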
Photons and neutrinos are linear oscillations and, because they carry no charge, travel at the speed of light. Electrons and muon-electrons (and their antimatter counterparts) are 2D oscillations packing electromagnetic and nuclear energy, respectively. The proton (and antiproton) pack a 3D nuclear oscillation. Neutrons combine positive and negative charge and are, therefore, neutral. Neutrons may or may not combine the electromagnetic and nuclear force: their size (more or less the same as that of the proton) suggests the oscillation is nuclear.
[A table followed here, matching form factors to particles and composite systems – an orbital electron (e.g. in ¹H), pions (π±/π⁰)?, the neutron (n)? and the deuteron (D⁺)? – each with its corresponding field particle.]
The theory is complete: each theoretical/mathematical/logical possibility corresponds to a physical reality, with spin distinguishing matter from antimatter for particles with the same form factor.
When reading this, my kids might call me and ask whether I have gone mad. Their doubts and worry are not random: the laws of the Universe are deterministic (our macro-time scale introduces probabilistic determinism only). Free will is real, however: we analyze and, based on our analysis, we determine the best course to take when taking care of business. Each course of action is associated with an anticipated cost and return. We do not always choose the best course of action because of past experience, habit, laziness or – in my case – an inexplicable desire to experiment and explore new territory.
The work on the neutron model inspired me to have another look at the 1/4 factor which bothered me when applying mass-without-mass models to the proton. I think I nailed it: it is just another form factor. Have a look at the proton paper. Mystery solved – finally ! 🙂
The proton model will be key. We cannot explain it in the typical ‘mass without mass’ model of zittering charges: we get a 1/4 factor in the explanation of the proton radius, which is impossible to get rid of unless we assume some ‘strong’ force comes into play. That is why I prioritize a ‘straight’ attack on the electron and the proton-electron bond in a primitive neutron model.
The calculation of forces inside a muon-electron and a proton is an interesting exercise: it is the only thing which explains why an electron annihilates a positron while electrons and protons can live together (the ‘antimatter’ nature of charged particles only shows in the opposite spin directions of the fields – so it is only when the ‘structure’ of matter-antimatter pairs is different that they will not annihilate each other).
In short, 2021 will be an interesting year for me. The intent of my last two papers (on the deuteron model and the primitive neutron model) was to think of energy values: the energy value of the bond between electron and proton in the neutron, and the energy value of the bond between proton and neutron in a deuteron nucleus. But, yes, the more fundamental work remains to be done !
In my ‘signing off’ post, I wrote I had enough of physics but that my last(?) ambition was to “contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.” Well… The paper is there. And I am extremely pleased with the result. Thank you, Mr. Meulenberg. You sure have good intuition.
I took the opportunity to revisit Yukawa’s nuclear potential and demolish his modeling of a new nuclear force without a charge to act on. Looking back at the past 100 years of physics history, I now start to think that was the decisive destructive moment in physics: that 1935 paper, which started off all of the hype on virtual particles, quantum field theory, and a nuclear force that could not possibly be electromagnetic plus – totally not done, of course ! – utter disregard for physical dimensions and the physical geometry of fields in 3D space or – taking retardation effects into account – 4D spacetime. Fortunately, we have hope: the 2019 fixing of SI units puts physics firmly back onto the road to reality – or so we hope.
Paolo Di Sia‘s and my paper shows one gets very reasonable energy and separation distances for nuclear bonds and inter-nucleon distances when assuming the presence of magnetic and/or electric dipole fields arising from deep electron orbitals. The model shows one of the protons pulling the ‘electron blanket’ from another proton (the neutron) towards its own side so as to create an electric dipole moment, just like a valence electron in a chemical bond. So it is like water, then? Water is a polar molecule, but we do not necessarily need to start with polar configurations when trying to expand this model so as to inject some dynamics into it (spherically symmetric orbitals are probably easier to model). Hmm… Perhaps I need to look at the thermodynamic equations for dry versus wet water once again… Phew ! Where to start?
I have no experience – I have very little math, actually – with modeling molecular orbitals. So I should, perhaps, contact a friend from a few years ago – living in Hawaii and pursuing more spiritual matters too – who did just that a long time ago: modeling orbitals using Schroedinger’s wave equation (I think Schroedinger’s equation is relativistically correct – the naysayers just misinterpret the concept of ‘effective mass’). What kind of wave equation are we looking at? One that integrates inverse-square and inverse-cube force field laws arising from charges and the dipole moments they create while moving. [Hey! Perhaps we can relate these inverse-square and inverse-cube fields to the second- and third-order terms in the binomial expansion of the relativistic mass formula (see the section on kinetic energy in my paper on one of Feynman’s more original renderings of Maxwell’s equations) but… Well… Probably best to start by seeing how Feynman got those field equations out of Maxwell’s equations. It is a bit buried in his development of the Liénard and Wiechert equations, which are written in terms of the scalar and vector potentials φ and A instead of the E and B vectors, but it should all work out.]
If the nuclear force is electromagnetic, then these ‘nuclear orbitals’ should respect the Planck-Einstein relation. So then we can calculate frequencies and radii of orbitals now, right? The use of natural units and of the imaginary unit to represent rotations/orthogonality in space might make the calculations easy (B = iE). Indeed, with the 2019 revision of SI units, I might need to re-evaluate the usefulness of natural units (I always stayed away from them because they ‘hide’ the physics in the math by abstracting away the physical dimensions).
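Just to check the Planck-Einstein arithmetic, here is a quick back-of-the-envelope calculation in Python – only a sanity check on f = E/h for the electron and proton rest energies, using the (exact) 2019 SI value of Planck’s constant, not part of any model:

```python
# Planck-Einstein frequencies f = E/h for the electron and proton rest energies
h = 4.135667696e-15          # Planck's constant in eV·s (exact since the 2019 SI revision)
E_electron = 0.51099895e6    # electron rest energy in eV (CODATA)
E_proton = 938.27208816e6    # proton rest energy in eV (CODATA)

f_electron = E_electron / h  # ≈ 1.236e20 Hz
f_proton = E_proton / h      # ≈ 2.269e23 Hz
print(f"{f_electron:.4e} Hz, {f_proton:.4e} Hz")
```

These are the (linear) frequencies; the corresponding angular frequencies are just ω = 2πf = E/ħ.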
Hey ! Perhaps we can model everything with quaternions, using the imaginary units (i, j and k) to represent rotations in 3D space so as to ensure consistent application of the appropriate right-hand rules always (special relativity gets added to the mix, so we probably need to relate the (ds)² = (dx)² + (dy)² + (dz)² – (dct)² expression to the modified version of Hamilton’s q = a + ib + jc – kd expression then). Using vector equations throughout and thinking of h as a vector (something with a magnitude and a direction) when using the E = hf and h = pλ Planck-Einstein relations should do the trick, right? [In case you wonder how we can write f as a vector: angular frequency is a vector too. The Planck-Einstein relation is valid for linear as well as circular oscillations: see our paper on the interpretation of the de Broglie wavelength.]
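For what it is worth, the non-commutativity that encodes those right-hand rules is easy to see numerically. Below is a minimal sketch of the Hamilton product in Python (the function name qmul is mine, just for illustration): it verifies that ij = k while ji = –k, and rotates the x-axis onto the y-axis using the standard qvq* sandwich.

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))  # (0, 0, 0, 1): i·j = k
print(qmul(j, i))  # (0, 0, 0, -1): j·i = -k, so the order encodes the right-hand rule

# Rotate the x-axis by 90° about the z-axis: v' = q v q* (q* = conjugate of q)
theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))
q_conj = (q[0], -q[1], -q[2], -q[3])
v = (0.0, 1.0, 0.0, 0.0)           # the x-axis as a 'pure' quaternion
print(qmul(qmul(q, v), q_conj))    # ~ (0, 0, 1, 0): x-axis lands on the y-axis
```

Whether this notation really buys anything for the nuclear orbitals is an open question, of course; the snippet only shows the bookkeeping is consistent.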
Oh – and while special relativity is there because of Maxwell’s equations, gravity (general relativity) should be left out of the picture. Why? Because we would like to explain gravity as a residual very-far-field force. And trying to integrate gravity inevitably leads one to analyze particles as ‘black holes.’ Not nice, philosophically speaking. In fact, any 1/rⁿ field inevitably leads one to think of some kind of black hole at the center, which is why thinking of fundamental particles in terms of ring currents and dipole moments makes so much sense! [We need nothingness and infinity as mathematical concepts (limits, really) but they cannot possibly represent anything real, right?]
The consistent use of the Planck-Einstein law to model these nuclear electron orbitals should probably involve multiples of h to explain their size and energy: E = nhf rather than E = hf. For example, when calculating the radius of an orbital of a pointlike charge with the energy of a proton, one gets a radius that is only 1/4 of the proton radius (0.21 fm instead of 0.82 fm, approximately). To make the radius fit that of a proton, one has to use the E = 4hf relation. Indeed, for the time being, we should probably continue to reject the idea of using fractions of h to model deep electron orbitals. I also think we should avoid superluminal velocity concepts.
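The numbers behind that 1/4 factor are easy to reproduce. The sketch below, a back-of-the-envelope check rather than a derivation, takes a pointlike charge in a circular orbit at (nearly) lightspeed, so r = c/ω with ω = E/(nħ), i.e. r = nħc/E:

```python
# Orbital radius of a pointlike charge carrying the proton's energy,
# from r = c/ω with ω = E/(n·ħ), i.e. r = n·ħc/E
hbar_c = 197.3269804      # ħc in MeV·fm (CODATA)
E_proton = 938.27208816   # proton rest energy in MeV

r1 = 1 * hbar_c / E_proton  # E = hf  (n = 1): r ≈ 0.2103 fm
r4 = 4 * hbar_c / E_proton  # E = 4hf (n = 4): r ≈ 0.8412 fm, close to the measured proton radius
print(round(r1, 4), round(r4, 4))
```

The n = 1 value is just the proton’s reduced Compton radius; multiplying by four brings it in line with the measured charge radius, which is the point being made above.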
This post sounds like madness? Yes. And then, no! To be honest, I think of it as one of the better Aha! moments in my life. 🙂
Brussels, 30 December 2020
Post scriptum (1 January 2021): Lots of stuff coming together here ! 2021 will definitely see the Grand Unified Theory of Classical Physics becoming somewhat more real. It looks like Mills is going to make a major addition/correction to his electron orbital modeling work and, hopefully, manage to publish the gist of it in the eminent mainstream Nature journal. That makes a lot of sense: to move from an atom to an analysis of nuclei or complex three-particle systems, one should combine singlet and doublet energy states – if only to reduce three-body problems to two-body problems. 🙂 I still do not buy the fractional use of Planck’s quantum of action, though. Especially now that we got rid of the concept of a separate ‘nuclear’ charge (there is only one charge: the electric charge, and it comes in two ‘colors’): if Planck’s quantum of action is electromagnetic, then it comes in wholes or multiples. No fractions. Fractional powers of distance functions in field or potential formulas are OK, however. 🙂
In 1995, W.E. Lamb Jr. wrote the following on the nature of the photon: “There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed. There are very good substitute words for “photon”, (e.g., “radiation” or “light”), and for “photonics” (e.g., “optics” or “quantum optics”). Similar objections are possible to use of the word “phonon”, which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits.”
The opinion of a Nobel Prize laureate carries some weight, of course, but we think the concept of a photon makes sense. As the electron moves from one (potential) energy state to another – from one atomic or molecular orbital to another – it builds an oscillating electromagnetic field which has an integrity of its own and, therefore, is not only wave-like but also particle-like.
We, therefore, dedicated the fifth chapter of our re-write of Feynman’s Lectures to a dual analysis of EM radiation (and, yes, this post is just an announcement of the paper so you are supposed to click the link to read it). It is, basically, an overview of a rather particular expression of Maxwell’s equations which Feynman uses to discuss the laws of radiation. I wonder how to – possibly – ‘transform’ or ‘transpose’ this framework so it might apply to deep electron orbitals and – possibly – proton-neutron oscillations.
Those who read this blog, or my papers, know that the King of Science, physics, is in deep trouble. [In case you wonder, the Queen of Science is math.]
The problem is rather serious: a lack of credibility. It would kill any other business, but things work differently in academia. The question is this: how many professional physicists would admit this? An even more important question is: how many of those who admit it would try to do something about it?
We hope the proportion of both is increasing – so we can trust that at least the dynamics of all of this are OK. I am hopeful – but I would not bet on it.
“Sean Carroll is one of the Gurus that is part of the problem rather than the solution: he keeps peddling approaches that have not worked in the past, and can never be made to work in the future. I am an amateur physicist only, but I have not come across a problem that cannot be solved by ‘old’ quantum physics, i.e. a combination of Maxwell’s equations and the Planck-Einstein relation. Lamb shift, anomalous magnetic moment, electron-positron pair creation/annihilation (a nuclear process), behavior of electrons in semiconductors, superconductivity, etc. There is a (neo-)classical solution for everything: no quantum field and/or perturbation theories are needed. Protons and electrons as elementary particles (and neutrons as the bound state of a proton and a nuclear electron), and photons and neutrinos as lightlike particles, carrying electromagnetic and strong field energy respectively. That’s it. Nothing more. Nothing less. Everyone who thinks otherwise is ‘lost in math’, IMNSHO.”
Brutal? Yes. Very much so. The more important question is this: is it true? I cannot know for sure, but it comes across as being truthful to me.
The creation and annihilation of matter-antimatter pairs is usually taken as proof that, somehow, fields can condense into matter-particles or, conversely, that matter-particles can somehow turn into light-particles (photons), which are nothing but traveling electromagnetic fields. However, pair creation always requires the presence of another particle and one may, therefore, legitimately wonder whether the electron and positron were not already present, somehow.
Carl Anderson’s original discovery of the positron involved cosmic rays hitting atmospheric molecules, a process which involves the creation of unstable particles, including pions. Cosmic rays themselves are, unlike what the name suggests, not rays – not like gamma rays, at least – but highly energetic protons and atomic nuclei. Hence, they consist of matter-particles, not of photons. The creation of electron-positron pairs from cosmic rays also involves pions as intermediate particles:
1. The π+ and π– particles have a net positive and negative charge of 1 e+ and 1 e– respectively. According to mainstream theory, this is because they combine a u and a d quark but – abandoning the quark hypothesis – we may want to think their charge could be explained, perhaps, by the presence of a positron or an electron!
2. The neutral pion, in turn, might, perhaps, consist of an electron and a positron, which should annihilate but take some time to do so!
Neutral pions have a much shorter lifetime – in the order of 10⁻¹⁷ s only – than π+ and π– particles, whose lifetime is a much more respectable 2.6×10⁻⁸ s. Something you can effectively measure, in other words. In short, despite similar energies, neutral pions do not seem to have a lot in common with π+ and π– particles. Even the energy difference is quite substantial when measured in terms of the electron mass: the neutral pion has an energy of about 135 MeV, while π+ and π– particles have an energy of almost 140 MeV. To be precise, the difference is about 4.6 MeV. That is quite a lot: the electron rest energy is 0.511 MeV only. So it is not stupid to think that π+ and π– particles might carry an extra positron or electron, somehow. In our not-so-humble view, this is as legitimate as thinking – like Rutherford did – that a neutron should, somehow, combine a proton and an electron.
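The arithmetic behind those numbers is simple enough to verify (PDG values, rounded; this is only energy bookkeeping, not a model of what is inside the pion):

```python
# Charged-versus-neutral pion mass gap, measured in electron rest masses
m_pi_charged = 139.570  # π± rest energy in MeV (PDG)
m_pi_neutral = 134.977  # π0 rest energy in MeV (PDG)
m_electron = 0.511      # electron rest energy in MeV

delta = m_pi_charged - m_pi_neutral  # ≈ 4.59 MeV
print(delta, delta / m_electron)     # the gap amounts to ≈ 9 electron rest masses
```

The gap is about nine electron rest masses rather than one, so any extra-electron (or extra-positron) interpretation would have to park the surplus in binding or field energy.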
The whole analysis – both in the QED as well as in the QCD sector of quantum physics – would radically alter when thinking of neutral particles – such as neutrons and π0 particles – not as consisting of quarks but of protons/antiprotons and/or electrons/positrons cancelling each other’s charges out. We have not seen much – if anything – which convinces us this cannot be correct. We, therefore, believe a more realist interpretation of quantum physics should be possible for high-energy phenomena as well. With a more realist theory, we mean one that does not involve quantum field and/or renormalization theory.
Such a new theory would be consistent with the principle that, in Nature, the number of charged particles need not be conserved, but that total (net) charge is always conserved. Hence, charged particles could appear and disappear, but they would be part of neutral particles. All particles in such processes are very short-lived anyway, so what is a particle here? We should probably think of these things as an unstable combination of various bits and bobs, shouldn’t we? 😊
So, yes, we did a paper on this. And we like it. Have a look: it’s on ResearchGate, academia.edu, and – as usual – Phil Gibb’s site (which has all of our papers, including our very early ones, which you might want to take with a pinch of salt). 🙂
You may be so familiar with quarks that you do not want to question this hypothesis anymore. If so, let me ask you: where do the quarks go when a π± particle disintegrates into a muon (μ±)?
 They disintegrate into muons (muon-electrons or muon-positrons), which themselves then decay into an electron or a positron respectively.
The point estimate of the lifetime of a neutral pion of the Particle Data Group (PDG) is about 8.5×10⁻¹⁷ s. Such short lifetimes cannot be measured in a classical sense: such particles are usually referred to as resonances (rather than particles) and the lifetime is calculated from a so-called resonance width. We may discuss this approach in more detail later.
Of course, it is much smaller when compared to the proton (rest) energy, which is about 938 MeV.