I just wrapped up a discussion with some mainstream physicists, producing what I think of as a final paper on the nuclear force. I was struggling with the apparent non-conservative nature of the nuclear potential, but now I have the solution. It is just like an electric dipole field: not spherically symmetric. Nice and elegant.
I can’t help copying the last exchange with one of the researchers. He works at SLAC and seems to believe hydrinos might really exist. It is funny, and then it is not.
Me: “Dear X – That is why I am an amateur physicist and don’t care about publication. I do not believe in quarks and gluons. 😊 Do not worry: it does not prevent me from being happy. JL”
X: “Dear Jean Louis – The whole physics establishment believes that the neutron is composed of three quarks, gluons and a sea of quark-antiquark pairs. How does that fit into your picture? Best regards, X”
Me: “I see the neutron as a tight system between positive and negative electric charge – combining electromagnetic and nuclear force. The ‘proton + electron’ idea is vague. The idea of an elementary particle is confusing in discussions and must be defined clearly: stable, not-reducible, etcetera. Neutrons decay (outside of the nucleus), so they are reducible. I do not agree with Heisenberg on many fronts (especially not his ‘turnaround’ on the essence of the Uncertainty Principle) so I don’t care about who said what – except Schroedinger, who fell out with both Dirac and Heisenberg, I feel. His reason to not show up at the Nobel Prize occasion in 1933 (where Heisenberg received the prize of the year before, and Dirac/Schroedinger the prize of the year itself) was not only practical, I think – but that’s Hineininterpretierung which doesn’t matter in questions like this. JL”
X: “Dear Jean Louis – I want to make doubly sure. Do I understand you correctly that you are saying that the neutron is really a tight system of proton and electron? If that is so, it is interesting that Heisenberg, inventor of the uncertainty principle, believed the same thing until 1935 (I have it from Pais’ book). Then the idea died because Pauli’s argument won: the neutron’s spin 1/2 follows Fermi-Dirac statistics, and this decided that the neutron is indeed an elementary particle. This would be a very hard sell if you now, after so many years, agree with Heisenberg. By the way, I say in my Phys. Lett. B paper, which uses a k₁/r + k₂/r² potential, that the radius of the small hydrogen is about 5.671 Fermi. But this is very sensitive to what potential one is using. Best regards, X.”
In this blog, we talked a lot about the Zitterbewegung model of an electron: a model that allows us to think of the elementary wavefunction as representing a radius or position vector. We write:
ψ = r = a·e^(±iθ) = a·[cos(±θ) + i·sin(±θ)]
It is just an application of Parson’s ring current or magneton model of an electron. Note we use boldface to denote vectors, and that we think of the sine and cosine here as vectors too! You should note that the sine and cosine are the same function: they differ only because of a 90-degree phase shift: cosθ = sin(θ + π/2). Alternatively, we can use the imaginary unit (i) as a rotation operator and use the vector notation to write: sinθ = i·cosθ.
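Since we treat the imaginary unit as a rotation operator, we can check numerically – a quick sketch using Python’s built-in complex numbers – that multiplying by i amounts to a 90-degree phase shift, i·e^(iθ) = e^(i(θ + π/2)), which is just the cosine–sine relation mentioned above:

```python
import cmath, math

theta = 0.7  # any angle will do
z = cmath.exp(1j * theta)                        # the elementary rotating factor
rotated = 1j * z                                 # multiplying by i ...
shifted = cmath.exp(1j * (theta + math.pi / 2))  # ... equals a 90-degree phase shift
assert abs(rotated - shifted) < 1e-12

# The same 90-degree shift relates cosine and sine: cos(theta) = sin(theta + pi/2)
assert abs(math.cos(theta) - math.sin(theta + math.pi / 2)) < 1e-12
```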
In one of our introductory papers (on the language of math), we show how and why this all works like a charm: when we take the derivative with respect to time, we get the (orbital or tangential) velocity (dr/dt = v), and the second-order derivative gives us the (centripetal) acceleration vector (d²r/dt² = a). The plus/minus sign of the argument of the wavefunction gives us the direction of spin, and we may, perhaps, add a plus/minus sign to the wavefunction as a whole to model matter and antimatter, respectively (the latter assertion is very speculative, though, so we will not elaborate on it here).
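These derivatives are easy to verify numerically. The sketch below (assuming an arbitrary radius a and angular frequency ω, and using finite-difference approximations) checks that the first derivative of r = a·e^(iωt) is the tangential velocity v = iω·r, with magnitude aω, and that the second derivative is the centripetal acceleration −ω²·r:

```python
import cmath

a, omega = 2.0, 3.0                          # arbitrary orbit radius (m) and angular frequency (rad/s)
r = lambda t: a * cmath.exp(1j * omega * t)  # position of the pointlike charge on the ring

t, h = 0.4, 1e-5
v = (r(t + h) - r(t - h)) / (2 * h)              # numerical dr/dt
acc = (r(t + h) - 2 * r(t) + r(t - h)) / h**2    # numerical d2r/dt2

assert abs(v - 1j * omega * r(t)) < 1e-6         # v = i*omega*r: tangential, 90 degrees ahead of r
assert abs(abs(v) - a * omega) < 1e-6            # orbital speed |v| = a*omega
assert abs(acc + omega**2 * r(t)) < 1e-3         # a = -omega^2*r: centripetal, pointing inward
```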
One orbital cycle packs Planck’s quantum of (physical) action, which we can write either as the product of the energy (E) and the cycle time (T), or as the product of the momentum (p) of the charge and the distance travelled, which is the circumference λ of the loop in the inertial frame of reference (we can always add a classical linear velocity component when considering an electron in motion, and we may want to write Planck’s quantum of action as an angular momentum vector (h or ħ) to explain what the Uncertainty Principle is all about – statistical uncertainty, nothing ontological – but let us keep things simple for now):
h = E·T = p·λ
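A quick numerical sanity check (a sketch using CODATA values for the electron, hard-coded below) confirms both products: with E = mc², the cycle time T = h/E and the loop circumference λ = h/p (with p = mc the momentum of the lightspeed charge) give back Planck’s quantum of action, and λ comes out as the electron’s Compton wavelength:

```python
h = 6.62607015e-34     # Planck's quantum of action (J·s), exact in the 2019 SI
c = 299792458.0        # speed of light (m/s), exact
m = 9.1093837015e-31   # electron mass (kg), CODATA 2018

E = m * c**2           # rest energy of the electron (~0.511 MeV)
T = h / E              # cycle time of one loop (~8.09e-21 s)
p = m * c              # momentum of the pointlike charge moving at lightspeed
lam = h / p            # circumference of the loop = Compton wavelength (~2.426e-12 m)

assert abs(E * T - h) / h < 1e-12            # E·T = h by construction
assert abs(p * lam - h) / h < 1e-12          # p·λ = h by construction
assert abs(lam - 2.42631e-12) / lam < 1e-4   # matches the CODATA Compton wavelength
```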
It is important to distinguish between the electron and the charge, which we think of as pointlike: the electron is charge in motion. Charge is just charge: it explains everything, and its nature is, therefore, quite mysterious: is it really a pointlike thing, or is there some fractal structure? Of these things, we know very little, but the small anomaly in the magnetic moment of an electron suggests its structure might be fractal. Think of the fine-structure constant here, as the factor which distinguishes the classical, Compton and Bohr radii of the electron: we associate the classical electron radius with the radius of the pointlike charge, but perhaps we can drill down further.
We also showed how the physical dimensions work out in Schroedinger’s wave equation. Let us jot it down to appreciate what it might model, and appreciate why complex numbers come in handy:

∂ψ/∂t = i·(ħ/2m)·∇²ψ
This is, of course, Schroedinger’s equation in free space, which means there are no other charges around and we, therefore, have no potential energy terms here. The rather enigmatic concept of the effective mass (which is half the total mass of the electron) is just the relativistic mass of the pointlike charge as it whizzes around at lightspeed, so that is the motion which Schroedinger referred to as its Zitterbewegung (Dirac confused it with some motion of the electron itself, further compounding what we think of as de Broglie’s mistaken interpretation of the matter-wave as a linear oscillation: think of it as an orbital oscillation). The 1/2 factor is there in Schroedinger’s wave equation for electron orbitals, but he replaced the effective mass rather subtly (or not-so-subtly, I should say) by the total mass of the electron because the wave equation models the orbitals of an electron pair (two electrons with opposite spin). So we might say he was lucky: the two mistakes together (not accounting for spin, and adding the effective mass of two electrons to get a mass factor) make things come out alright. 🙂
However, we will not say more about Schroedinger’s equation for the time being (we will come back to it): just note the imaginary unit, which does operate like a rotation operator here. Schroedinger’s wave equation, therefore, must model (planar) orbitals. Of course, the plane of the orbital may itself be rotating, and most probably is, because that is what gives us those wonderful shapes of electron orbitals (subshells). Also note the physical dimension of ħ/m: it is a factor which is expressed in m²/s, but when you combine that with the 1/m² dimension of the ∇² operator, you get the 1/s dimension on both sides of Schroedinger’s equation. [The ∇² operator is just the generalization of d²/dx² to three dimensions, so x becomes a vector x: we apply the operator to each of the three spatial coordinates and, when it acts on a vector-valued function, we get another vector, which is why we may call ∇² a vector operator here. Let us move on, because we cannot explain each and every detail here, of course!]
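As for the magnitude of the ħ/m factor, a one-line check (a sketch with CODATA values hard-coded, for the electron) shows the number and the dimension:

```python
hbar = 1.054571817e-34   # reduced Planck constant (J·s = kg·m²/s)
m_e  = 9.1093837015e-31  # electron mass (kg)

D = hbar / m_e           # the hbar/m factor: kg·m²/s divided by kg gives m²/s
print(f"hbar/m = {D:.4e} m^2/s")   # ~1.1577e-04 m²/s

# Combined with the 1/m² dimension of the Laplacian, both sides of
# d(psi)/dt = i·(hbar/2m)·Laplacian(psi) indeed carry a 1/s dimension.
assert abs(D - 1.1577e-4) / D < 1e-3
```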
We need to talk forces and fields now. This ring current model assumes an electromagnetic field which keeps the pointlike charge in its orbit. This centripetal force must be equal to the Lorentz force (F), which we can write in terms of the electric and magnetic field vectors E and B (fields are just forces per unit charge, so the two concepts are very intimately related):
We use a different imaginary unit here (j instead of i) because the plane in which the magnetic field vector B is going round and round is orthogonal to the plane in which E is going round and round, so let us call these planes the xy– and xz-planes, respectively. Of course, you will ask: why is the B-plane not the yz-plane? We might be mistaken, but the magnetic field vector lags the electric field vector, so it is either of the two, and you can now check for yourself whether what we wrote above is actually correct. Also note that we can write 1 as a vector (1) or as a complex number: 1 = 1 + i·0. As long as we think of these things as vectors – something with a magnitude and a direction – it is OK.
You may be lost in math already, so we should visualize this. Unfortunately, that is not easy. You may want to google for animations of circularly polarized electromagnetic waves, but these usually show the electric field vector only, and animations which show both E and B usually depict linearly polarized waves. Let me reproduce the simplest of images: imagine the electric field vector E going round and round. Now imagine the field vector B being orthogonal to it, but also going round and round (because its phase follows the phase of E). So, yes, it must be going around in the xz– or yz-plane (as mentioned above, we let you figure out how the various right-hand rules work together here).
You should now appreciate that the E and B vectors – taken together – will also form a plane. This plane is not static: it is not the xy-, yz– or xz-plane, nor is it some static combination of two of these. No! We cannot describe it with reference to our classical Cartesian axes because it changes all the time as a result of the rotation of both the E and B vectors. So how can we describe that plane mathematically?
The Irish mathematician William Rowan Hamilton – who is also known for many other mathematical concepts – found a great way to do just that, and we will use his notation. We could say the plane formed by the E and B vectors is the E–B plane but, in line with Hamilton’s quaternion algebra, we will refer to it as the k-plane. How is it related to what we referred to as the i– and j-planes, or the xy– and xz-plane as we used to say? At this point, we should introduce Hamilton’s notation: he did write i and j in boldface (we do not like that, but you may want to think of it as just a minor change in notation because we are using these imaginary units in a new mathematical space: the quaternion number space), and he referred to them as basic quaternions in what you should think of as an extension of the complex number system. More specifically, he wrote this on a now rather famous bridge in Dublin:
i² = -1
j² = -1
k² = -1
i·j = k
j·i = -k
The first three rules are the ones you know from complex number math: two successive rotations by 90 degrees will bring you from 1 to -1. The order of multiplication in the other two rules (i·j = k and j·i = -k) gives us not only the k-plane but also the spin direction. All other rules in regard to quaternions (we can write, for example, i·j·k = -1, and you will find the other products in the Wikipedia article on quaternions) can be derived from these, but we will not go into them here.
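These rules are easy to verify mechanically. The sketch below implements the Hamilton product for quaternions written as (w, x, y, z) tuples and checks the bridge inscription, including the sign flip under reversed multiplication order:

```python
def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

one = (1, 0, 0, 0)
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
neg = lambda q: tuple(-c for c in q)

assert qmul(i, i) == qmul(j, j) == qmul(k, k) == neg(one)  # i² = j² = k² = -1
assert qmul(qmul(i, j), k) == neg(one)                     # i·j·k = -1
assert qmul(i, j) == k and qmul(j, i) == neg(k)            # the order gives the spin direction
```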
Now, you will say, we do not really need that k, do we? Just distinguishing between i and j should do, right? The answer is: yes, when you are dealing with electromagnetic oscillations only, but no when you are trying to model nuclear oscillations! That is, in fact, exactly why we need this quaternion math in quantum physics!
Let us think about this nuclear oscillation. Particle physics experiments – especially high-energy physics experiments – effectively provide evidence for the presence of a nuclear force. To explain the proton radius, one can effectively think of a nuclear oscillation as an orbital oscillation in three rather than just two dimensions. The oscillation is, therefore, driven by two (perpendicular) forces rather than just one, with the frequency of each of the oscillators being equal to ω = E/2ħ = mc²/2ħ.
Each of the two perpendicular oscillations would, therefore, pack one half-unit of ħ only. The ω = E/2ħ formula also incorporates the energy equipartition theorem, according to which each of the two oscillations should pack half of the total energy of the nuclear particle (so that is the proton, in this case). This spherical view of a proton fits nicely with packing models for nucleons and yields the experimentally measured radius of a proton:

r = 4ħ/(m·c) ≈ 0.84 × 10⁻¹⁵ m
Of course, you can immediately see that the factor 4 is the same factor 4 as the one appearing in the formula for the surface area of a sphere (A = 4πr²), as opposed to that for the surface of a disc (A = πr²). And now you should be able to appreciate that we should probably represent a proton by a combination of two wavefunctions. Something like this:
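A quick calculation (a sketch, assuming the radius formula behind the factor-4 remark is r = 4ħ/(m·c), with CODATA values hard-coded) shows it lands right on the measured proton charge radius:

```python
hbar = 1.054571817e-34    # reduced Planck constant (J·s)
c    = 299792458.0        # speed of light (m/s)
m_p  = 1.67262192369e-27  # proton mass (kg), CODATA 2018

r = 4 * hbar / (m_p * c)  # the same factor 4 as in the surface area of a sphere
print(f"r = {r * 1e15:.4f} fm")   # ~0.8412 fm

# CODATA 2018 lists the proton rms charge radius as about 0.8414 fm
assert abs(r * 1e15 - 0.841) < 0.001
```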
What about a wave equation for nuclear oscillations? Do we need one? We sure do. Perhaps we do not need one to model a neutron as some nuclear dance of a negative and a positive charge. Indeed, think of a combination of a proton and what we will refer to as a deep electron here, just to distinguish it from an electron in Schroedinger’s atomic electron orbitals. But we might need it when we are modeling something more complicated, such as the different energy states of, say, a deuteron nucleus, which combines a proton and a neutron and, therefore, two positive charges and one deep electron.
According to some, the deep electron may also appear in other energy states and may, therefore, give rise to a different kind of hydrogen (referred to as hydrinos). What do I think of those? I think these things do not exist and that, if they do, they cannot be stable. I also think these researchers need to come up with a wave equation for them in order to be credible and, in light of what we wrote about the complications in regard to the various rotational planes, that wave equation will probably have all of Hamilton’s basic quaternions in it. [So, as mentioned above, I am waiting for them to come up with something that makes sense and matches what we can actually observe in Nature: those hydrinos should have a specific spectrum, and we do not see such a spectrum from, say, the Sun, where there is so much going on that, if hydrinos exist, the Sun should surely produce them, right? So, yes, I am rather skeptical here. I do think we know everything now: physics, as a science, is sort of complete and, therefore, dead as a science. All that is left now is engineering!]
But, yes, quaternion algebra is a very necessary part of our toolkit. It completes our description of everything! 🙂
The notes must be somewhere in some unexplored archive. If there are Holy Grails to be found in the history of physics, then these notes are surely one of them. There is a book about a mysterious woman, who might have inspired Schrödinger, but I have not read it, yet: it is on my to-read list. I will prioritize it (read: order it right now). 🙂
Oh – as for the math and physics of the wave equation, you should also check the Annex to the paper: I think the nuclear oscillation can only be captured by a wave equation when using quaternion math (an extension of complex math).
I just finished a very short paper recapping the basics of my model of the nuclear force. I wrote it a bit as a reaction to a rather disappointing exchange that is still going on between a few researchers who seem to firmly believe some crook who claims he can produce smaller hydrogen atoms (hydrinos) and get energy out of them. I wrote about my disappointment on one of my other blogs (I also write on politics and more general matters). In any case, the thing I want to do here is to firmly state my position in regard to cold and hot fusion: I do not believe in either. Theoretically, yes. Of course. But, practically speaking, no. And that’s a resounding no!
The illustration below (from Wikimedia Commons) shows how fusion actually happens in our Sun (I wrote more about that in one of my early papers). As you can see, there are several pathways, and all of these pathways are related through critical masses of radiation and feedback loops. So it is not like nuclear fission, which (mainly) relies on cascaded neutron production. No. It is much more complicated, and you would have to create and contain a small star on Earth to recreate the conditions that are prevalent in the Sun. Containing a relatively small amount of hydrogen plasma in incredibly energy-intensive electromagnetic fields will not do the trick. First, the reaction will peter out. Second, the reaction will yield no net energy: the plasma and the electromagnetic fields that are needed to contain the plasma will suck everything up, and much more than that. So, yes, the ITER project is a huge waste of taxpayers’ money.
As for cold fusion, I believe the small experiments showing anomalous heat reactions (or low-energy nuclear reactions, as these phenomena are also referred to) are real (see my very first blog post on these), but (1) researchers have done a poor job at replicating these experiments consistently, (2) they have failed to provide a firm theoretical basis for those reactions, and (3) whatever theory there is also strongly hints we should not hope to ever get net energy out of it. This explains why public funding for cold fusion is very limited. Furthermore, scientists who continue to support frauds like Dr. Mills will soon erase whatever credibility smaller research labs in this field have painstakingly built up. So, no, it won’t happen. Too bad, because LENR research itself is quite interesting, and may yield more insights than the next mega-project of CERN, SLAC and what have you.
Post scriptum: On the search for hydrinos (hypothetical small hydrogen), the following exchange with a scientist working for a major accelerator lab in the US – part of a much longer one – is probably quite revealing. When one asks why it has not been discovered yet, the answer is invariably the same: we need a new accelerator project for that. I’ll hide the name of the researcher by calling him X.
Dear Jean Louis – They cannot be produced in the Sun, as the electron has to be very relativistic. According to my present calculation one has to have a total energy of Etotal ~34.945 MeV. A proton of the same velocity has to have total energy Etotal ~64.165 GeV. One can get such energies in very energetic events in the Universe. On Earth, it would take building special modifications of existing accelerators. This is why it has not been discovered so far.
Best regards, [X]
From: Jean Louis Van Belle <firstname.lastname@example.org> Date: Wednesday, March 31, 2021 at 9:24 AM To: [X] Cc: [Two other LENR/CF researchers] Subject: Calculations and observations…
Interesting work, but hydrino-like structures should show a spectrum with gross lines, split into finer lines and hyperfine lines (spin coupling between the nucleon(s) and the (deep) electron). If hydrinos exist, they should be produced en masse in the Sun. Is there any evidence from unusual spectral lines? Until then, I think of the deep electron as the negative charge in the neutron or in the deuteron nucleus. JL
A sympathetic researcher, Steve Langford, sent me some of his papers, as well as a link to his ResearchGate site, where you will find more. Optical mineralogy is his field. Fascinating work – or, at the very least, a rather refreshing view on the nitty-gritty of actually measuring stuff by gathering huge amounts of data, and then analyzing it in meaningful ways. I learnt a lot of new things already (e.g. kriging or Gaussian process regression, and novel ways of applying GLM modelling).
Dr. Langford wrote me because he wants to connect his work to more theory – quantum math, and all that. That is not so easy. He finds interesting relations between temperature and refractive indices (RIs), as measured from a single rock sample in Hawaii. The equipment he used is shown below. I should buy that stuff too! I find it amazing one can measure light spectra with nanometer precision with these tools (the dial works with 0.1 nm increments, to be precise). He knows all about Bragg’s Law and crystal structures, toys with statistical and graphical software tools such as JMP and Surfer, and talks about equipping K-12 level students with dirt-cheap modular computer-connected optical devices and open software tools to automate the data-gathering process. In short, I am slightly jealous of the practical value of his work, and the peace of mind he must have to do all of this! At the very least, he can say he actually did something in his life! 🙂
Having showered all that praise, I must admit I have no clue about how to connect all of this to quantum effects. All I know about temperature – about what it actually is (vibrational motion of molecules and atoms within molecules, with multiple degrees of freedom (n > 3) in that motion) – is based on Feynman’s Lectures (Chapters 40 to 45 of the first Volume). Would all that linear, orbital and vibrational motion generate discernible shifts of spectral lines? Moreover, would it do so in the visible light spectrum (X-rays are usually used, which increases measurement precision, but such equipment is more expensive)? I have no idea.
Or… Well, of course I do have some intuitions. Shifts in frequency spectra are well explained by combining statistics and the Planck-Einstein relation. But can we see quantum physics in the data? In the spectral lines themselves? No. Not really. And so that’s what’s got me hooked. Explaining a general shift of the frequency spectrum and discerning quantum effects in RIs in data sets (analyzing shifts of spectral lines) are two very different things. So how could we go about that?
Energy is surely quantized, and any small difference in energy must probably translate into small shifts of the frequencies of the spectral lines themselves (as opposed to the general shift of the spectrum as such which, as mentioned above, is well explained by quantum physics), respecting the Planck-Einstein relation for photons (E = hf). I do not know if anyone has tried to come up with some kind of quantum-mechanical definition of the concept of entropy (I have not googled anything on that, so I expect there are valuable resources out there). Boltzmann’s constant was redefined as part of the 2019 revision of the SI system of units, and a careful examination of the rationale for that revision should yield deeper insights in this regard, especially because I think that revision firmly anchors what I refer to as a realist interpretation of quantum physics. Thermal radiation at these temperatures is infrared-range radiation, so a 0.1 nm resolution should be enough to capture a shift in spectral lines – if it is there, that is.
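A back-of-the-envelope check (a sketch, assuming a visible line near 600 nm; the 0.1 nm figure is the dial resolution mentioned above) shows what energy step such a resolution corresponds to, as compared to the thermal energy kT at 30°C:

```python
h  = 6.62607015e-34   # Planck constant (J·s), exact
c  = 299792458.0      # speed of light (m/s), exact
kB = 1.380649e-23     # Boltzmann constant (J/K), exact since the 2019 SI revision
eV = 1.602176634e-19  # J per electronvolt

lam, dlam = 600e-9, 0.1e-9      # a visible line, and the 0.1 nm dial resolution
E  = h * c / lam                # photon energy: E = h·f = h·c/lambda (~2.07 eV)
dE = E * dlam / lam             # energy step corresponding to a 0.1 nm shift
kT = kB * (273.15 + 30)         # thermal energy at 30 degrees C

print(f"dE ~ {dE/eV*1e3:.2f} meV, kT ~ {kT/eV*1e3:.1f} meV")
assert dE < kT    # the dial resolves steps well below kT-scale energy differences
```

So a 0.1 nm step near 600 nm corresponds to roughly a third of a meV, while kT at 30°C is about 26 meV: kT-scale shifts, if present in the lines, would be well within reach of that resolution.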
I need to think on this. As for now, I look at Langford’s work as art, and one of his interests is, effectively, to connect art and science. Let me quote one of his commentaries on one of his images: “Light and matter dance at 30°C, upon what is essentially a Calcium-Silicate substrate through which light and various chemicals flow. Swirling Yin-Yang patterns reminiscent of Solar flares and magnetic lines of force also remind me of fractal patterns.” [My italics.]
He does phrase it very beautifully, doesn’t he? Maybe I will find some deeper meaning in it later. Dr. Langford’s suggestion to re-phrase quantum-mechanical models in terms of Poynting vectors is one that strikes a chord, and there are other ideas there as well. It must be possible to find quantum-mechanical effects by further analyzing, for example, the relation between temperature and RIs – and to use the formal (consistent and complete!) language of quantum mechanics to (also) explain Dr. Langford’s findings. This would conclusively relate the micro-level of quantum physics to the macro-level of crystals (isotropic or anisotropic structures), and it would not require supercooled condensates or massive investments in new accelerator facilities.
It would also provide amateur physicists with a way to discover and verify all by themselves. That would be a great result in itself. 🙂
Post scriptum (27 March): Looking at the papers again, I do not see a shift in spectral lines. Spectral lines correspond to differences between quantized energies in electron orbitals. These are either atomic orbitals or molecular orbitals (valence electrons), and shifts between orbitals correspond to spectral lines in the visible spectrum (Rydberg-scale energies) or, in the case of molecular orbitals, to microwave photons being absorbed or emitted. Temperature just increases the intensity of the photon beams going in and out of the system (the rock sample, in this case), and so it causes a shift of the spectrum, but the lines are what they are: their energy is and remains what it is (E = hf). Of course, the superposition principle tells us the energies of microwave and visual-spectrum photons can combine in what resembles a normal distribution around a mean (which, yes, shifts with temperature alright).
As for the gist of the matter: yes, of course, what Dr. Langford is seeing are quantum-mechanical effects alright.
Post scriptum (9 April 2021): In the preceding week, I found that Dr. Langford seems to find my math too difficult, turns to pseudo-scientists such as Nassim Haramein, and contributes to Haramein’s Resonance Science Foundation. I dissociate myself completely from such references and associations. Everyone is free to seek inspiration elsewhere, but Haramein’s mystical stories are definitely not my cup of tea.
Post scriptum (25 March 2021): Because this post is so extremely short and happy, I want to add a sad anecdote which illustrates what I have come to regard as the sorry state of physics as a science.
A few days ago, an honest researcher put me in cc of an email to a much higher-brow researcher. I won’t reveal names, but the latter – I will call him X – works at a prestigious accelerator lab in the US. The gist of the email was a question on an article of X: “I am still looking at the classical model for the deep orbits. But I have been having trouble trying to determine if the centrifugal and spin-orbit potentials have the same relativistic correction as the Coulomb potential. I have also been having trouble with the Ademko/Vysotski derivation of the Veff = V×E/mc2 – V2/2mc2 formula.”
I was greatly astonished to see X answer this: “Hello – What I know is that this term comes from the Bethe-Salpeter equation, which I am including (#1). The authors say in their book that this equation comes from the Pauli’s theory of spin. Reading from Bethe-Salpeter’s book [Quantum mechanics of one and two electron atoms]: “If we disregard all but the first three members of this equation, we obtain the ordinary Schroedinger equation. The next three terms are peculiar to the relativistic Schroedinger theory”. They say that they derived this equation from covariant Dirac equation, which I am also including (#2). They say that the last term in this equation is characteristic for the Dirac theory of spin ½ particles. I simplified the whole thing by choosing just the spin term, which is already used for hyperfine splitting of normal hydrogen lines. It is obviously approximation, but it gave me a hope to satisfy the virial theorem. Of course, now I know that using your Veff potential does that also. That is all I know.” [I added the italics/bold in the quote.]
So I see this answer while browsing through my emails on my mobile phone, and I am disgusted – thinking: Seriously? You get to publish in high-brow journals, but so you do not understand the equations, and you just drop terms and pick the ones that suit you to make your theory fit what you want to find? And so I immediately reply to all, politely but firmly: “All I can say, is that I would not use equations which I do not fully understand. Dirac’s wave equation itself does not make much sense to me. I think Schroedinger’s original wave equation is relativistically correct. The 1/2 factor in it has nothing to do with the non-relativistic kinetic energy, but with the concept of effective mass and the fact that it models electron pairs (two electrons – neglect of spin). Andre Michaud referred to a variant of Schroedinger’s equation including spin factors.”
Now X replies this, also from his iPhone: “For me the argument was simple. I was desperate trying to satisfy the virial theorem after I realized that ordinary Coulomb potential will not do it. I decided to try the spin potential, which is in every undergraduate quantum mechanical book, starting with Feynman or Tipler, to explain the hyperfine hydrogen splitting. They, however, evaluate it at large radius. I said, what happens if I evaluate it at small radius. And to my surprise, I could satisfy the virial theorem. None of this will be recognized as valid until one finds the small hydrogen experimentally. That is my main aim. To use theory only as an approximate guidance. After it is found, there will be an explosion of “correct” theories.” A few hours later, he makes things even worse by adding: “I forgot to mention another motivation for the spin potential. I was hoping that a spin flip will create an equivalent to the famous “21cm line” for normal hydrogen, which can then be used to detect the small hydrogen in astrophysics. Unfortunately, flipping spin makes it unstable in all potential configurations I tried so far.”
I have never come across a more blatant case of making a theory fit whatever you want to prove (apparently, X believes Mills’ hydrinos (hypothetical small hydrogen) are not a fraud), and it saddens me deeply. Of course, I do understand one will want to fiddle and modify equations when working on something, but you don’t do that when these things are going to get published by serious journals. Just goes to show how physicists effectively got lost in math, and how ‘peer reviews’ actually work: they don’t.
I added an Annex to a paper that talks about all of the fancy stuff quantum physicists like to talk about, like scattering matrices and high-energy particle events. The Annex, however, is probably my simplest and shortest summary of the ordinariness of wavefunction math, including a quick overview of what quantum-mechanical operators actually are. It does not make use of state vector algebra or the usual high-brow talk about Hilbert spaces and what have you: you only need to know what a derivative is, and combine it with our realist interpretation of what the wavefunction actually represents.
I think I should do a paper on the language of physics: to show how (i) rotations (i, j, k), (ii) scalars (constants or just numerical values), (iii) vectors (real vectors, e.g. position vectors, and pseudovectors, e.g. angular frequency or angular momentum) and (iv) operators (derivatives of the wavefunction with respect to time and the spatial directions) form ‘words’ (e.g. the energy and momentum operators), and how these ‘words’ then combine into meaningful statements (e.g. Schroedinger’s equation).
All of physics can then be summed up in a half-page or so. All the rest is thermodynamics 🙂 JL
PS: You only get collapsing wavefunctions when adding uncertainty to the models (i.e. our own uncertainty about the energy and momentum). The ‘collapse’ of the wavefunction (let us be precise, the collapse of the (dissipating) wavepacket) thus corresponds to the ‘measurement’ operation. 🙂
PS2: Incidentally, the analysis also gives an even more intuitive explanation of Einstein’s mass-energy equivalence relation, which I summarize in a reply to one of the many ‘numerologist’ physicists on ResearchGate (copied below).
I just did a short paper with, yes, all you need to know about cosmology. It recapitulates my theory of dark matter (antimatter), how we might imagine the Big Bang (not a single one, probably!), the possibility of an oscillating Universe, possible extraterrestrial life, interstellar communication, and, yes, life itself. It also tries to offer a more intuitive explanation of SRT/GRT based on an analysis of the argument of the quantum-mechanical wavefunction – although it may not come across as being very ‘intuitive’ (my math is, without any doubt, much more intuitive to me than to you – if only because it is a ‘language’ I developed over years!).
I introduced the paper with a rather long comment on one of the ResearchGate discussion threads: Is QM consistent?. I copy it here for the convenience of my readers. 🙂
The concept of ‘dimension’ may well be the single most misunderstood concept in physics. The bare minimum rule to get out of the mess and have fruitful exchanges with other (re)searchers is to clearly distinguish between mathematical and physical dimensions. Physical dimensions are covered by the 2019 revision of SI units, which may well be the most significant consolidation of theory which science has seen over the past hundred years or so (since Einstein’s SRT/GRT theories, in fact). Its definitions (e.g. the definition of the fine-structure constant) – combined with the CODATA values for commonly repeated measurements – sum up all of physics.
A few months before his untimely demise, H.A. Lorentz delivered his last contributions to quantum physics (Solvay Conference, 1927, General Discussion). He did not challenge the new physics, but he did remark that it failed to demonstrate a true understanding of what was actually going on: in other words, it did not provide a consistent interpretation of the equations (which he did not doubt were true, in the sense of representing scientifically established facts and repeated measurements). Among various other remarks, he made this one: “We are trying to represent phenomena. We try to form an image of them in our mind. Till now, we always tried to do so using the ordinary notions of space and time. These notions may be innate; they result, in any case, from our personal experience, from our daily observations. To me, these notions are clear, and I admit I am not able to have any idea about physics without those notions. The image I want to have when thinking of physical phenomena has to be clear and well defined, and it seems to me that cannot be done without these notions of a system defined in space and in time.”
Systems of equations may be reduced or expanded to include more or fewer mathematical (and physical) dimensions, but one has to be able to reduce them to the basic laws of physics (the mass-energy equivalence relation, the relativistically correct expression of Newton’s force law, the Planck-Einstein relation, etcetera), whose dimensions are physical. The real and imaginary parts of the wavefunction represent the kinetic and potential energy sloshing back and forth in a system, always adding up to the total energy of the system. The sum of the squares of the real and imaginary parts gives us the energy density (non-normalized wavefunction) at each point in space or, after normalization, a probability P(r) to find the electron as a function of the position vector r. The argument of the wavefunction itself is invariant and, therefore, consistent with both SRT as well as GRT (see Annex I and II of The Finite Universe).
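The constant-energy-density claim is easy to check numerically. Here is a minimal sketch (my own illustration, not taken from the referenced papers): for an elementary wavefunction a·e^(iEt/ħ), the sum of the squares of the real and imaginary parts equals a² at every instant.

```python
import numpy as np

# Elementary wavefunction psi = a*exp(i*E*t/hbar), in natural units (hbar = 1).
a, E = 2.0, 1.5
t = np.linspace(0.0, 10.0, 500)
psi = a * np.exp(1j * E * t)

# The energy 'sloshes' between the real and imaginary parts, but the sum
# of their squares is the constant a^2 at every point in time:
density = psi.real**2 + psi.imag**2
assert np.allclose(density, a**2)
```

The same a² comes out whatever E we pick: only the rate of the sloshing changes, not the total.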
The quantum-mechanical wavefunction is, therefore, the pendant to both the Planck-Einstein relation and the mass-energy equivalence relation. Indeed, all comes out of the E = h·f (or h = p·λ) and E = mc² equations (or their reduced forms) combined with Maxwell’s equations written in terms of the scalar and vector potential. The indeterminacy in regard to the position is statistical only: it arises because of the high velocity of the pointlike charge, which makes it impossible to accurately determine its position at any point in time. In other words, the problem is that we are not able to determine the initial condition of the system. If we were able to do so, we could replace the indefinite integrals used to derive and define the quantum-mechanical operators with definite integrals, and we would have a completely defined system. [See: The Meaning of Uncertainty and the Geometry of the Wavefunction.]
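To illustrate the ordinariness of the operator math mentioned above, a small symbolic sketch (an illustration of the general idea only, not of the specific derivations in the paper): applying the usual energy operator iħ·∂/∂t to an elementary wavefunction a·e^(−iEt/ħ) simply returns E times that same wavefunction.

```python
import sympy as sp

t, E, hbar, a = sp.symbols('t E hbar a', positive=True)
psi = a * sp.exp(-sp.I * E * t / hbar)  # elementary wavefunction

# The energy operator is just a derivative with respect to time,
# multiplied by i*hbar (sign convention matching the -i in the exponent):
E_psi = sp.I * hbar * sp.diff(psi, t)

# Applying the operator returns E times the wavefunction itself:
assert sp.simplify(E_psi - E * psi) == 0
```

You really only need to know what a derivative is.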
Quarks make sense as mathematical form factors only: they reduce the complexity of the scattering matrix, but they are no substitute for a full and consistent application of the conservation and symmetry laws (conservation of energy, of linear and angular momentum, of physical action, and of elementary charge). The quark hypothesis suffers from the same defect or weakness as the one that H.A. Lorentz noted in regard to the Uncertainty Principle, or in regard to 19th-century aether theories. I paraphrase: “The conditions of an experiment are such that, from a practical point of view, we would have indeterminism, but there is no need to elevate indeterminism to a philosophical principle.” Likewise, the elevation of quarks – the belief that these mathematical form factors have some kind of ontological status – may satisfy some kind of deeper religious thirst for knowledge, but that is all there is to it.
Post-WWII developments saw a confluence of (Cold War) politics and scientific dogma – which is not at all unusual in the history of thought, but which has been documented now sufficiently well to get over it (see: Oliver Consa, February 2020, Something is rotten in the state of QED). Of course, there was also a more innocent driver here, which Feynman writes about rather explicitly: students were no longer electing physics as a study because everything was supposed to be solved in that field, and all that was left was engineering. Hence, Feynman and many others probably did try to re-establish an original sense of mystery and wonder to attract the brightest. As Feynman writes in the epilogue to his Lectures: “The main purpose of my teaching has not been to prepare you for some examination—it was not even to prepare you to serve industry or the military. I [just] wanted most to give you some appreciation of the wonderful world and the physicist’s way of looking at it, which, I believe, is a major part of the true culture of modern times.”
In any case, I think Caltech’s ambitious project to develop an entirely new way of presenting the subject was very successful. I see very few remaining fundamental questions, except – perhaps – the questions related to the nature of electric charge (fractal?), but all other questions mentioned as ‘unsolved problems’ on Wikipedia’s list for physics and cosmology (see: https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_physics), such as the question of dark matter (antimatter), the arrow of time, one-photon Mach-Zehnder interference, the anomaly in the magnetic moment of an electron, etcetera, come across as comprehensible and, therefore, ‘solved’ to me. As such, I repeat what I think of as a logical truth: quantum physics is fully consistent. ‘Numerical’ interpretations of quantum physics (such as SO(4), for example) may not be wrong, but they do not provide me with the kind of understanding I was looking for, and finally – after many years of deeply questioning myself and others – have found.
Feynman is right that the Great Law of Nature may be summarized as U = 0 (Lectures, II-25-6) but also notes this: “This simple notation just hides the complexity in the definitions of symbols: it is just a trick.” It is like talking of “the night in which all cows are equally black” (Hegel, Phänomenologie des Geistes, Vorrede, 1807). Hence, the U = 0 equation needs to be separated out. I note a great majority of people on this forum try to do that in a very sensible way, i.e. they are aware that science differs from religion in that it seeks to experimentally verify its propositions: it measures rather than believes, and these measurements are cross-checked by a global community and, thereby, establish a non-subjective reality, of which I feel part. A limited number of searchers may believe their version of truth is more true than mainstream views, but I would suggest they do some more reading before trying to re-invent the wheel.
For the rest, we should heed Wittgenstein’s final philosophical thesis on this forum, I think: “Wovon man nicht sprechen kann, darüber muß man schweigen.” Again, this applies to scientific discourse only, of course. We are all free to publish whatever nonsense we want on other forums. Chances are more people would read me there, but as the scope for some kind of consensus decreases accordingly, I try to refrain from doing so.
PS: To understand relativity theory, one must agree on the notion of ‘synchronized clocks’. Synchronization in the context of SRT does not correspond to the everyday usage of the concept. It is not a matter of making clocks ‘tick’ the same: we must simply assume that the clock that is used to measure the distance from A to B does not move relative to the clock that is used to measure the distance from B to A, because clocks that are moving relative to each other cannot be made to tick the same. Observers in different inertial reference frames can only agree on a t = t’ = 0 point (or, as we are talking time, a t = t’ = 0 instant, we should say). From an ontological perspective, this entails both observers can agree on the notion of an infinitesimally small point in space and an infinitesimally small instant of time. Again, these notions are mathematical concepts and do not correspond to the physical concept of the quantization of energy, which is given by the Planck-Einstein relation. But the mathematical or philosophical notion does not come across as problematic to me. Likewise, the idea of instantaneous momentum may or may not correspond to a physical reality, but I do not think of it as problematic either. When everything is said and done, we do need math to describe physical reality. Feynman’s U = 0 (un)worldliness equation is, effectively, like a very black cow in a very dark night: I just cannot ‘see’ it. 🙂 The notion of infinitesimally small time and distance scales is, for me, just like reading the e^(−iπ) = −1 identity, or the e^(i·0) = e⁰ = 1 and i² = −1 relations. Interpreting i as a rotation by 90 degrees along the circumference of a circle ensures these notions come across as obvious logical (or mathematical/philosophical) truths. 🙂 What is amazing is that complex numbers describe Nature so well, even if it took mankind a long time to find that out!
[Remember: Euler was an 18th century mathematician, and Louis de Broglie a 20th century physicist so, yes, they are separated by two full centuries!]
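These identities are easy to verify with plain complex arithmetic; a quick sketch of my own, just to make the ‘i as a 90-degree rotation’ reading concrete:

```python
import cmath
import math

# Euler's identity and its relatives: e^(i*pi) = e^(-i*pi) = -1,
# e^(i*0) = 1, and i^2 = -1.
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
assert abs(cmath.exp(-1j * math.pi) + 1) < 1e-12
assert cmath.exp(0j) == 1
assert 1j * 1j == -1

# Multiplying by i rotates a point on the unit circle by 90 degrees
# counterclockwise: (1, 0) -> (0, 1) -> (-1, 0) -> (0, -1) -> (1, 0).
z = 1 + 0j
for expected in (1j, -1 + 0j, -1j, 1 + 0j):
    z = 1j * z
    assert z == expected
```

Four successive multiplications by i bring the point back to where it started: a full turn around the circle.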
I quote: “Seen are Golgi apparatus, mitochondria, endoplasmic reticulum, cell wall, and hundreds of protein structures and membrane-bound organelles. The cell structure is of a Eukaryote cell i.e. a multicellular organism which means it can correspond to the cell structure of humans, dogs, or even fungi and plants.” These images were apparently put together from “X-ray, nuclear magnetic resonance (NMR) and cryoelectron microscopy datasets.”
I think it is one of those moments where it feels great to be human. 🙂
The electromagnetic force has an asymmetry: the magnetic field lags the electric field. The phase shift is 90 degrees. We can use complex notation to write the E and B vectors as functions of each other. Indeed, the Lorentz force on a charge is equal to: F = qE + q(v×B). Hence, if we know the electric field E, then we know the magnetic field B: B is perpendicular to E, and its magnitude is 1/c times the magnitude of E. We may, therefore, write:
B = –iE/c
The minus sign in the B = –iE/c expression is there because we need to combine several conventions here. Of course, there is the classical (physical) right-hand rule for E and B, but we also need to combine the right-hand rule for the coordinate system with the convention that multiplication with the imaginary unit amounts to a counterclockwise rotation by 90 degrees. Hence, the minus sign is necessary for the consistency of the description. It ensures that we can associate the a·e^(iEt/ħ) and a·e^(−iEt/ħ) functions with left- and right-handed spin (angular momentum), respectively.
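A quick numerical sketch of this convention (a toy illustration of my own: the transverse plane is treated as the complex plane, so a field vector (x, y) becomes the complex number x + iy):

```python
import cmath
import math

c = 2.99792458e8  # speed of light, m/s

# E pointing along the x-axis, represented as a complex number:
E = 3.0 + 0j

# B = -iE/c: multiplication by -i rotates E clockwise by 90 degrees...
B = -1j * E / c

# ...so B points along the negative y-axis, perpendicular to E,
# with magnitude |E|/c:
assert math.isclose(B.imag, -abs(E) / c)
assert math.isclose(abs(B), abs(E) / c)
assert math.isclose(cmath.phase(B) - cmath.phase(E), -math.pi / 2)
```

Swapping the sign (B = +iE/c) would rotate the other way, which is exactly the ‘antiforce’ idea discussed below.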
Now, we can easily imagine an antiforce: an electromagnetic antiforce would have a magnetic field which precedes the electric field by 90 degrees, and we can do the same for the nuclear force (EM and nuclear oscillations are 2D and 3D oscillations, respectively). It is just an application of Occam’s Razor: the mathematical possibilities in the description (notations and equations) must correspond to physical realities, and vice versa (one-to-one). Hence, to describe antimatter, all we have to do is to put a minus sign in front of the wavefunction. [Of course, we should also take the opposite of the charge(s) of its matter counterpart, and please note we have a possible plural here (charges) because we think of neutral particles (e.g. neutrons, or neutral mesons) as consisting of opposite charges.] This is just the principle which we already applied when working out the equation for the neutral antikaon (see Annex IV and V of the above-referenced paper):
Don’t worry if you do not understand too much of the equations: we just put them there to impress the professionals. 🙂 The point is this: matter and antimatter are each other’s opposite, literally: the wavefunctions a·e^(iEt/ħ) and −a·e^(iEt/ħ) add up to zero, and they correspond to opposite forces too! Of course, we also have light-particles, so we have antiphotons and antineutrinos too.
We think this explains the rather enormous amount of so-called dark matter and dark energy in the Universe (the Wikipedia article on dark matter says it accounts for about 85% of the total mass/energy of the Universe, while the article on the observable Universe puts it at about 95%!). We did not say much about this in our YouTube talk about the Universe, but we think we understand things now. Dark matter is called dark because it does not appear to interact with the electromagnetic field: it does not seem to absorb, reflect or emit electromagnetic radiation, and is, therefore, difficult to detect. That should not be a surprise: antiphotons would not be absorbed or emitted by ordinary matter. Only anti-atoms (think of an antihydrogen atom as an antiproton and a positron here) would do so.
So did we explain the mystery? We think so. 🙂
We will conclude with a final remark/question. The opposite spacetime signature of antimatter is, obviously, equivalent to a swap of the real and imaginary axes. This begs the question: can we, perhaps, dispense with the concept of charge altogether? Is geometry enough to understand everything? We are not quite sure how to answer this question but we do not think so: a positron is a positron, and an electron is an electron: the sign of the charge (positive and negative, respectively) is what distinguishes them! We also think charge is conserved, at the level of the charges themselves (see our paper on matter/antimatter pair production and annihilation).
We, therefore, think of charge as the essence of the Universe. But, yes, everything else is sheer geometry! 🙂
There are two branches of physics. The nicer branch studies equilibrium states: simple laws, stable particles (electrons and protons, basically), the expanding (oscillating?) Universe, etcetera. This branch includes the study of dynamical systems which we can only describe in terms of probabilities or approximations: think of kinetic gas theory (thermodynamics) or, much simpler, hydrodynamics (the flow of water: Feynman, Vol. II, chapters 40 and 41), about which Feynman writes this:
“The simplest form of the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. You will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.” (Feynman, I-3-7)
Still, we believe first principles do apply to the flow of water through a pipe. The second branch of physics, in contrast, studies non-stable particles: transients (charged kaons and pions, for example) or resonances (very short-lived intermediate energy states). The class of physicists who study these must be commended, but they resemble econometrists modeling input-output relations: if they are lucky, they will get some kind of mathematical description of what goes in and what goes out, but the math does not tell them how stuff actually happens. It leads one to think about the difference between a theory, a calculation and an explanation. Simplifying somewhat, we can represent such input-output relations by thinking of a process A that will be operating on some state |ψ⟩ to produce some other state |ϕ⟩, which we write as |ϕ⟩ = A|ψ⟩.
A is referred to as a Hermitian matrix if the process is reversible. Reversibility looks like time reversal, which can be represented by taking the complex conjugate ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩: we put a minus sign in front of the imaginary unit, so we have –i instead of i in the wavefunctions (or i instead of –i with respect to the usual convention for denoting the direction of rotation). Processes may not be reversible, in which case we talk about symmetry-breaking: CPT-symmetry is always respected so, if T-symmetry (time) is broken, CP-symmetry is broken as well. There is nothing magical about that.
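The conjugation rule ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩ holds for any matrix A; here is a small numerical check (an illustration only, with randomly chosen states):

```python
import numpy as np

rng = np.random.default_rng(42)

# A random complex 'process' matrix and two random states:
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = rng.normal(size=3) + 1j * rng.normal(size=3)

A_dagger = A.conj().T  # Hermitian conjugate: complex conjugate + transpose

# <phi|A|psi>* == <psi|A_dagger|phi>  (np.vdot conjugates its first argument):
lhs = np.conj(np.vdot(phi, A @ psi))
rhs = np.vdot(psi, A_dagger @ phi)
assert np.isclose(lhs, rhs)
```

Taking the conjugate transpose thus plays the role of ‘running the process backwards’ in this bra-ket bookkeeping.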
Physicists found the description of these input-output relations can be simplified greatly by introducing quarks (see Annex II of our paper on ontology and physics). Quarks have partial charge and, more generally, mix physical dimensions (mass/energy, spin or (angular) momentum). They create some order – think of it as some kind of taxonomy – in the vast zoo of (unstable) particles, which is great. However, we do not think there was a need to give them some kind of ontological status: unlike plants or insects, partial charges do not exist.
We also think the association between forces and (virtual) particles is misguided. Of course, one might say forces are mediated by particles (matter-particles or light-particles), because particles effectively pack energy and angular momentum (light-particles – photons and neutrinos – differ from matter-particles (electrons, protons) in that they carry no charge, but they do carry electromagnetic and/or nuclear energy), and force and energy are, therefore, transferred through particle reactions, elastically or non-elastically. However, we think it is important to clearly separate the notions of fields and particles: they are governed by the same laws (conservation of charge, of energy, of (linear and angular) momentum and – last but not least – of (physical) action), but their nature is very different.
W.E. Lamb (1995), nearing the end of his very distinguished scientific career, wrote about “a comedy of errors and historical accidents”, but we think the business is rather serious: we have reached the End of Science. We have solved Feynman’s U = 0 equation. All that is left is engineering: solving practical problems and inventing new stuff. That should be exciting enough. 🙂
Post scriptum: I added an Annex (III) to my paper on ontology and physics, with what we think of as a complete description of the Universe. It is abstruse but fun (we hope!): we basically add a description of events to Feynman’s U = 0 (un)worldliness formula. 🙂
I was a bit bored today (Valentine’s Day but no Valentine playing for me), and so I did a video on the Universe and (the possibility) of Life elsewhere. It is simple (I managed to limit it to 40 minutes!) but it deals with all of the Big Questions: fundamental forces and distance scales; the geometric approach to gravity and the curvature of the Universe; Big Bang(s) and – who knows? – an oscillating Universe; and, yes, Life here and, perhaps, elsewhere. Enjoy ! The corresponding paper is available on ResearchGate.
PS: I’ve also organized my thoughts on quarks in a (much more) moderate annex to my paper on ontology and physics. Quite a productive Valentine’s Day – despite the absence of a Valentina ! 🙂 JL
One sometimes wonders what keeps amateur physicists awake. Why is it that they want to understand quarks and wave equations, or delve into complicated math (perturbation theory, for example)? I believe it is driven by the same human curiosity that drives philosophy. Physics stands apart from the other sciences because it examines the smallest of the small – the essence of things, so to speak.
Unlike researchers in other sciences (the human sciences in particular, perhaps), physicists also seek to reduce the number of concepts, rather than multiply them – even if, sadly enough, they do not always do a good job at that. However, generally speaking, physics and math may, effectively, be considered the King and Queen of Science, respectively.
The Queen is an eternal beauty, of course, because Her Language may mean anything. Physics, in contrast, talks specifics: physical dimensions (force, distance, energy, etcetera), as opposed to mathematical dimensions – which are mere quantities (scalars and vectors).
Science differs from religion in that it seeks to experimentally verify its propositions. It measures rather than believes. These measurements are cross-checked by a global community and, thereby, establish a non-subjective reality. The question of whether reality exists outside of us, is irrelevant: it is a category mistake (Ryle, 1949). It is like asking why we are here: we just are.
All is in the fundamental equations. An equation relates a measurement to Nature’s constants. Measurements – of energy/mass, or of velocities – are relative. Nature’s constants do not depend on the frame of reference of the observer and we may, therefore, label them as absolute. This corresponds to the difference between variables and parameters in equations. The speed of light (c) and Planck’s quantum of action (h) are the parameters in the E/m = c² and E = hf equations, respectively.
Feynman (II-25-6) is right that the Great Law of Nature may be summarized as U = 0, but he also notes that this simple notation “just hides the complexity in the definitions of symbols”: it is just a trick. It is like talking of the night “in which all cows are equally black” (Hegel, Phänomenologie des Geistes, Vorrede, 1807). Hence, the U = 0 equation needs to be separated out. I would separate it out as:
We imagine things in 3D space and one-directional time (Lorentz, 1927, and Kant, 1781). The imaginary unit operator (i) represents a rotation in space. A rotation takes time. Its physical dimension is, therefore, s/m or -s/m, as per the mathematical convention in place (Minkowski’s metric signature and counter-clockwise evolution of the argument of complex numbers, which represent the (elementary) wavefunction).
Velocities can be linear or tangential, giving rise to the concepts of linear versus angular momentum. Tangential velocities imply orbitals: circular and elliptical orbitals are closed. Particles are pointlike charges in closed orbitals. We are not sure if non-closed orbitals might correspond to some reality. Linear oscillations are field particles, but we do not think of lines as non-closed orbitals: the curvature of real space (the Universe we live in) suggests we should, but we are not sure such thinking is productive (efforts to model gravity as a residual force have failed so far).
Space and time are innate or a priori categories (Kant, 1781). Elementary particles can be modeled as pointlike charges oscillating in space and in time. The concept of charge could be dispensed with if there were no lightlike particles: photons and neutrinos, which carry energy but no charge. The oscillating charge is pointlike but may have a finite (non-zero) physical dimension, which explains the anomalous magnetic moment of the free (Compton) electron. However, it only appears to have a non-zero dimension when the electromagnetic force is involved (the proton has no anomalous magnetic moment, and its radius is about 3.35 times smaller than the calculated radius of the pointlike charge inside an electron). Why? We do not know: elementary particles are what they are.
We have two forces: electromagnetic and nuclear. One of the most remarkable things is that the E/m = c² relation holds for both electromagnetic and nuclear oscillations, or combinations thereof (superposition theorem). Combined with the oscillator model (E = m·a²·ω² = m·c² and, therefore, c = a·ω), this makes us think of c² as modeling an elasticity or plasticity of space. Why two oscillatory modes only? In 3D space, we can only imagine oscillations in one, two or three dimensions (line, plane, and sphere). The idea of four-dimensional spacetime is not relevant in this context.
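The little algebra step in the oscillator model can be spelled out symbolically (a sketch of just the substitution, nothing more): imposing c = a·ω on E = m·a²·ω² indeed returns the mass-energy equivalence relation.

```python
import sympy as sp

m, a, omega, c = sp.symbols('m a omega c', positive=True)

# Oscillator energy E = m * a^2 * omega^2 ...
E = m * a**2 * omega**2

# ... with the tangential velocity a*omega set equal to c:
E_at_c = E.subs(omega, c / a)

# This recovers E = m*c^2:
assert sp.simplify(E_at_c - m * c**2) == 0
```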
Photons and neutrinos are linear oscillations and, because they carry no charge, travel at the speed of light. Electrons and muon-electrons (and their antimatter counterparts) are 2D oscillations packing electromagnetic and nuclear energy, respectively. The proton (and antiproton) pack a 3D nuclear oscillation. Neutrons combine positive and negative charge and are, therefore, neutral. Neutrons may or may not combine the electromagnetic and nuclear force: their size (more or less the same as that of the proton) suggests the oscillation is nuclear.
[Table: matter-particles versus the corresponding field particles – orbital electron (e.g. 1H); pions (π±/π0)?; n (neutron)? D+ (deuteron)?]
The theory is complete: each theoretical/mathematical/logical possibility corresponds to a physical reality, with spin distinguishing matter from antimatter for particles with the same form factor.
When reading this, my kids might call me and ask whether I have gone mad. Their doubts and worry are not random: the laws of the Universe are deterministic (our macro-time scale introduces probabilistic determinism only). Free will is real, however: we analyze and, based on our analysis, we determine the best course to take when taking care of business. Each course of action is associated with an anticipated cost and return. We do not always choose the best course of action because of past experience, habit, laziness or – in my case – an inexplicable desire to experiment and explore new territory.
The work on the neutron model inspired me to have another look at the 1/4 factor which bothered me when applying mass-without-mass models to the proton. I think I nailed it: it is just another form factor. Have a look at the proton paper. Mystery solved – finally ! 🙂
The proton model will be key. We cannot explain it in the typical ‘mass without mass’ model of zittering charges: we get a 1/4 factor in the explanation of the proton radius, which is impossible to get rid of unless we assume some ‘strong’ force comes into play. That is why I prioritize a ‘straight’ attack on the electron and the proton-electron bond in a primitive neutron model.
The calculation of forces inside a muon-electron and a proton (see ) is an interesting exercise: it is the only thing which explains why an electron annihilates a positron but electrons and protons can live together (the ‘anti-matter’ nature of charged particles only shows because of opposite spin directions of the fields – so it is only when the ‘structure’ of matter-antimatter pairs is different that they will not annihilate each other).
In short, 2021 will be an interesting year for me. The intent of my last two papers (on the deuteron model and the primitive neutron model) was to think of energy values: the energy value of the bond between electron and proton in the neutron, and the energy value of the bond between proton and neutron in a deuteron nucleus. But, yes, the more fundamental work remains to be done !
In my ‘signing off’ post, I wrote I had enough of physics but that my last(?) ambition was to “contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.” Well… The paper is there. And I am extremely pleased with the result. Thank you, Mr. Meulenberg. You sure have good intuition.
I took the opportunity to revisit Yukawa’s nuclear potential and demolish his modeling of a new nuclear force without a charge to act on. Looking back at the past 100 years of physics history, I now start to think that was the decisive destructive moment in physics: that 1935 paper, which started off all of the hype on virtual particles, quantum field theory, and a nuclear force that could not possibly be electromagnetic plus – totally not done, of course ! – utter disregard for physical dimensions and the physical geometry of fields in 3D space or – taking retardation effects into account – 4D spacetime. Fortunately, we have hope: the 2019 fixing of SI units puts physics firmly back onto the road to reality – or so we hope.
Paolo Di Sia’s and my paper shows one gets very reasonable energies and separation distances for nuclear bonds and inter-nucleon distances when assuming the presence of magnetic and/or electric dipole fields arising from deep electron orbitals. The model shows one of the protons pulling the ‘electron blanket’ from another proton (the neutron) towards its own side so as to create an electric dipole moment, just like a valence electron in a chemical bond. So it is like water, then? Water is a polar molecule, but we do not necessarily need to start with polar configurations when trying to expand this model so as to inject some dynamics into it (spherically symmetric orbitals are probably easier to model). Hmm… Perhaps I need to look at the thermodynamical equations for dry versus wet water once again… Phew ! Where to start?
I have no experience – and very little math, actually – with modeling molecular orbitals. So I should, perhaps, contact a friend from a few years ago – now living in Hawaii and pursuing more spiritual matters too – who did just that a long time ago: modeling orbitals using Schroedinger’s wave equation (I think Schroedinger’s equation is relativistically correct – the naysayers just misinterpret the concept of ‘effective mass’). What kind of wave equation are we looking at? One that integrates the inverse-square and inverse-cube force field laws arising from charges and the dipole moments they create while moving. [Hey! Perhaps we can relate these inverse-square and inverse-cube fields to the second- and third-order terms in the binomial development of the relativistic mass formula (see the section on kinetic energy in my paper on one of Feynman’s more original renderings of Maxwell’s equations) but… Well… Probably best to start by seeing how Feynman got those field equations out of Maxwell’s equations. It is a bit buried in his development of the Liénard-Wiechert potentials, which are written in terms of the scalar and vector potentials φ and A instead of the E and B vectors, but it should all work out.]
If the nuclear force is electromagnetic, then these ‘nuclear orbitals’ should respect the Planck-Einstein relation. So we can calculate frequencies and radii of orbitals now, right? The use of natural units and imaginary units to represent rotations/orthogonality in space might make the calculations easy (B = –iE, with c = 1). Indeed, with the 2019 revision of SI units, I might need to re-evaluate the usefulness of natural units (I always stayed away from them because they ‘hide’ the physics in the math by abstracting away the physical dimensions).
Hey ! Perhaps we can model everything with quaternions, using the imaginary units (i and j) to represent rotations in 3D space so as to ensure consistent application of the appropriate right-hand rules always (special relativity gets added to the mix, so we probably need to relate the (ds)² = (dx)² + (dy)² + (dz)² – (d(ct))² metric to the modified Hamilton q = a + ib + jc – kd expression then). Using vector equations throughout and thinking of h as a vector (something with a magnitude and a direction) when using the E = hf and h = pλ Planck-Einstein relations should do the trick, right? [In case you wonder how we can write f as a vector: angular frequency is a vector too. The Planck-Einstein relation is valid for both linear as well as circular oscillations: see our paper on the interpretation of the de Broglie wavelength.]
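The quaternion idea can be sketched in a few lines (a toy illustration using the standard Hamilton product, not the ‘modified’ expression above): the units i, j, k behave as three distinct 90-degree rotations, and the product automatically keeps track of the right-hand rules.

```python
def qmul(p, q):
    """Hamilton product of quaternions written as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

assert qmul(i, i) == minus_one           # i^2 = -1
assert qmul(j, j) == minus_one           # j^2 = -1
assert qmul(k, k) == minus_one           # k^2 = -1
assert qmul(qmul(i, j), k) == minus_one  # ijk = -1
assert qmul(i, j) == k                   # right-hand rule: ij = k
assert qmul(j, i) == (0, 0, 0, -1)       # but ji = -k: order matters
```

The non-commutativity (ij = k but ji = −k) is precisely what encodes the handedness conventions discussed above.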
Oh – and while special relativity is there because of Maxwell’s equations, gravity (general relativity) should be left out of the picture. Why? Because we would like to explain gravity as a residual very-far-field force. And trying to integrate gravity inevitably leads one to analyze particles as ‘black holes.’ Not nice, philosophically speaking. In fact, any 1/rⁿ field inevitably leads one to think of some kind of black hole at the center, which is why thinking of fundamental particles in terms of ring currents and dipole moments makes so much sense! [We need nothingness and infinity as mathematical concepts (limits, really) but they cannot possibly represent anything real, right?]
The consistent use of the Planck-Einstein law to model these nuclear electron orbitals should probably involve multiples of h to explain their size and energy: E = nhf rather than E = hf. For example, when calculating the radius of an orbital of a pointlike charge with the energy of a proton, one gets a radius that is only 1/4 of the proton radius (0.21 fm instead of 0.82 fm, approximately). To make the radius fit that of a proton, one has to use the E = 4hf relation. Indeed, for the time being, we should probably continue to reject the idea of using fractions of h to model deep electron orbitals. I also think we should avoid superluminal velocity concepts.
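A quick calculation illustrates the factor of 4 (note that the E = 4hf relation is my own hypothesis here, not mainstream physics):

```python
# Hedged numeric check of the E = n*h*f idea above (an assumption of the model,
# not mainstream physics): a pointlike charge carrying the proton's rest energy
# with E = h*f gives a = hbar*c/E ~ 0.21 fm; E = 4*h*f scales the radius by 4.
HBAR_C_MEV_FM = 197.3269804   # hbar*c in MeV*fm (CODATA)
E_PROTON_MEV = 938.2720813    # proton rest energy in MeV (CODATA)

a1 = HBAR_C_MEV_FM / E_PROTON_MEV   # E = h*f case
a4 = 4 * a1                         # E = 4*h*f case

print(f"a (n=1) = {a1:.3f} fm")     # ~0.210 fm
print(f"a (n=4) = {a4:.3f} fm")     # ~0.841 fm, close to the measured proton radius
```

The n = 4 radius lands within a few percent of the measured charge radius of the proton (about 0.83-0.84 fm), which is the coincidence the paragraph above relies on.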
This post sounds like madness? Yes. And then, no! To be honest, I think of it as one of the better Aha! moments in my life. 🙂
Brussels, 30 December 2020
Post scriptum (1 January 2021): Lots of stuff coming together here! 2021 will definitely see the Grand Unified Theory of Classical Physics becoming somewhat more real. It looks like Mills is going to make a major addition/correction to his electron orbital modeling work and, hopefully, manage to publish the gist of it in the eminent mainstream Nature journal. That makes a lot of sense: to move from an atom to an analysis of nuclei or complex three-particle systems, one should combine singlet and doublet energy states – if only to reduce three-body problems to two-body problems. 🙂 I still do not buy the fractional use of Planck’s quantum of action, though. Especially now that we got rid of the concept of a separate ‘nuclear’ charge (there is only one charge: the electric charge, and it comes in two ‘colors’): if Planck’s quantum of action is electromagnetic, then it comes in wholes or multiples. No fractions. Fractional powers of the distance in field or potential formulas are OK, however. 🙂
In 1995, W.E. Lamb Jr. wrote the following on the nature of the photon: “There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed. There are very good substitute words for “photon”, (e.g., “radiation” or “light”), and for “photonics” (e.g., “optics” or “quantum optics”). Similar objections are possible to use of the word “phonon”, which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits.”
The opinion of a Nobel Prize laureate carries some weight, of course, but we think the concept of a photon makes sense. As the electron moves from one (potential) energy state to another – from one atomic or molecular orbital to another – it builds an oscillating electromagnetic field which has an integrity of its own and, therefore, is not only wave-like but also particle-like.
We, therefore, dedicated the fifth chapter of our re-write of Feynman’s Lectures to a dual analysis of EM radiation (and, yes, this post is just an announcement of the paper so you are supposed to click the link to read it). It is, basically, an overview of a rather particular expression of Maxwell’s equations which Feynman uses to discuss the laws of radiation. I wonder how to – possibly – ‘transform’ or ‘transpose’ this framework so it might apply to deep electron orbitals and – possibly – proton-neutron oscillations.
The presentations were very good (especially those on the experimental results and the recent involvement of some very respectable institutions in addition to the usual suspects and, sadly, some fly-by-night operators too), and the follow-on conversation with one of the co-organizers convinced me that the researchers are serious, open-minded and – while not quite being able to provide all of the answers we are all seeking – very ready to discuss them seriously. Most, if not all, experiments involve transmutations of nuclei triggered by low-energy inputs such as low-energy radiation (the irradiation and transmutation of palladium by, say, a now-household 5 mW laser beam is just one of the examples). One experiment even triggered a current just by adding plain heat which, as you know, is nothing but very low-energy (infrared) radiation – although I must admit this is one I would like to see replicated en masse before believing it to be real (the equipment was small and simple, so the experimenters could easily share it with other labs).
When looking at these experiments, the comparison that comes to mind is that of an opera singer shattering crystal with his or her voice: some frequency in the sound causes the material to resonate at, yes, its resonant frequency (most probably an enormous but integer multiple of the sound frequency), and then the energy builds up – like when you give a child on a swing an extra push at just the right moment – as the amplitude becomes larger and larger, till the breaking point is reached. Another comparison is the failure of a suspension bridge when external vibrations (think of the rather proverbial soldier regiment here) cause similar resonance phenomena. So, yes, it is not unreasonable to believe that one might be able to induce neutron decay – and, thereby, release the binding energy between the proton and the electron in the process – by some low-energy stimulation, provided the frequencies are harmonic.
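The resonance mechanism can be illustrated with a toy driven-oscillator simulation – a generic sketch, of course, not a model of any actual experiment: drive an undamped oscillator at its own natural frequency and the amplitude keeps growing (until, physically, something breaks), while an off-resonance drive stays bounded.

```python
# Hedged toy model of the resonance build-up described above: an undamped
# oscillator x'' = -omega0^2 * x + cos(drive_freq * t), integrated from rest.
import math

def peak_amplitude(drive_freq, t_end, dt=1e-3):
    """Return max |x| over [0, t_end] using semi-implicit Euler integration."""
    omega0 = 1.0                  # natural frequency (arbitrary units)
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        v += (-omega0**2 * x + math.cos(drive_freq * t)) * dt
        x += v * dt
        t += dt
        peak = max(peak, abs(x))
    return peak

# On resonance the amplitude grows roughly like t/2; off resonance it stays small.
print(peak_amplitude(1.0, 50.0))  # resonant drive: large and still growing
print(peak_amplitude(2.0, 50.0))  # off-resonance drive: small and bounded
</br>```

The breaking point in the opera-singer story is simply the moment this ever-growing amplitude exceeds what the material can take.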
The problem with the comparison – and for the LENR idea to be truly useful – is this: one cannot see any net production of energy here. The strain or stress that builds up in the crystal glass is induced by the energy in the sound wave (which is why the singing demos usually include amplifiers to attain the required power and amplitude, i.e. the required decibels). In addition, the breaking of crystal or a suspension bridge typically involves a weaker link somewhere, or some directional aspect (the equivalent of an impurity in a crystal structure, I guess), but that is a minor point, and one that is probably easier to tackle than the question of the energy equation.
LENR research has probably advanced far enough now (the first series of experiments started in 1989) to slowly start focusing on the whole chain of these successful experiments: what is the equivalent, in these low-energy reactions, of the nuclear fuel in high-energy fission or fusion experiments? And, if it can be clearly identified, the researchers need to show that the energy that goes into the production of this fuel is much less than the energy you get out of it by burning it (with ‘burning’ meaning the decay reaction here, of course). [In case you have heard about Randell Mills’ hydrino experiments: he should show the emission spectrum of these hydrinos. Otherwise, one might think he is literally burning hydrogen. Attracting venture capital and providing scientific proof are not mutually exclusive, are they? In the meantime, I hope that what he is showing is real, in the way all LENR researchers hope it is real.]
LENR research may also usefully focus on getting the fundamental theory right. The observed anomalous heat and/or transmutation reactions cannot be explained by mainstream quantum physics (I am talking QCD here, so that’s QFT, basically). That should not surprise us: one does not need quarks or gluons to explain high-energy nuclear processes such as fission or fusion, either! My theory is, of course, characteristically simple – simplistic, some will say: the energy that is being unlocked is just the binding energy between the nuclear electron and the proton(s), in the neutron itself or in a composite nucleus, the simplest of which is the deuteron. I talk about that in my paper on matter-antimatter pair creation/annihilation as a nuclear process, but you do not need to be an adept of classical or realist interpretations of quantum mechanics to understand this point. To quote a motivational writer here: it is OK for things to be easy. 🙂
So LENR theorists just need to accept they are not mainstream – yet, that is – and come out with a more clearly articulated theory on why their stuff works the way it does. For some reason I do not quite understand, they come across as somewhat hesitant to do so. Fear of being frozen out even more by the mainstream? Come on, guys! You are coming out of the cold anyway, so why not be bold and go all the way? It is a time of opportunities, and the field of LENR is one of them, both theoretically as well as practically speaking. I honestly think it is one of those rare moments in the history of physics where experimental research may well be ahead of theoretical physics, so they should feel like proud trailblazers!
Personally, I do not think it will replace big classical nuclear energy plants anytime soon but, in a not-so-distant future, it might yield many very useful small devices: lower energy and, therefore, lower risk too. I also look forward to LENR research dealing the fatal blow to standard theory by confirming we do not need perturbation and renormalization theories to explain reality. 🙂
Post scriptum: If low-energy nuclear reactions are real, mainstream (astro)physicists will also have to rework their stories on cosmogenesis and the (future) evolution of the Universe. The standard story may well be summed up in the brief commentary of the HyperPhysics entry on the deuteron nucleus:
“The stability of the deuteron is an important part of the story of the universe. In the Big Bang model it is presumed that in early stages there were equal numbers of neutrons and protons since the available energies were much higher than the 0.78 MeV required to convert a proton and electron to a neutron. When the temperature dropped to the point where neutrons could no longer be produced from protons, the decay of free neutrons began to diminish their population. Those which combined with protons to form deuterons were protected from further decay. This is fortunate for us because if all the neutrons had decayed, there would be no universe as we know it, and we wouldn’t be here!”
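For the record, the 0.78 MeV figure in this quote is just the rest-mass difference between a neutron and a proton-plus-electron pair, which is easy to verify from the CODATA rest energies:

```python
# Hedged check of the 0.78 MeV figure quoted above: the energy released in
# neutron decay (n -> p + e + antineutrino) is the rest-mass difference
# between the neutron and the proton-electron pair (CODATA values, in MeV).
M_NEUTRON_MEV = 939.5654205    # neutron rest energy
M_PROTON_MEV = 938.2720813     # proton rest energy
M_ELECTRON_MEV = 0.5109989461  # electron rest energy

q = M_NEUTRON_MEV - M_PROTON_MEV - M_ELECTRON_MEV
print(f"n -> p + e energy release: {q:.3f} MeV")  # ~0.782 MeV
```

(The antineutrino carries off part of this energy in any actual decay, which is why the observed electron energies form a spectrum rather than a line.)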
If low-energy nuclear reactions are real – and I think they are – then the standard story about the Big Bang is obviously bogus too. I am not necessarily doubting the reality of the Big Bang itself (the ongoing expansion of the Universe is a scientific fact so, yes, the Universe must have been much smaller and (much) more energy-dense a long time ago), but the standard calculations on proton-neutron reactions taking place, or not, at cut-off temperatures/energies above/below 0.78 MeV do not make sense anymore. One should, perhaps, think more in terms of how matter-antimatter ratios might or might not have evolved (and, of course, one should keep an eye on the electron-proton ratio, but that should work itself out because of charge conservation) to correctly calculate the early evolution of the Universe, rather than focusing so much on proton-neutron ratios.
Why do I say that? Because neutrons do appear to consist of a proton and an electron – rather than of quarks and gluons – and they continue to decay and recombine, so these proton-neutron reactions must not be thought of as some one-off historical (discontinuous) process.
[…] Hmm… The more I look at the standard stories, the more holes I see… This one, however, is very serious. If LENR and/or cold fusion is real, then it will also revolutionize the theories on cosmogenesis (the evolution of the Universe). I instinctively like that, of course, because – just like quantization – I had the impression the discontinuities are there, but not quite in the way mainstream physicists – thinking more in terms of quarks and gluons rather than in terms of stuff that we can actually measure – portray the whole show.
I have been exploring the weird wonderland of physics for over seven years now. On several occasions, I thought I should just stop. It was rewarding, but terribly exhausting at times as well! I am happy I did not give up, if only because I finally managed to come up with a more realist interpretation of the ‘mystery’ of matter-antimatter pair production/annihilation. So, yes, I think I can confidently state I finally understand physics the way I want to understand it. It was an extraordinary journey, and I am happy I could share it with many fellow searchers (300 posts and 300,000 hits on my first website now, 10,000+ downloads of papers (including the downloads from Phil Gibb’s site and academia.edu) and, better still, lots of interesting conversations).
One of these conversations was with a fine nuclear physicist, Andrew Meulenberg. We were in touch on the idea of a neutron (some kind of combination of a proton and a ‘nuclear’ electron—following up on Rutherford’s original idea, basically). More importantly, we chatted about, perhaps, developing a model for the deuterium nucleus (deuteron)—the hydrogen isotope which consists of a proton and a neutron. However, I feel I need to let go here, if only because I do not think I have the required mathematical skills for a venture like this. I feel somewhat guilty of letting him down. Hence, just in case someone out there feels he could contribute to this, I am copying my last email to him below. It sort of sums up my basic intuitions in terms of how one could possibly approach this.
Can it be done? Maybe. Maybe not. All I know is that not many have been trying since Bohr’s young wolves hijacked scientific discourse after the 1927 Solvay Conference and elevated a mathematical technique – perturbation theory – to the scientific dogma which is now referred to as quantum field theory.
So, yes, now I am really signing off. Thanks for reading me, now or in the past—I wrote my first post here about seven years ago! I hope it was not only useful but enjoyable as well. Oh—And please check out my YouTube channel on Physics! 🙂
From: Jean Louis Van Belle Sent: 14 November 2020 17:59 To: Andrew Meulenberg Subject: Time and energy…
These things are hard… You are definitely much smarter with these things than I can aspire to… But I do have ideas. We must analyze the proton in terms of a collection of infinitesimally small charges – just like Feynman’s failed assembly of the electron (https://www.feynmanlectures.caltech.edu/II_28.html#Ch28-S3): it must be possible to do this, and it will give us the equivalent of electromagnetic mass for the strong force. The assembly of the proton out of infinitesimally small bits of charge will work because the proton is, effectively, massive – unlike an electron, which effectively appears as a ‘cloud’ of charge and, therefore, has several radii and, yes, can pass through a nucleus and also ‘envelop’ a proton when forming a neutron with it.
I cannot offer much in terms of analytical skills here. All of quantum physics – the new model of a hydrogen atom – grew out of the intuition of a young genius (Louis de Broglie) and a seasoned mathematical physicist (Erwin Schroedinger) finding a mathematical equation for it. That model is valid still – we just need to add spin from the outset (cf. the plus/minus sign of the imaginary unit) and acknowledge the indeterminacy in it is just statistical, but these are minor things.
I have not looked at your analysis of a neutron as a (hyper-)excited state of the hydrogen atom yet, but it must be correct: what else can it be? It is what Rutherford said it should be when he first hypothesized the existence of the neutron.
I do not know how much time I want to devote to this (to be honest, I am totally sick of academic physics) but – whatever time I have – I want to contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.
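As a numerical footnote to the email above: the ‘assembly energy’ logic in the quoted Feynman section (II-28) can be sketched quickly. For a total charge e assembled into a uniformly charged sphere of radius a, the energy is U = (3/5)·e²/(4πε₀a), and equating U to the rest energy fixes the radius. The numbers below are for an electron, just to illustrate the logic:

```python
# Hedged sketch of the electromagnetic-mass / assembly-energy idea (Feynman II-28):
# assembling charge e into a uniform sphere of radius a costs
# U = (3/5) * e^2 / (4*pi*eps0*a); setting U equal to the rest energy gives a.
E_CHARGE = 1.602176634e-19    # elementary charge, C (exact since the 2019 SI revision)
K_COULOMB = 8.9875517923e9    # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
E_REST_J = 8.1871057769e-14   # electron rest energy, J

a = 0.6 * K_COULOMB * E_CHARGE**2 / E_REST_J   # radius in meters
print(f"a = {a * 1e15:.3f} fm")                # ~1.69 fm, i.e. 3/5 of the classical electron radius
```

Applying the same logic with the proton’s (much larger) rest energy shrinks the radius by a factor of about 1836, so a ‘massive’ assembled proton comes out tiny on this account – which is consistent with the intuition in the email.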