Bell’s No-Go Theorem

I’ve been asked a couple of times: “What about Bell’s No-Go Theorem, which tells us there are no hidden variables that can explain quantum-mechanical interference in some kind of classical way?” My answer to that question is quite arrogant, because it’s the answer Albert Einstein would give when younger physicists would point out that his objections to quantum mechanics (which he usually expressed as some new thought experiment) violated this or that axiom or theorem in quantum mechanics: “Das ist mir Wur(sch)t.”

In English: I don’t care. Einstein never lost the discussions with Heisenberg or Bohr: he just got tired of them. Like Einstein, I don’t care either – because Bell’s Theorem is what it is: a mathematical theorem. Hence, it respects the GIGO principle: garbage in, garbage out. In fact, John Stewart Bell himself – one of the third-generation physicists, we may say – had always hoped that some “radical conceptual renewal”[1] might disprove his conclusions. We should also remember Bell kept exploring alternative theories – including Bohm’s pilot wave theory, which is a hidden variables theory – until his death at a relatively young age. [J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics. He was just 62 years old then.]

So I never really explored Bell’s Theorem. I was, therefore, very happy to get an email from Gerard van der Ham, who seems to have the necessary courage and perseverance to research this question in much more depth and, yes, relate it to a (local) realist interpretation of quantum mechanics. I actually still need to study his papers, and analyze the YouTube video he made (which looks much more professional than my videos), but this is promising.

To be frank, I got tired of all of these discussions – just like Einstein, I guess. The difference between realist interpretations of quantum mechanics and the Copenhagen dogmas is just a factor of 2 or π in the formulas, and Richard Feynman famously said we should not care about such factors (Feynman’s Lectures, III-2-4). Modern physicists fudge them away consistently. They have done much worse than that, actually. :-/ They are not interested in truth. Convention, dogma, indoctrination – non-scientific historical stuff – seem to keep them from it. And modern science gurus – the likes of Sean Carroll or Sabine Hossenfelder – play the age-old game of being interesting: they pretend to know something you do not know or – if they don’t – that they are close to getting the answers. They are not: the answers are there already. They just don’t want to tell you that because, yes, it would be the end of physics.


[1] See: John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169–172, Cambridge University Press, 1987.

Feynman’s religion

Perhaps I should have titled this post differently: the physicist’s worldview. We may, effectively, assume that Richard Feynman’s Lectures on Physics represent mainstream sentiment, and he does get into philosophy—more or less liberally depending on the topic. Hence, yes, Feynman’s worldview is pretty much that of most physicists, I would think. So what is it? One of his more succinct statements is this:

“Often, people in some unjustified fear of physics say you cannot write an equation for life. Well, perhaps we can. As a matter of fact, we very possibly already have an equation to a sufficient approximation when we write the equation of quantum mechanics.” (Feynman’s Lectures, p. II-41-11)

He then jots down the equation which Schrödinger has on his grave (shown below). It is a differential equation: it relates the wavefunction (ψ) to its time derivative through the Hamiltonian coefficients (Hij), which describe how physical states change with time, the imaginary unit (i) and Planck’s quantum of action (ħ).

[Image: Schrödinger’s equation as engraved on his gravestone]
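For completeness – and this is my own rendering, based on the description above, not a quote from Feynman – the equation is the general Schrödinger equation: iħ·∂ψ/∂t = Hψ or, in the Hamiltonian-coefficient notation referred to above, iħ·dCi/dt = Σj Hij·Cj.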

Feynman, and all modern academic physicists in his wake, claim this equation cannot be understood. I don’t agree: the explanation is not easy, and requires quite some prerequisites, but it is not any more difficult than, say, trying to understand Maxwell’s equations, or the Planck-Einstein relation (E = ħ·ω = h·f).

In fact, a good understanding of both allows you to understand not only Schrödinger’s equation but all of quantum physics. The basics are these: the presence of the imaginary unit tells us the wavefunction is cyclical, and that it is an oscillation in two dimensions. The presence of Planck’s quantum of action in this equation tells us that such oscillation comes in units of ħ. Schrödinger’s wave equation as a whole is, therefore, nothing but a succinct representation of the energy conservation principle. Hence, we can understand it.

At the same time, we cannot, of course. We can only grasp it to some extent. Indeed, Feynman concludes his philosophical remarks as follows:

“The next great era of awakening of human intellect may well produce a method of understanding the qualitative content of equations. Today we cannot. Today we cannot see that the water flow equations contain such things as the barber pole structure of turbulence that one sees between rotating cylinders. We cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality—or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.” (Feynman’s Lectures, p. II-41-12)

I think that puts the matter to rest—for the time being, at least. 🙂

Signing off…

I am done with reading Feynman and commenting on it—especially because this site just got mutilated by the third DMCA takedown of material (see below). Follow me to my new blog. No Richard Feynman, Mr. Gottlieb or DMCA there! Pure logic only. This site has served its purpose, and that is to highlight the Rotten State of QED. 🙂

A long time ago – in 1996, to be precise – I studied Wittgenstein’s Tractatus Logico-Philosophicus (TLP) as part of a part-time BPhil degree program. At the time, I did not like it. The lecture notes were two or three times the volume of the work itself, and I got pretty poor marks for it. I guess one has to go through life to get an idea of what he was writing about. With all of the nonsense lately, I thought about one of the lines in that little book: “One must, so to speak, throw away the ladder after he has climbed up it. One must transcend the propositions, and then he will see the world aright.” (TLP, 6.54)

For Mr. Gottlieb and other narrow-minded zealots and mystery wallahs – who would not be interested in Wittgenstein anyway – I’ll just quote Wittgenstein’s quote of Ferdinand Kürnberger:

“. . . und alles, was man weiss, nicht bloss rauschen und brausen gehört hat, lässt sich in drei Worten sagen.”

I will let you google-translate that and, yes, sign off here—in the spirit of Ludwig Boltzmann and Paul Ehrenfest. [Sorry for being too lengthy or verbose here.]

“Bring forward what is true. Write it so that it is clear. Defend it to your last breath.” (Boltzmann)

Knox (Automattic)

Jun 20, 2020, 4:30 PM UTC

Hello,

We’ve received the DMCA takedown notice below regarding material published on your WordPress.com site, which means the complainant is asserting ownership of this material and claiming that your use of it is not permitted by them or the law. As required by the DMCA, we have disabled public access to the material.

Repeated incidents of copyright infringement will also lead to the permanent suspension of your WordPress.com site. We certainly don’t want that to happen, so please delete any other material you may have uploaded for which you don’t have the necessary rights and refrain from uploading additional material that you do not have permission to upload. Although we can’t provide legal advice, these resources might help you make this determination:

https://wordpress.com/support/counter-notice/#what-is-fair-use

If you believe that this DMCA takedown notice was received in error, or if you believe your usage of this material would be considered fair use, it’s important that you submit a formal DMCA counter notice to ensure that your WordPress.com site remains operational. If you submit a valid counter notice, we will return the material to your site in 10 business days if the complainant does not reply with legal action.

Please refer to the following pages for more information:

Please note that republishing the material yourself, without permission from the copyright holder (even after you have submitted a counter notice) will result in the permanent suspension of your WordPress.com site and/or account.

Thank you.

[…]

Well… Thank you, WordPress. I guess you’ll first suspend the site and then the account? :-/ I hope you’ll give me some time to create another account, at least? If not, this spacetime rebel will have to find another host for his site. 🙂

Lasers, masers, two-state systems and Feynman’s Lectures

The past few days, I re-visited Feynman’s lectures on quantum math—the ones in which he introduces the concept of probability amplitudes (I will provide no specific reference or link to them because that is apparently unfair use of copyrighted material). The Great Richard Feynman introduces probability amplitudes as part of a larger discussion of two-state systems—and lasers and masers are a great example of such two-state systems. I did a few posts on that while building up this blog over the past few years but, because these have been mutilated by DMCA take-downs of diagrams and illustrations as a result of such ‘unfair use’, I won’t refer to them either. The point is this:

I have come to the conclusion we actually do not need the machinery of state vectors and probability amplitudes to explain how a maser (and, therefore, a laser) actually works.

The functioning of masers and lasers crucially depends on a dipole moment (of an ammonia molecule for a maser and of light-emitting atoms for a laser) which will flip up and down in sync with an external oscillating electromagnetic field. It all revolves around the resonant frequency (ω0), which depends on the tiny difference between the energies of the ‘up’ and ‘down’ states. This tiny energy difference (the A in the Hamiltonian matrix) is given by the product of the dipole moment (μ) and the external electromagnetic field that gets the thing going (Ɛ0). [Don’t confuse these symbols with the magnetic and electric constants!] And so… Well… I have come to the conclusion that we can analyze this as just any other classical electromagnetic oscillation: we can use the Planck-Einstein relation directly to determine the frequency, instead of having to invoke all of the machinery that comes with probability amplitudes, base states, Hamiltonian matrices and differential equations:

ω0 = E/ħ = A/ħ = μƐ0/ħ

All the rest follows logically.
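For what it is worth, here is a quick back-of-the-envelope sketch – mine, not something from Feynman or from the paper – which just applies the Planck-Einstein relation to the well-known ~23.87 GHz ammonia inversion line to recover the tiny energy difference mentioned above. In the notation used above, that ~10⁻⁴ eV plays the role of A = μƐ0.

# Back-of-the-envelope check (illustrative): apply the Planck-Einstein relation
# E = h*f to the ~23.87 GHz ammonia inversion line to recover the tiny energy
# splitting that drives the maser.
import math

h = 6.62607015e-34      # Planck's constant, J*s
hbar = h / (2 * math.pi)
eV = 1.602176634e-19    # J per eV

f0 = 23.87e9            # Hz, ammonia inversion (maser) frequency
omega0 = 2 * math.pi * f0
E = hbar * omega0       # = h*f0, the energy difference in joules

print(f"E = {E/eV:.2e} eV")   # ~1e-4 eV: the 'tiny difference' between the two states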

You may say: so what? Well… I find this very startling. I’ve been systematically dismantling a lot of ‘quantum-mechanical myths’, and so this seemed to be the last myth standing. It has fallen now: here is the link to the paper.

What’s the implication? The implication is that we can analyze all of the QED sector now in terms of classical mechanics: oscillator math, Maxwell’s equations, relativity theory and the Planck-Einstein relation will do. All of that was published before the First World War broke out, in other words—with the added discoveries made by the likes of Arthur Compton (photon-electron interactions), Carl Anderson (the discovery of anti-matter), James Chadwick (experimental confirmation of the existence of the neutron) and a few others after the war, of course! But that’s it, basically: nothing more, nothing less. So all of the intellectual machinery that was invented after World War I (the Bohr-Heisenberg theory of quantum mechanics) and after World War II (quantum field theory, the quark hypothesis and what have you) may be useful in the QCD sector of physics but − IMNSHO − even that remains to be seen!

I actually find this more than startling: it is shocking! I started studying Feynman’s Lectures – and everything that comes with them – back in 2012, only to find out that my idol had no intention whatsoever of making things easy. That is OK. In his preface, he writes he wanted to make sure that even the most intelligent student would be unable to completely encompass everything that was in the lectures—so that’s why we were attracted to them, of course! But that is, of course, something quite different from doing what he did, which is to promote a Bright Shining Lie…

[…]

Long time ago, I took the side of Bill Gates in the debate on Feynman’s qualities as a teacher. For Bill Gates, Feynman was, effectively, “the best teacher he never had.” One of those very bright people who actually had him as a teacher (John F. McGowan, PhD and math genius) paints a very different picture, however. I would take the side of McGowan in this discussion now—especially when it turns out that Mr. Feynman’s legacy can apparently no longer be freely used as a reference anyway.

Philip Anderson and Freeman Dyson died this year—both at the age of 96. They were the last of what is generally thought of as a brilliant generation of quantum physicists—the third generation, we might say. May they all rest in peace.

Post scriptum: In case you wonder why I refer to them as the third rather than the second generation: I actually consider Heisenberg’s generation to be the second generation of quantum physicists—first was the generation of the likes of Einstein!

As for the (intended) irony in my last remarks, let me quote from an interesting book on the state of physics that was written by Doris Teplitz back in 1982: “The state of the classical electromagnetic theory reminds one of a house under construction that was abandoned by its workmen upon receiving news of an approaching plague. The plague was in this case, of course, quantum theory.” I now very much agree with this bold statement. So… Well… I think I’ve had it with studying Feynman’s Lectures. Fortunately, I spent only ten years on them or so. Academics have to spend their whole life on what Paul Ehrenfest referred to as the ‘unendlicher Heisenberg-Born-Dirac-Schrödinger Wurstmachinen-Physik-Betrieb.’ :-/

Neutrons as composite particles and electrons as gluons?

Neutrons as composite particles

In our rather particular conception of the world, we think of photons, electrons, and protons – and neutrinos – as elementary particles. Elementary particles are, obviously, stable: they would not be elementary, otherwise. The difference between photons and neutrinos on the one hand, and electrons, protons, and other matter-particles on the other, is that we think all matter-particles carry charge—even if they are neutral.

Of course, to be neutral, one must combine positive and negative charge: neutral particles can, therefore, not be elementary—unless we accept the quark hypothesis, which we do not like to do (not now, at least). A neutron must, therefore, be an example of a neutral (composite) matter-particle. We know it is unstable outside of the nucleus, but its longevity – as compared to other unstable particles – is quite remarkable: it survives about 15 minutes—for other unstable particles, we usually talk about micro- or nanoseconds, or worse!

Let us explore what the neutron might be—if only to provide some kind of model for analyzing other unstable particles, perhaps. We should first note that the neutron radius is about the same as that of a proton. How do we know this? NIST only gives an rms charge radius for the proton, based on the various proton radius measurements; for the neutron, we only have a CODATA value for the Compton wavelength, which is more or less the same as that of the proton. To be precise, the two values are these:

λneutron = 1.31959090581(75)×10⁻¹⁵ m

λproton = 1.32140985539(40)×10⁻¹⁵ m

These values are just mechanical calculations based on the mass or energy of protons and neutrons respectively: the Compton wavelength is, effectively, calculated as λ = h/mc.[1] However, you should, of course, not rely on CODATA values only: you should google for experiments measuring the size of a neutron directly or indirectly to get an idea of what is going on here.

Let us look at the energies. The neutron’s energy is about 939,565,420 eV. The proton energy is about 938,272,088 eV. Hence, the difference is about 1,293,332 eV. This mass difference, combined with the fact that neutrons spontaneously decay into protons but – conversely – there is no such thing as spontaneous proton decay[2], confirms we are probably justified in thinking that a neutron must, somehow, combine a proton and an electron. The mass of an electron is 0.511 MeV/c2, so that is only about 40% of the energy difference, but the kinetic and binding energy could make up for the remainder.[3]
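For the reader who wants to check the arithmetic, here is a small sketch – mine, using CODATA values – that reproduces the Compton wavelengths from λ = h/mc as well as the neutron-proton energy difference:

# Check the Compton wavelengths (lambda = h/(m*c)) and the neutron-proton
# energy difference against the numbers quoted above (CODATA values).
h  = 6.62607015e-34        # J*s
c  = 299792458.0           # m/s
eV = 1.602176634e-19       # J
m_p = 1.67262192369e-27    # proton mass, kg
m_n = 1.67492749804e-27    # neutron mass, kg
m_e = 9.1093837015e-31     # electron mass, kg

for name, m in [("proton", m_p), ("neutron", m_n)]:
    print(f"lambda_{name} = {h/(m*c):.6e} m")   # ~1.32e-15 m for both

dE = (m_n - m_p) * c**2 / eV        # ~1.29e6 eV
E_e = m_e * c**2 / eV               # ~0.511e6 eV
print(f"neutron-proton difference = {dE:.0f} eV")
print(f"electron rest energy      = {E_e:.0f} eV  (~{E_e/dE:.0%} of the difference)")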

So, yes, we will want to think of a neutron as carrying both positive and negative charge inside. These charges balance each other out (there is no net electric charge) but their respective motion still yields a small magnetic moment, which we think of as some net result from the motion of the positive and negative charge inside.

Let us now move to the next grand idea which emerges here.

Electrons as gluons?

The negative charge inside of a neutron may help to keep the nucleus together. We can, therefore, think of this charge as some kind of nuclear glue. We tentatively explored this idea in a paper: Electrons as gluons? The basic idea is this: the electromagnetic force keeps electrons close to the positively charged nucleus and we should, therefore, not exclude that a similar arrangement of positive and negative charges – but one involving some strong(er) force to explain the difference in scale – might exist within the nucleus.

Nonsense? We don’t think so. Consider this: one never finds a proton pair without one or more neutrons. The main isotope of helium (4He), for example, has a nucleus consisting of two protons and two neutrons, while a helium-3 (3He) nucleus consists of two protons and one neutron. When we find a pair of nucleons, like in deuterium (2H), this will always consist of a proton and a neutron. The idea of a negative charge acting as an in-between to keep two positive charges together is, therefore, quite logical. Think of it as the opposite of a positively charged nucleus keeping electrons together in a multi-electron atom.

Does this make sense to you? It does to me, so I’d appreciate any converging or diverging thoughts you might have on this. 🙂

[1] The reader should note that the Compton wavelength and, therefore, the Compton radius are inversely proportional to the mass: a more massive particle is, therefore, associated with a smaller radius. This is somewhat counterintuitive, but it is what it is.

[2] None of the experiments (think of the Super-Kamiokande detector here) found any evidence of proton decay so far.

[3] The reader should note that the mass of a proton and an electron add up to less than the mass of a neutron, which is why it is only logical that a neutron should decay into a proton and an electron. Binding energies – think of Feynman’s calculations of the radius of the hydrogen atom, for example – are usually negative.

The mystery of the elementary charge

As part of my ‘debunking quantum-mechanical myths’ drive, I re-wrote Feynman’s introductory lecture on quantum mechanics. Of course, it has got nothing to do with Feynman’s original lecture—titled Quantum Behavior: I just made some fun of Feynman’s preface and that’s basically it in terms of this iconic reference. Hence, Mr. Gottlieb should not make too much of a fuss—although I hope he will, of course, because it would draw more attention to the paper. It was a fun exercise because it encouraged me to join an interesting discussion on ResearchGate (I copied the topic and some of the exchange below) which, in turn, made me think some more about what I wrote about the form factor in the explanation of the electron, muon and proton. Let me copy the relevant paragraphs:

When we talked about the radius of a proton, we promised you we would talk some more about the form factor. The idea is very simple: an angular momentum (L) can always be written as the product of a moment of inertia (I) and an angular frequency (ω). We also know that the moment of inertia for a rotating mass or a hoop is equal to I = mr2, while it is equal to I = mr2/4 for a solid disk. So you might think this explains the 1/4 factor: a proton is just an anti-muon but in disk version, right? It is like a muon because of the strong force inside, but it is even smaller because it packs its charge differently, right?

Maybe. Maybe not. We think probably not. Maybe you will have more luck when playing with the formulas, but we could not demonstrate this. First, we must note, once again, that the radii of a muon (about 1.87 fm) and of a proton (0.83-0.84 fm) are both smaller than the radius of the pointlike charge inside of an electron (α·ħ/(me·c) ≈ 2.818 fm). Hence, we should start by suggesting how we would pack the elementary charge into a muon first!

Second, we noted that the proton mass is 8.88 times that of the muon, while its radius is only 2.22 times smaller than the muon’s – so, yes, that 1/4 ratio once more – but these numbers are still weird: even if we somehow managed to make abstraction of this form factor by accounting for the different angular momentum of a muon and a proton, we would probably still be left with a mass difference we cannot explain in terms of a unique force geometry.

Perhaps we should introduce other hypotheses: a muon is, after all, unstable, and so there may be another factor there: excited states of electrons are unstable too and involve an n = 2 or some other number in Planck’s E = n·h·f equation, so perhaps we can play with that too.

Our answer to such musings is: yes, you can. But please do let us know if you have more luck than we did when playing with these formulas: it is the key to the mystery of the strong force, and we did not find it—so we hope you do!
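As an aside – this is not part of the copied text, just my own quick check of the numbers in it – the ratios quoted above are easy to reproduce. The muon ‘radius’ here is its reduced Compton wavelength ħ/(mμ·c), and the proton radius is the measured charge radius, which is an input rather than something we calculate:

# Quick check of the ratios quoted in the excerpt above. The muon radius is
# taken to be its reduced Compton wavelength; the proton charge radius is the
# measured value (an input, not something we calculate here).
hbar  = 1.054571817e-34      # J*s
c     = 299792458.0          # m/s
eV    = 1.602176634e-19      # J
alpha = 0.0072973525693      # fine-structure constant

E_e  = 0.51099895e6 * eV     # electron rest energy, J
E_mu = 105.6583755e6 * eV    # muon rest energy, J
E_p  = 938.27208816e6 * eV   # proton rest energy, J

r_e  = alpha * hbar * c / E_e   # 'pointlike charge' radius: alpha*hbar/(m_e*c) ~ 2.818 fm
r_mu = hbar * c / E_mu          # muon reduced Compton wavelength ~ 1.87 fm
r_p  = 0.841e-15                # proton charge radius (measured), m

print(f"r_e = {r_e*1e15:.3f} fm, r_mu = {r_mu*1e15:.3f} fm, r_p = {r_p*1e15:.3f} fm")
print(f"mass ratio   m_p/m_mu = {E_p/E_mu:.2f}")     # ~8.88
print(f"radius ratio r_mu/r_p = {r_mu/r_p:.2f}")     # ~2.22
print(f"quotient = {(E_p/E_mu)/(r_mu/r_p):.2f}")     # ~4, the '1/4' form factor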

So… Well… This is really as far as a realist interpretation of quantum mechanics will take you. One can solve most so-called mysteries in quantum mechanics (interference of electrons, tunneling and what have you) with plain old classical equations (applying Planck’s relation to electromagnetic theory, basically) but here we are stuck: the elementary charge itself is a most mysterious thing. When packing it into an electron, a muon or a proton, Nature gives it a very different shape and size.

The shape or form factor is related to the angular momentum, while the size has got to do with scale: the scale of a muon and a proton is very different from that of an electron—smaller even than the pointlike Zitterbewegung charge which we used to explain the electron. So that’s where we are. It’s like we’ve got two quanta—rather than one only: Planck’s quantum of action, and the elementary charge. Indeed, Planck’s quantum of action may also be said to express itself very differently in space or in time (h = E·T versus h = p·λ). Perhaps there is room for additional simplification, but I doubt it. Something inside of me says that, when everything is said and done, I will just have to accept that electrons are electrons, and protons are protons, and a muon is a weird unstable thing in-between—and all other weird unstable things in-between are non-equilibrium states which one cannot explain with easy math.

Would that be good enough? For you? I cannot speak for you. Is it a good enough explanation for me? I am not sure. I have not made my mind up yet. I am taking a bit of a break from physics for the time being, but the question will surely continue to linger in the back of my mind. We’ll keep you updated on progress! Thanks for staying tuned! JL

PS: I realize the above might sound a bit like crackpot theory, but that is just because it is very dense and very light writing at the same time. If you read the paper in full, you should be able to make sense of it. 🙂 You should also check the formulas for the moments of inertia: the I = mr2/4 formula for a solid disk depends on your choice of the axis of symmetry (it holds for rotation about a diameter; for rotation about the central axis, it is mr2/2).

ResearchGate

Peter Jackson

Dear Peter – Thanks so much for checking the paper and your frank comments. That is very much appreciated. I know I have gone totally overboard in dismissing many of the post-WW II developments in quantum physics – most notably the idea of force-carrying particles (bosons – including Higgs, W/Z bosons and gluons). My fundamental intuition here is that field theories should be fine for modeling interactions (I’ll quote Dirac’s 1958 comments on that at the very end of my reply here) and, yes, we should not be limiting the idea of a field to EM fields only. So I surely do not want to give the impression I think classical 19th/early 20th century physics – Planck’s relation, electromagnetic theory and relativity – can explain everything.

Having said that, the current state of physics does resemble the state of scholastic philosophy before it was swept away by rationalism: I feel there has been a multiplication of ill-defined concepts that did not add much additional explanation of what might be the case (the latter expression is Wittgenstein’s definition of reality). So, yes, I feel we need some reincarnation of William of Occam to apply his Razor and kick ass. Fortunately, it looks like there are many people trying to do exactly that now – a return to basics – so that’s good: I feel like I can almost hear the tectonic plates moving. 🙂

My last paper is a half-serious rewrite of Feynman’s first Lecture on Quantum Mechanics. Its intention is merely provocative: I want to highlight what of the ‘mystery’ in quantum physics is truly mysterious and what is humbug or – as Feynman would call it – Cargo Cult Science. The section on the ‘form factor’ (what is the ‘geometry’ of the strong force?) in that paper is the shortest and most naive paragraph in that text, but it actually does highlight the one and only question that keeps me awake: what is that form factor, what different geometry do we need to explain a proton (or a muon) as opposed to, say, an electron? I know I have to dig into the kind of stuff that you are highlighting – and into Alex Burinskii’s Dirac-Kerr-Newman models (which also integrate gravity) – to find elements that may, one day, explain why a muon is not an electron, and why a proton is not a positron.

Indeed, I think the electron and photon models are just fine: classical EM and Planck’s relation are all that’s needed, and so I actually don’t want to waste more time on the QED sector. But a decent muon and proton model will, obviously, require ‘something else’ than Planck’s relation, the electric charge and electromagnetic theory. The question here is: what is that ‘something else’, exactly?

Even if we find another charge or another field theory to explain the proton, we are still just at the beginning of explaining the QCD sector. Indeed, the proton and the muon are stable (fairly stable, I should say, in the case of the muon – which I want to investigate because of the question of matter generations). In contrast, transient particles and resonances do not respect Planck’s relation – that’s why they are unstable – so we are talking about non-equilibrium states, and that’s an entirely different ballgame. In short, I think Dirac’s final words in the very last (fourth) edition of his ‘Principles of Quantum Mechanics’ still ring very true today. They were written in 1958, so Dirac was aware of the work of Gell-Mann and Nishijima (the contours of quark-gluon theory) and, clearly, did not think much of it (I understand he also had conversations with Feynman on this):

“Quantum mechanics may be defined as the application of equations of motion to particles. […] The domain of applicability of the theory is mainly the treatment of electrons and other charged particles interacting with the electromagnetic field—a domain which includes most of low-energy physics and chemistry.

Now there are other kinds of interactions, which are revealed in high-energy physics and are important for the description of atomic nuclei. These interactions are not at present sufficiently well understood to be incorporated into a system of equations of motion. Theories of them have been set up and much developed and useful results obtained from them. But in the absence of equations of motion these theories cannot be presented as a logical development of the principles set up in this book. We are effectively in the pre-Bohr era with regard to these other interactions. It is to be hoped that with increasing knowledge a way will eventually be found for adapting the high-energy theories into a scheme based on equations of motion, and so unifying them with those of low-energy physics.”

Again, many thanks for reacting and, yes, I will study the references you gave – even if I am a bit skeptical of Wolfram’s new project. Cheers – JL

Paul Ehrenfest and the search for truth

On 25 September 1933, Paul Ehrenfest took his son Wassily, who was suffering from Down syndrome, for a walk in the park. He shot him, and then killed himself. He was only 53. That’s my age bracket. From the letters he left (here is a summary in Dutch), we know that his frustration at not being able to arrive at some kind of common-sense interpretation of the new quantum physics played a major role in the anxiety that had brought him to this point. He had taken courses from Ludwig Boltzmann as an aspiring young man. We, therefore, think Boltzmann’s suicide – for similar reasons – might have troubled him too.

His suicide did not come unexpectedly: he had announced it. In one of his letters to Einstein, he complains about ‘indigestion’ from the ‘unendlicher Heisenberg-Born-Dirac-Schrödinger Wurstmachinen-Physik-Betrieb.’ I’ll let you google-translate that. :-/ He also seems to have gone to the trouble of summarizing all his questions on the new approach in an article in what was then one of the top journals for physics: Einige die Quantenmechanik betreffende Erkundigungsfragen, Zeitschrift für Physik 78 (1932) 555-559 (quoted in the above-mentioned review article). This title I will translate: Some Questions about Quantum Mechanics.

[Image: Paul Ehrenfest in happier times (painting by Harm Kamerlingh Onnes, 1920)]

A diplomat-friend of mine once remarked this: “It is good you are studying physics only as a pastime. Professional physicists are often troubled people—miserable.” It is an interesting observation from a highly intelligent outsider. To be frank, I understand this strange need to probe things at the deepest level—to be able to explain what might or might not be the case (I am using Wittgenstein’s definition of reality here). Even H.A. Lorentz, who – fortunately, perhaps – died before his successor did what he did, was becoming quite alarmist about the sorry state of academic physics near the end of his life—and he, Albert Einstein, and so many others were not alone. Not then, and not now. All of the founding fathers of quantum mechanics ended up becoming pretty skeptical about the theory they had created. We have documented that elsewhere so we won’t talk too much about it here. Even John Stewart Bell himself – one of the third generation of quantum physicists, we may say – did not like his own ‘No Go Theorem’ and thought that some “radical conceptual renewal”[1] might disprove his conclusions.

The Born-Heisenberg revolution has failed: most – if not all – contemporary high-brow physicists are pursuing alternative theories—in spite of, or because of, the academic straitjackets they have to wear. If a genius like Ehrenfest didn’t buy it, then I won’t buy it either. Furthermore, the masses surely don’t buy it and, yes, truth – in this domain too – is, fortunately, being defined more democratically nowadays. The Nobel Prize Committee will have to do some serious soul-searching—if not five years from now, then ten.

We feel sad for the physicists who died unhappily—and surely for those who took their lives out of depression—because the common-sense interpretation they were seeking is so self-evident: de Broglie’s intuition in regard to matter being wavelike was correct. He just misinterpreted its nature: it is not a linear but a circular wave. We quickly insert the quintessential illustration (courtesy of Celani, Vassallo and Di Tommaso) but, for more detail, we refer the reader to our articles or – more accessible, perhaps – our manuscript for the general public.

[Illustration courtesy of Celani, Vassallo and Di Tommaso]

The equations are easy. The mass of an electron – of any matter-particle, really – is the equivalent mass of the oscillation of the charge it carries. This oscillation is, most probably, statistically regular only. So we think it’s chaotic, actually, but we also think the words spoken by Polonius in Shakespeare’s Hamlet apply to it: “Though this be madness, yet there is method in ‘t.” This means we can meaningfully speak of a cycle time and, therefore, of a frequency. Erwin Schrödinger stumbled upon this motion while exploring solutions to Dirac’s wave equation for free electrons, and Dirac immediately grasped its significance: he mentions Schrödinger’s discovery rather prominently in his Nobel Prize Lecture:

“It is found that an electron which seems to us to be moving slowly, must actually have a very high frequency oscillatory motion of small amplitude superposed on the regular motion which appears to us. As a result of this oscillatory motion, the velocity of the electron at any time equals the velocity of light. This is a prediction which cannot be directly verified by experiment, since the frequency of the oscillatory motion is so high and its amplitude is so small. But one must believe in this consequence of the theory, since other consequences of the theory which are inseparably bound up with this one, such as the law of scattering of light by an electron, are confirmed by experiment.” (Paul A.M. Dirac, Theory of Electrons and Positrons, Nobel Lecture, December 12, 1933)

Unfortunately, Dirac confuses the concept of the electron as a particle with the concept of the (naked) charge inside. Indeed, the idea of an elementary (matter-)particle must combine the idea of a charge and its motion to account for both the particle-like as well as the wave-like character of matter-particles. We do not want to dwell on all of this because we’ve written too many papers on it already. We just thought it would be good to sum up the core of our common-sense interpretation of physics. Why? To honor Boltzmann and Ehrenfest: I think of their demise as a sacrifice in the search for truth.

[…]

OK. That sounds rather tragic—sorry for that! For the sake of brevity, we will just describe the electron here.

I. Planck’s quantum of action (h) and the speed of light (c) are Nature’s most fundamental constants. Planck’s quantum of action relates the energy of a particle to its cycle time and, therefore, to its frequency:

(1) h = E·T = E/f ⇔ ħ = E/ω

The charge that is whizzing around inside of the electron has zero rest mass, and so it whizzes around at the speed of light: the slightest force on it gives it an infinite acceleration. It, therefore, acquires a relativistic mass which is equal to mγ = me/2 (we refer to our paper(s) for a relativistically correct geometric argument). The momentum of the pointlike charge, in its circular or orbital motion, is, therefore, equal to p = mγ·c = me·c/2.

The (angular) frequency of the oscillation is also given by the formula for the (angular) velocity:

(2) c = a·ω ⇔ ω = c/a

While Eq. (1) is a fundamental law of Nature, Eq. (2) is a simple geometric or mathematical relation only.

II. From (1) and (2), we can now calculate the radius of this tiny circular motion as:

(3a) ħ = E/ω = E·a/c ⇔ a = (ħ·c)/E

Because we know the mass of the electron is the inertial mass of the state of motion of the pointlike charge, we may use Einstein’s mass-energy equivalence relation to rewrite this as the Compton radius of the electron:

(3b) a = (ħ·c)/E = (ħ·c)/(me·c2) = ħ/(me·c)

Note that we only used two fundamental laws of Nature so far: the Planck-Einstein relation and Einstein’s mass-energy equivalence relation.

III. We must also be able to express the Planck-Einstein quantum as the product of the momentum (p) of the pointlike charge and some length λ:

(4) h = p·λ

The question here is: what length? The circumference of the loop, or its radius? The same geometric argument we used to derive the effective mass of the pointlike charge as it whizzes around its center at lightspeed tells us the centripetal force acts over a distance that is equal to two times the radius. Indeed, the relevant formula for the centripetal force is this:

(5) F = (mγ/me)·(E/a) = E/2a

We can therefore reduce Eq. (4) by dividing it by 2π. We then get reduced, angular or circular (as opposed to linear) concepts:

(6) ħ = (p·λ)/(2π) = (me·c/2)·(λ/(2π)) = (me·c/2)·(2a) = me·c·a ⇔ ħ/a = me·c

We can verify the logic of our reasoning by substituting for the Compton radius:

ħ = (p·λ)/(2π) = me·c·a = me·c·ħ/(me·c) = ħ

IV. We can, finally, re-confirm the logic of our reasoning by re-deriving Einstein’s mass-energy equivalence relation as well as the Planck-Einstein relation using the ω = c/a and the ħ/a = me·c relations:

(7) ħ·ω = ħ·c/a = (ħ/a)·c = (me·c)·c = me·c2 = E

Of course, we note all of the formulas we have derived are interdependent. We, therefore, have no clear separation between axioms and derivations here. If anything, we are only explaining what Nature’s most fundamental laws (the Planck-Einstein relation and Einstein’s mass-energy equivalence relation) actually mean or represent. As such, all we have is a simple description of reality itself—at the smallest scale, of course! Everything that happens at larger scales involves Maxwell’s equations: that’s all electromagnetic in nature. No need for strong or weak forces, or for quarks—who invented that? Ehrenfest, Lorentz and all who struggled to truly understand de Broglie’s concept of the matter-wave might have been happier physicists had they seen these simple equations!
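For good measure, here is a quick numerical sketch – mine, just plugging in CODATA values; it is circular by construction, given the interdependence just mentioned, but it puts numbers on the quantities involved:

# Numerical check of the electron relations above: a = hbar/(m_e*c), omega = c/a,
# and E = hbar*omega should give back the electron's rest energy (~0.511 MeV).
hbar = 1.054571817e-34     # J*s
c    = 299792458.0         # m/s
m_e  = 9.1093837015e-31    # kg
eV   = 1.602176634e-19     # J

a     = hbar / (m_e * c)       # Compton radius (eq. 3b)
omega = c / a                  # angular frequency of the oscillation (eq. 2)
E     = hbar * omega           # Planck-Einstein relation (eqs. 1 and 7)

print(f"a     = {a:.4e} m")            # ~3.86e-13 m
print(f"omega = {omega:.4e} rad/s")    # ~7.76e20 rad/s
print(f"E     = {E/eV/1e6:.4f} MeV")   # ~0.511 MeV = m_e*c^2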

The gist of the matter is this: the intuition of Einstein and de Broglie in regard to the wave-nature of matter was, essentially, correct. However, de Broglie’s modeling of it as a wave packet was not: modeling matter-particles as some linear oscillation does not do the trick. It is extremely surprising that no one seems to have tried a circular oscillation instead. Indeed, the interpretation of the elementary wavefunction as representing the mentioned Zitterbewegung of the electric charge solves all questions: it amounts to interpreting the real and imaginary part of the elementary wavefunction as the sine and cosine components of the orbital motion of a pointlike charge. We think that, in our 60-odd papers, we’ve shown such an easy interpretation effectively does the trick of explaining all of the quantum-mechanical weirdness but, of course, it is up to our readers to judge that. 🙂

[1] See: John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169–172, Cambridge University Press, 1987 (quoted from Wikipedia). J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics and which he, therefore, did not receive (Nobel Prizes are not awarded posthumously). He was just 62 years old then.

Re-writing Feynman’s Lectures?

I have a crazy new idea: a complete re-write of Feynman’s Lectures. It would be fun, wouldn’t it? I would follow the same structure—but start with Volume III, of course: the lectures on quantum mechanics. We could even re-use some language—although we’d need to be careful so as to keep Mr. Michael Gottlieb happy, of course. 🙂 What would you think of the following draft Preface, for example?

The special problem we try to get at with these lectures is to maintain the interest of the very enthusiastic and rather smart people trying to understand physics. They have heard a lot about how interesting and exciting physics is—the theory of relativity, quantum mechanics, and other modern ideas—and spend many years studying textbooks or following online courses. Many are discouraged because there are really very few grand, new, modern ideas presented to them. The problem is whether or not we can make a course which would save them by maintaining their enthusiasm.

The lectures here are not in any way meant to be a survey course, but are very serious. I thought it would be best to re-write Feynman’s Lectures to make sure that most of the above-mentioned enthusiastic and smart people would be able to encompass (almost) everything that is in the lectures. 🙂

This is the link to Feynman’s original Preface, so you can see how my preface compares to his: same-same but very different, they’d say in Asia. 🙂

[…]

Doesn’t that sound like a nice project? 🙂

Jean Louis Van Belle, 22 May 2020

Post scriptum: It looks like we made Mr. Gottlieb and/or MIT very unhappy already: the link above does not work for us anymore (see what we get below). That’s very good: it is always nice to start a new publishing project with a little controversy. 🙂 We will have to use the good old paper print edition. We recommend you buy one too, by the way. 🙂 I think they are just a bit over US$100 now. Well worth it!

To set the historical record straight, the reader should note we started this blog before Mr. Gottlieb brought Feynman’s Lectures online. We actually wonder why he would be bothered by us referring to them. That’s what classical textbooks are for, aren’t they? They create common references to agree or disagree with, and why put a book online if you apparently don’t want it to be read or discussed? Noise like this probably means I am doing something right here. 🙂

Post scriptum 2: Done! Or, at least, the first chapter is done! Have a look: here is the link on ResearchGate and this is the link on Phil Gibbs’ site. Please do let me know what you think of it—whether you like it or not or, more importantly, what logic makes sense and what doesn’t. 🙂

[Screenshot referred to above: what we now get when following the link]

Classical principles of quantum physics

I summarized my 60-odd papers into one ‘manifesto’: it outlines what amounts to a full-blown classical interpretation of quantum mechanics. Have a look at it and let me know what you think! 🙂

I should probably do one last paper on quantum-mechanical tunneling and potential barriers, and on their counterpart, of course: potential wells. Indeed, the ring current model comes with a dynamic view of the fields surrounding charged particles. Potential barriers should, therefore, not be thought of as static fields: they vary in time. They are the joint result of two or more charges moving around. Hence, a particle breaking through a ‘potential wall’ or coming out of a potential ‘well’ probably just uses an opening which corresponds to a classical trajectory. However, it is not an easy analysis: it should be relativistically correct and we, therefore, need to describe the fields in terms of the vector potential and all that. I’ll need to look at the math again here. :-/

Post scriptum (1 June 2020): I just added an introduction to the paper. It talks about recent attempts to explain what might be going on inside of the atomic nucleus in terms of electromagnetic interactions only. Such analyses are usually referred to as an electromagnetic theory of nuclear interaction or – using more formidable language – nuclear lattice effective field theory (NLEFT) and they will, hopefully, gain much more acceptance in the future.[1] They should—because they make sense! 🙂

[1] Easily accessible references are, for example, Bernard Schaeffer (2016) or Paolo Di Sia (2018).

The wavefunction in a medium: amplitudes as signals

We finally did what we had wanted to do for a while: we produced a paper on the meaning of the wavefunction and wave equations in the context of an atomic lattice (think of a conductor or a semiconductor here). Unsurprisingly, we came to the following conclusions:

1. The concept of the matter-wave traveling through the vacuum, an atomic lattice or any medium can be equated to the concept of an electric or electromagnetic signal traveling through the same medium.

2. There is no need to model the matter-wave as a wave packet: a single wave – with a precise frequency and a precise wavelength – will do.

3. If we do want to model the matter-wave as a wave packet rather than a single wave with a precisely defined frequency and wavelength, then the uncertainty in such wave packet reflects our own limited knowledge about the momentum and/or the velocity of the particle that we think we are representing. The uncertainty is, therefore, not inherent to Nature, but to our limited knowledge about the initial conditions or, what amounts to the same, what happened to the particle(s) in the past.

4. The fact that such wave packets usually dissipate very rapidly, reflects that even our limited knowledge about initial conditions tends to become equally rapidly irrelevant. Indeed, as Feynman puts it, “the tiniest irregularities tend to get magnified very quickly” at the micro-scale.

In short, as Hendrik Antoon Lorentz noted a few months before his demise, there is, effectively, no reason whatsoever “to elevate indeterminism to a philosophical principle.” Quantum mechanics is just what it should be: common-sense physics.

The paper confirms intuitions we had highlighted in previous papers already, but uses the formalism of quantum mechanics itself to demonstrate this.

PS: We put the paper on academia.edu and ResearchGate as well, but Phil Gibbs’ site has easy access (no log-in or membership required). Long live Phil Gibbs!

Rutherford’s idea of an electron

Pre-scriptum (dated 27 June 2020): Two illustrations in this post were deleted by the dark force. We will not substitute them. The reference is given and it will help you to look them up yourself. In fact, we think it will greatly advance your understanding if you do so. Mr. Gottlieb may actually have done us a favor by trying to pester us.

Electrons, atoms, elementary particles and wave equations

The New Zealander Ernest Rutherford came to be known as the father of nuclear physics. He was the first to provide a reliable estimate of the order of magnitude of the size of the nucleus. To be precise, in the 1921 paper which we will discuss here, he came up with an estimate of about 15 fm for massive nuclei, which is the current estimate for the size of a uranium nucleus. His experiments also helped to significantly enhance the Bohr model of an atom, culminating – just before WW I started – in the Bohr-Rutherford model of an atom (E. Rutherford, Phil. Mag. 27, 488).

The Bohr-Rutherford model of an atom explained the (gross structure of the) hydrogen spectrum perfectly well, but it could not explain its finer structure—read: the orbital sub-shells which, as we now know (this was not well understood at the time), result from the different states of angular momentum of an electron and the associated magnetic moment.

The issue is probably best illustrated by the two diagrams below, which I copied from Feynman’s Lectures. As you can see, the idea of subshells is not very relevant when looking at the gross structure of the hydrogen spectrum because the energy levels of all subshells are (very nearly) the same. However, the Bohr model of an atom—which is nothing but an exceedingly simple application of the E = h·f equation (see p. 4-6 of my paper on classical quantum physics)—cannot explain the splitting of lines for a lithium atom, which is shown in the diagram on the right. Nor can it explain the splitting of spectral lines when we apply a stronger or weaker magnetic field while exciting the atoms so as to induce emission of electromagnetic radiation.
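To show just how simple that ‘exceedingly simple application’ is – the sketch below is illustrative and mine, not something taken from the paper – the gross structure follows from the Bohr energy levels En = −13.6 eV/n2 together with E = h·f:

# Illustrative sketch (not from the original post): the gross structure of the
# hydrogen spectrum from the Bohr levels E_n = -13.6 eV / n^2 and E = h*f.
h = 6.62607015e-34    # J*s
eV = 1.602176634e-19  # J
E_R = 13.605693       # Rydberg energy, eV

def E_n(n):           # Bohr energy level, in eV
    return -E_R / n**2

# Lyman-alpha: the n = 2 -> n = 1 transition
dE = E_n(2) - E_n(1)            # ~10.2 eV
f = dE * eV / h                 # ~2.47e15 Hz, i.e. the 121.6 nm line
print(f"Lyman-alpha: dE = {dE:.2f} eV, f = {f:.3e} Hz")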

Schrödinger’s wave equation solves that problem—which is why Feynman and other modern physicists claim this equation is “the most dramatic success in the history of the quantum mechanics” or, more modestly, a “key result in quantum mechanics” at least!

Such dramatic statements are exaggerated. First, an even finer analysis of the emission spectrum (of hydrogen or whatever other atom) reveals that Schrödinger’s wave equation is also incomplete: the hyperfine splitting, the Zeeman splitting (anomalous or not) or the (in)famous Lamb shift are to be explained not only in terms of the magnetic moment of the electron but also in terms of the magnetic moment of the nucleus and its constituents (protons and neutrons)—or of the coupling between those magnetic moments (we may refer to our paper on the Lamb shift here). This cannot be captured in a wave equation: second-order differential equations are – quite simply – not sophisticated enough to capture the complexity of the atomic system here.

Also, as we pointed out previously, the current convention in regard to the use of the imaginary unit (i) in the wavefunction does not capture the spin direction and, therefore, makes abstraction of the direction of the magnetic moment too! The wavefunction therefore models theoretical spin-zero particles, which do not exist. In short, we cannot hope to represent anything real with wave equations and wavefunctions.

More importantly, I would dare to ask this: what use is an ‘explanation’ in terms of a wave equation if we cannot explain what the wave equation actually represents? As Feynman famously writes: “Where did we get it from? Nowhere. It’s not possible to derive it from anything you know. It came out of the mind of Schrödinger, invented in his struggle to find an understanding of the experimental observations of the real world.” Our best guess is that it, somehow, models (the local diffusion of) energy or mass densities as well as non-spherical orbital geometries. We explored such interpretations in our very first paper(s) on quantum mechanics, but the truth is this: we do not think wave equations are suitable mathematical tools to describe simple or complex systems that have some internal structure—atoms (think of Schrödinger’s wave equation here), electrons (think of Dirac’s wave equation), or protons (which is what some others tried to do, but I will let you do some googling here yourself).

We need to get back to the matter at hand here, which is Rutherford’s idea of an electron back in 1921. What can we say about it?

Rutherford’s contributions to the 1921 Solvay Conference

From what you know, and from what I write above, you will understand that Rutherford’s research focus was not on electrons: his prime interest was in explaining the atomic structure and in solving the mysteries of nuclear radiation—most notably the emission of alpha- and beta-particles as well as highly energetic gamma-rays by unstable or radioactive nuclei. In short, the nature of the electron was not his main concern. However, this intellectual giant was, of course, very much interested in whatever experiment or theory might contribute to his thinking, and that explains why, in his contribution to the 1921 Solvay Conference—which materialized as an update of his seminal 1914 paper on The Structure of the Atom—he devotes considerable attention to Arthur Compton’s work on the scattering of light from electrons which, at the time (1921), had not even been published yet (Compton’s seminal article on (Compton) scattering was only published in 1923).

It is also very interesting that, in the very same 1921 paper—which, at some 30 pages, runs to a multiple of the length of his 1914 article and its later revisions (see, for example, the 1920 version, which actually has wider circulation on the Internet)—Rutherford also offers some short reflections on the magnetic properties of electrons while referring to Parson’s ring current model which, in French, he refers to as “l’électron annulaire de Parson.” Again, it is rather odd that we should have to translate Rutherford’s 1921 remarks back into English—as we are sure the original paper must have been translated from English into French rather than the other way around.

However, it is what it is, and so here we do what we have to do: we give you a free translation of Rutherford’s remarks during the 1921 Solvay Conference on the state of research regarding the electron at that time. The reader should note these remarks are buried in a larger piece on the emission of β particles by radioactive nuclei which, as it turns out, are nothing but high-energy electrons (or their anti-matter counterpart—positrons). In fact, we should—before we proceed—draw attention to the fact that the physicists at the time had no clear notion of the concepts of protons and neutrons.

This is, indeed, another remarkable historical contribution of the 1921 Solvay Conference because, as far as I know, this is the first time Rutherford talks about the neutron hypothesis. It is quite remarkable that he does not advance the neutron hypothesis to explain the atomic mass of atoms combining what we now think of as protons and neutrons (Rutherford regularly talks of a mix of ‘positive and negative electrons’ in the nucleus—neither the term proton nor neutron was in use at the time) but as part of a possible explanation of nuclear fusion reactions in stars or stellar nebulae. The hypothesis comes up in his response to a question from the French physicist Jean Baptiste Perrin during the discussion of Rutherford’s paper on the possibility of nuclear synthesis in stars or nebulae; Perrin, independently of the American chemist William Draper Harkins, had proposed the possibility of hydrogen fusion just the year before (1919):

“We can, in fact, think of enormous energies being released from hydrogen nuclei merging to form helium—much larger energies than what can come from the Kelvin-Helmholtz mechanism. I have been thinking that the hydrogen in the nebulae might come from particles which we may refer to as ‘neutrons’: these would consist of a positive nucleus with an electron at an exceedingly small distance (“un noyau positif avec un électron à toute petite distance”). These would mediate the assembly of the nuclei of more massive elements. It is, otherwise, difficult to understand how the positively charged particles could come together against the repulsive force that pushes them apart—unless we would envisage they are driven by enormous velocities.”

We may add that, just to make sure we get this right, Rutherford is immediately asked to elaborate his point by the Danish physicist Martin Knudsen: “What’s the difference between a hydrogen atom and this neutron?”—which Rutherford simply answers as follows: “In a neutron, the electron would be very much closer to the nucleus.” In light of the fact that it was only in 1932 that James Chadwick would experimentally prove the existence of the neutron, we are, once again, deeply impressed by the foresight of Rutherford and the other pioneers here: the predictive power of their theories and ideas is, effectively, truly amazing by any standard—including today’s. I should, perhaps, also add that I fully subscribe to Rutherford’s intuition that a neutron should be a composite particle consisting of a proton and an electron—but that’s a different discussion altogether.

We must come back to the topic of this post, which we will do now. Before we proceed, however, we should highlight one other contextual piece of information here: at the time, very little was known about the nature of α and β particles. We now know that beta-particles are electrons, and that alpha-particles combine two protons and two neutrons. The latter composition, in particular, could not have been known in the early 1920s—the neutron had not been discovered yet—and Rutherford and his associates could basically only tell positive from negative particles coming out of these radioactive processes. This further underscores how much knowledge they were able to gain from rather limited sets of data.

Rutherford’s idea of an electron in 1921

So here is the translation of some crucial text. Needless to say, the italics, boldface and additions between [brackets] are not Rutherford’s but mine, of course.

“We may think the same laws should apply in regard to the scattering [“diffusion”] of α and β particles. [Note: Rutherford noted, earlier in his paper, that, based on the scattering patterns and other evidence, the force around the nucleus must respect the inverse square law near the nucleus—moreover, it must also do so very near to it.] However, we see marked differences. Anyone who has carefully studied the trajectories [photographs from the Wilson cloud chamber] of beta-particles will note the trajectories show a regular curvature. Such curved trajectories are even more obvious when they are illuminated by X-rays. Indeed, A.H. Compton noted that these trajectories seem to end in a converging helical path turning right or left. To explain this, Compton assumes the electron acts like a magnetic dipole whose axis is more or less fixed, and that the curvature of its path is caused by the magnetic field [from the (paramagnetic) materials that are used].

Further examination would be needed to make sure this curvature is not some coincidence, but the general impression is that the hypothesis may be quite right. We also see similar curvature and helicity with α particles in the last millimeters of their trajectories. [Note: α-particles are, obviously, also charged particles but we think Rutherford’s remark in regard to α particles also following a curved or helical path must be exaggerated: the order of magnitude of the magnetic moment of protons and neutrons is much smaller and, in any case, they tend to cancel each other out. Also, because of the rather enormous mass of α particles (read: helium nuclei) as compared to electrons, the effect would probably not be visible in a Wilson cloud chamber.]

The idea that an electron has magnetic properties is still sketchy and we would need new and more conclusive experiments before accepting it as a scientific fact. However, it would surely be natural to assume its magnetic properties would result from a rotation of the electron. Parson’s ring electron model [“électron annulaire”] was specifically imagined to incorporate such magnetic polarity [“polarité magnétique”].

A very interesting question here would be to wonder whether such rotation would be some intrinsic property of the electron or whether it would just result from the rotation of the electron in its atomic orbital around the nucleus. Indeed, James Jeans usefully reminded me that any asymmetry in an electron should result in it rotating around its own axis at the same frequency as its orbital rotation. [Note: The reader can easily imagine this: think of an asymmetric object going around in a circle and returning to its original position. In order to return to the same orientation, it must rotate around its own axis exactly once too! See the one-line derivation right after this quotation.]

We should also wonder if an electron might acquire some rotational motion from being accelerated in an electric field and if such rotation, once acquired, would persist when decelerating in an(other) electric field or when passing through matter. If so, some of the properties of electrons would, to some extent, depend on their past.”
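The bracketed note on Jeans’ argument can be made precise in one line – our illustration, of course, not Rutherford’s. If an asymmetric feature of the electron keeps a fixed orientation relative to the radius vector of its orbit, its orientation angle must equal its orbital angle at all times, so the two frequencies necessarily coincide:

\[ \varphi_{\text{spin}}(t) = \theta_{\text{orbit}}(t) = \omega t \quad\Longrightarrow\quad f_{\text{spin}} = f_{\text{orbit}} = \frac{\omega}{2\pi}. \]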

Each and every sentence in these very brief remarks is wonderfully consistent with modern-day modelling of electron behavior—non-mainstream modelling, we should add, although the qualifier is almost superfluous: mainstream physicists stubbornly continue to pretend that electrons have no internal structure and no physical dimension. In light of the numerous experimental measurements of the effective charge radius, as well as of the dimensions of the physical space in which photons effectively interfere with electrons, such mainstream assumptions seem completely ridiculous. However, such is the sad state of physics today.

Thinking backward and forward

We think that it is pretty obvious that Rutherford and others would have been able to adapt their model of an atom to better incorporate the magnetic properties not only of electrons but also of the nucleus and its constituents (protons and neutrons). Unfortunately, scientists at the time seem to have been swept away by the charisma of Bohr, Heisenberg and others, as well as by the mathematical brilliance of the likes of Sommerfeld, Dirac, and Pauli.

The road that was then taken has not led us very far. We concur with Oliver Consa’s scathing but essentially correct appraisal of the current sorry state of physics:

“QED should be the quantized version of Maxwell’s laws, but it is not that at all. QED is a simple addition to quantum mechanics that attempts to justify two experimental discrepancies in the Dirac equation: the Lamb shift and the anomalous magnetic moment of the electron. The reality is that QED is a bunch of fudge factors, numerology, ignored infinities, hocus-pocus, manipulated calculations, illegitimate mathematics, incomprehensible theories, hidden data, biased experiments, miscalculations, suspicious coincidences, lies, arbitrary substitutions of infinite values and budgets of 600 million dollars to continue the game. Maybe it is time to consider alternative proposals. Winter is coming.”

I would suggest we just go back to where we went wrong: it may be warmer there, and thinking backward as well as forward must, in any case, be a much more powerful problem-solving technique than relying only on expert guesses about which linear differential equation(s) might give us some S-matrix linking all likely or possible initial and final states of some system or process. 🙂

Post scriptum: The sad state of physics is, of course, not limited to quantum electrodynamics only. We were briefly in touch with the PRad experimenters, who put an end to the rather ridiculous ‘proton radius puzzle’ by re-confirming the previously established 0.83–0.84 fm range for the effective charge radius of a proton. We sent them our own classical back-of-the-envelope calculation of the Compton scattering radius of a proton based on the ring current model (see pp. 15–16 of our paper on classical physics), which is in agreement with these measurements, and we courteously asked what alternative theories they were suggesting. Their spokesman replied equally courteously:

“There is no any theoretical prediction in QCD. Lattice [theorists] are trying to come up [with something] but that will take another decade before any reasonable  number [may come] from them.”

This e-mail exchange goes back to early February 2020. There has been no news since. One wonders if there is actually any real interest in solving puzzles. The physicist who wrote the above may have been nominated for a Nobel Prize in Physics—I surely hope so because, in contrast to some others, he and his team deserve one—but I find it rather incongruous to finally and firmly establish the size of the proton while, at the same time, admitting that protons should not have any size according to mainstream theory—and we are talking about the respected QCD sector of the equally respected Standard Model here!
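We will not reproduce the paper’s calculation here, but a minimal numeric sketch gives the flavor of it. The reduced Compton wavelength of the proton is about 0.21 fm; a factor of 4 (which we take as a modelling assumption for the purpose of this illustration, not as a QCD prediction) brings it into the measured 0.83–0.84 fm range.

```python
# Reduced Compton wavelength of the proton and a ring-current-style radius.
# hbar*c = 197.327 MeV*fm and m_p*c^2 = 938.272 MeV are standard values;
# the factor of 4 is an assumption of this sketch, not a mainstream result.
hbar_c = 197.327   # MeV*fm
m_p_c2 = 938.272   # MeV

compton_radius = hbar_c / m_p_c2   # ~0.21 fm
ring_radius = 4 * compton_radius   # ~0.84 fm, inside the 0.83-0.84 fm range

print(f"{compton_radius:.3f} fm -> {ring_radius:.3f} fm")
```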

We understand, of course! As Freddie Mercury famously sang: The Show Must Go On.

The self-appointed science gurus

Sean Carroll recently tweeted this:

[Screenshot of Sean Carroll’s tweet]

I couldn’t help giving him a straight answer. I actually like Sean Carroll, but I hate how he and others – think of John Gribbin, for example – appoint themselves as the only ‘gurus’ who are entitled to say something about grand theories or other ‘big ideas’: everyone else (read: all non-believers in QFT) is casually dismissed as a ‘crackpot scientist’.

In fact, a few weeks before, he had sent out a tweet promoting his own next ‘big ideas’, so I couldn’t help reminding him of the tweet above. 🙂

[Screenshot of Sean Carroll’s next tweet]

This is funny, and then it isn’t. The facts are these:

  1. The ‘new physics’ – the quantum revolution – started almost 100 years ago but doesn’t answer many fundamental questions (simply think about explaining spin and other intrinsic properties of matter-particles here).
  2. Geniuses like Einstein, Lorentz, Dirac and even Bell had serious doubts about the approach.
  3. Historical research shows theories and scientists were severely biased: see Dr. Consa’s review of quantum field theory in this regard.

I am very sorry, Dr. Carroll. You are much smarter than most – and surely much smarter than me – but here you show you are also plain arrogant. :-/ It’s this arrogance that has prevented a creative way out of the mess that fundamental physics finds itself in today. If you find yourself in a hole, stop digging!

The last words of H.A. Lorentz

I talked about the Solvay Conferences in my previous post(s). The Solvay Conference proceedings are a real treasure trove: not only are they very pleasant to read, but they also debunk more than one myth or mystery in quantum physics!

It is part of scientific lore, for example, that the 1927 Solvay Conference was a sort of battlefield over the new physics, pitting Heisenberg against Einstein. Surprisingly, the papers and the write-up of the discussions reveal that Einstein hardly intervened. They also reveal that ‘battlefield stories’ such as Heisenberg telling Einstein to “stop telling God what to do” or – vice versa – Einstein declaring “God doesn’t play dice” are what they are: plain gossip or popular hearsay. Neither Heisenberg nor Einstein said any such thing—not on the occasion of the 1927 Solvay Conference, at least! Instead, we see very nuanced and very deep philosophical statements—on both sides of the so-called ‘divide’ or ‘schism’.

From all the interventions, the one by the Dutch scientist Hendrik Antoon Lorentz stands out. I know (most of) my readers don’t get French, so an English rendering of his intervention is given below.

It is all very weird, emotional, and of real historical interest. H.A. Lorentz – clearly the driving force behind those pre-WW II Solvay Conferences – died a few months after the 1927 Conference. In fact, the 1927 conference proceedings contain both the sad announcement of his demise as well as his interventions—that is how long it took to physically print such proceedings at the time.

Here is his intervention in full, in translation:

GENERAL DISCUSSION OF THE NEW IDEAS PUT FORWARD.

Causality, Determinism, Probability.

Intervention by Mr. Lorentz:

“I would like to draw attention to the difficulties one encounters in the old theories. We want to form a representation of the phenomena, to form an image of them in our mind. Until now, we have always wanted to form these images by means of the ordinary notions of space and time. These notions may be innate; in any case, they have developed through our personal experience, through our daily observations. For me, these notions are clear, and I confess that I cannot form any idea of physics without them. The image I want to form of the phenomena must be absolutely sharp and definite, and it seems to me that we can only form such an image within this system of space and time.

For me, an electron is a corpuscle which, at a given instant, is located at a definite point in space, and if I have the idea that at a following moment this corpuscle is somewhere else, I must think of its trajectory, which is a line in space. And if this electron meets an atom and penetrates it, and if, after several adventures, it leaves that atom, I construct a theory in which this electron preserves its individuality; that is to say, I imagine a line along which this electron passes through that atom. It may well be, obviously, that such a theory is very difficult to develop, but a priori it does not seem impossible to me.

I imagine that, in the new theory, we still have these electrons. It is possible, of course, that in the new theory, once fully developed, it will be necessary to suppose that these electrons undergo transformations. I am quite willing to admit that the electron dissolves into a cloud. But then I will want to find out on what occasion this transformation occurs. If one wanted to forbid me such an inquiry by invoking a principle, that would bother me a great deal. It seems to me that one may always hope that we will later be able to do what we cannot yet do at this moment. Even if we abandon the old ideas, we can still keep the old names. I would like to preserve that old ideal of describing everything that happens in the world by means of sharp images. I am ready to accept other theories, on condition that they can be translated into clear and sharp images.

For my part, although I am not yet familiar with the new ideas I now hear being expressed, I could picture those ideas as follows. Take the case of an electron that meets an atom; suppose that the electron leaves the atom and that, at the same time, a light quantum is emitted. One must consider, first of all, the wave systems that correspond to the electron and to the atom before the collision. After the collision, we will have new wave systems. These wave systems can be described by a function ψ, defined in a space with a large number of dimensions, which satisfies a differential equation. The new wave mechanics will operate with this equation and will determine the function ψ before and after the collision.

Now, there are phenomena which teach us that there is something more than these waves, namely corpuscles; one can, for example, perform an experiment with a Faraday cylinder; we must therefore take into account the individuality of the electrons, and also of the photons. I think I would find that, in order to explain the phenomena, it suffices to admit that the expression ψψ* gives the probability that these electrons and photons exist in a given volume; that would be enough for me to explain the experiments.

But the examples given by Mr. Heisenberg teach me that I would thereby have attained everything that experiment allows me to attain. Now, I think that this notion of probability should come at the end, as a conclusion, of the theoretical considerations, and not as an a priori axiom, though I am quite willing to admit that this indeterminacy corresponds to the experimental possibilities. I could still keep my deterministic faith for the fundamental phenomena, of which I have not spoken. Could a deeper mind not be aware of the motions of these electrons? Could one not keep determinism by making it the object of a belief? Must we necessarily elevate indeterminism to a principle?”
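As an aside: the ψψ* expression Lorentz refers to is what we now call the Born rule, which we may write (for a single particle) as

\[ P(\mathbf{r},t)\,dV = \psi^*(\mathbf{r},t)\,\psi(\mathbf{r},t)\,dV = |\psi(\mathbf{r},t)|^2\,dV, \qquad \int |\psi|^2\,dV = 1. \]

Lorentz’s point is not that this rule is wrong: it is that it should come at the end of the theoretical considerations, as a conclusion, rather than be put in as an a priori axiom.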

The final question – which I have put in bold italics above – deserves to be repeated in a free rendering:

Why should we elevate determinism – or, as Born and Heisenberg do, its opposite (indeterminism) – to a philosophical principle?

What a beautiful statement! Lorentz died of a very trivial cause: erysipelas, commonly known as St Anthony’s fire. :-/

Where things went wrong, exactly!

As mentioned in my previous post, Oliver Consa traces all of the nonsense in modern physics back to the Shelter Island (1947), Pocono (1948) and Oldstone (1949) Conferences. However, the first Solvay Conference that was organized after WW II (in 1948) was quite significant too. Niels Bohr and Robert Oppenheimer pretty much dominated it: Bohr by providing the introductory lecture ‘On the Notions of Causality and Complementarity’, and Oppenheimer by presenting the ‘Electron Theory’ paper that set the tone for subsequent Solvay Conferences—most notably the one that would consecrate quantum field theory (QFT), which was held 13 years later (1961).

Indeed, the discussion between Oppenheimer and Dirac on the ‘Electron Theory’ paper in 1948 seems to be where things might have gone wrong—in terms of the ‘genealogy’ or ‘archaeology’ of modern ideas, so to speak. In fact, both Oppenheimer and Dirac made rather historic blunders there:

  1. Oppenheimer uses perturbation theory to arrive at some kind of ‘new’ model of an electron, based on Schwinger’s new QFT models—which, as we now know, do not really lead anywhere.
  2. Dirac, however, is just too stubborn: he simply keeps defending his indefensible electron equation—which, of course, doesn’t lead anywhere either. [It is rather significant that he was no longer invited to the next Solvay Conference.]

It is, indeed, very weird that Dirac does not follow through on his own conclusion: “Only a small part of the wave function has a physical meaning. We now have the problem of picking out that very small physical part of the exact solution of the wave equation.”

It’s the ring current or Zitterbewegung electron, of course—the one ‘trivial’ solution he thought was so significant in his 1933 Nobel Prize lecture. The other part of the solution(s) consists, effectively, of bizarre oscillations, which he refers to as ‘run-away electrons’.
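For scale – this is a standard textbook number, not something taken from the Solvay proceedings – the frequency of the Zitterbewegung oscillation that comes out of Dirac’s equation is of the order of

\[ \omega = \frac{2 m_e c^2}{\hbar} \approx \frac{2 \times 0.511\ \text{MeV}}{6.58 \times 10^{-22}\ \text{MeV}\cdot\text{s}} \approx 1.55 \times 10^{21}\ \text{rad/s}. \]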

It’s nice to sort of ‘get’ this. 🙂

Tracing good and bad ideas

Today I decided to look for the original Solvay Conference papers, which were digitized by the libraries of the Free University of Brussels: here is the link to them. I quickly went through the famous 1927 and 1930 Conferences (Einstein did not attend the 1933 Conference, nor the 1921 Conference) but, to my great consternation, I found no trace of those so-called ‘heated discussions’ between Heisenberg and Einstein.

A few critical questions here and there, yes, but I don’t see anything even vaguely resembling an ‘ardent debate’ or a so-called ‘Bohr-Einstein controversy’. Am I mistaken—or am I missing something?

The fact that it’s all in French is quite interesting, and may explain why Einstein’s interventions are rare (I am not sure what language was actually spoken: the physicists of that era were multilingual, weren’t they?). The remarks of the French physicist Léon Brillouin, for example, are quite sharp but not widely known, it seems.

Funny remarks like Heisenberg telling Einstein to ‘stop telling God what to do’ are surely not there! Are they folklore? Would anyone know whether these remarks are documented somewhere? I am just trying to trace those historical moments in the evolution of thought and science… 🙂

Things like this make me think a great deal of the ‘controversy’ between old (classical) and new (quantum) physics is actually just hype rather than reality. One of my readers sent me a link to a very interesting article in the LA Times in this regard. It’s a quick but very worthwhile read, showing that it’s not only physics that suffers from the ‘need to sell’ real or non-existent results: here is the link—have a look!

In fact, I realize I am still looking for some kind of purpose for my new site. Perhaps I should dedicate it to research like this—separating fact from fiction in the history of ideas?

PS: I just checked the Wikipedia article on Heisenberg’s quotes, and it seems Heisenberg’s “stop telling God what to do” is, effectively, disputed! Interesting but, in light of its frequent use, also quite shocking, I would think.

PS 2: I jotted down the following based on a very quick scan of these Solvay Conferences:

Dr. Oliver Consa starts his scathing history of the sorry state of modern-day physics as follows:

“After the end of World War II, American physicists organized a series of three transcendent conferences for the development of modern physics: Shelter Island (1947), Pocono (1948) and Oldstone (1949). These conferences were intended to be a continuation of the mythical Solvay conferences. But, after World War II, the world had changed. The launch of the atomic bombs in Hiroshima and Nagasaki (1945), followed by the immediate surrender of Japan, made the Manhattan Project scientists true war heroes. Physicists were no longer a group of harmless intellectuals; they had become the powerful holders of the secrets of the atomic bomb.”[1]

Secrets that could not be kept, of course. The gatekeepers did their best, however. Julius Robert Oppenheimer was, effectively, one of them. The history of Oppenheimer – father of the atomic bomb and prominent pacifist at the same time – is well known.

It is actually quite interesting to note that the Solvay Conferences continued after WW II, and that Niels Bohr and Robert Oppenheimer pretty much dominated the very first post-WW II Solvay Conference, which was held in 1948: Bohr by providing the introductory lecture ‘On the Notions of Causality and Complementarity’[2], and Oppenheimer by presenting the ‘Electron Theory’ paper that set the tone for subsequent Solvay Conferences—most notably the one that would consecrate quantum field theory (QFT), which was held 13 years later (1961).[3]

Significantly, Paul Dirac is pretty much the only one asking Oppenheimer critical questions. As for Albert Einstein, I find it rather strange that – despite being a member of the scientific committee[4] – he hardly intervenes in the discussions. It makes me think he had actually lost interest in the development of quantum theory.

Even more significant is the fact that Dirac was not invited to the 1951 Solvay Conference, nor even mentioned in its proceedings.

[1] Oliver Consa, Something is rotten in the state of QED, February 2020.

[2] See the 1948 Solvay Conference report on the ULB’s digital archives.

[3] Institut international de physique Solvay (1962). La théorie quantique des champs: douzième Conseil de physique, tenu à l’Université libre de Bruxelles du 9 au 14 octobre 1961.

[4] Einstein was a member of the Solvay scientific committee from the very first conference (1911) – representing, in typical style, a country (Austria, not Germany) rather than an institution, or just being a member in some personal capacity – till 1948. He was not a member of the 1951 scientific committee. The reason might well be age or a lack of interest, of course: Einstein was 72 years old in 1951, and would die four years later (1955).

The difference between a theory and an explanation

That’s a weird title, isn’t it? It is the title of a fun paper (fun for me, at least—I hope for you too), in which I try to show where quantum mechanics went wrong, and why and when the job of both the academic physicist and the would-be student of quantum mechanics turned into calculating rather than explaining what might or might not be happening.

Modern quantum physicists are, effectively, like economists modeling input-output relations: if they are lucky, they get some kind of mathematical description of what goes in and what goes out of a process or an interaction, but the math doesn’t tell them how stuff actually happens.

So this paper of ours talks about that—in a very detailed way, actually—and then we bring the Zitterbewegung electron model and our photon model together to provide a classical explanation of Compton scattering of photons by electrons so as to show what electron-photon interference might actually be: two electromagnetic oscillations interfering (classically) with each other.
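Whatever the interpretation, any model – classical or otherwise – has to reproduce the measured Compton shift, which is given by the standard formula:

\[ \lambda' - \lambda = \frac{h}{m_e c}\,(1 - \cos\theta), \qquad \frac{h}{m_e c} \approx 2.426\ \text{pm}. \]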

The whole thing also offers some reflections on the nature of the Uncertainty Principle.

Here is the link on the academia.edu site! In case you do not have an academia.edu identity, here’s the link to the paper on Phil Gibbs’ alternative science site.

Enjoy! 🙂 When everything is said and done, the mystery of quantum mechanics is this: why is an electron an electron, and why is a proton a proton? 🙂

PS: I am sure you think my last statement is nonsensical. If so, I invite you to think again. Whoever can explain the electron-proton mass ratio will be able to explain the difference between the electromagnetic and the strong force. In other words, he or she will be able to connect the electromagnetic and the strong ‘sector’ of a classical interpretation of quantum mechanics. 🙂
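For the record, the number to be explained is this dimensionless ratio:

\[ \frac{m_p}{m_e} \approx \frac{938.272\ \text{MeV}/c^2}{0.511\ \text{MeV}/c^2} \approx 1836.15. \]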

Explaining the Lamb shift in classical terms

The coronavirus is bad, but it does have one advantage: more time to work on my hobby! I finally managed to have a look at what the (in)famous Lamb shift may or may not be. Here is the link to the paper.

I think it’s good. Why? Well… It’s that other so-called ‘high-precision test’ of mainstream quantum mechanics (read: quantum field theory), but I found it’s just like the rest: ‘Cargo Cult Science’. [I must acknowledge a fellow amateur physicist and blogger for that reference: it is, apparently, a term coined by Richard Feynman!]
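For context – these are standard textbook numbers, not results taken from our paper – the splitting in question, between the 2S1/2 and 2P1/2 levels of hydrogen, is about 1058 MHz, which corresponds to only a few millionths of an electronvolt:

```python
# Order-of-magnitude check: energy corresponding to the ~1058 MHz Lamb shift.
h_eV_s = 4.135667e-15   # Planck constant in eV*s
f_Hz = 1.058e9          # 2S1/2 - 2P1/2 splitting in hydrogen, ~1058 MHz

E_eV = h_eV_s * f_Hz    # ~4.4e-6 eV
print(f"Lamb shift energy: {E_eV:.2e} eV")
```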

To All: Enjoy, and please keep up the good work in these very challenging times!

🙂

Making sense of it all

In recent posts, we have been very harsh in criticizing mainstream academics for not even trying to make sense of quantum mechanics—labeling them as mystery wallahs or, worse, as Oliver Consa does, as frauds. While we think the latter criticism is fully justified – we can and should now think of some of the people we used to admire as frauds – I think we should also acknowledge that most professional physicists are actually doing what we are all doing, and that is to, somehow, try to make sense of it all. Nothing more, nothing less.

However, they are largely handicapped, and we are not: we can say or write whatever we want, and we do not need to think about editorial lines. In other words: we are free to follow logic and practice real science. Let me insert a few images here to lighten the discussion. One is a cartoon from the web and the other was sent to me by a friendly academic. As for the painting, if you don’t know him already, you should find out for yourself. 🙂

Both mainstream and non-mainstream insiders and outsiders are having very heated discussions nowadays. When joining such discussions, I think we should start by acknowledging that Nature is actually difficult to understand: if it were easy, we would not be struggling with it. Hence, anyone who wants you to believe that it is actually all easy and self-evident is a mystery wallah or a fraud too—at the other end of the spectrum!

For example, I really do believe that the ring current model of elementary particles elegantly combines wave-particle duality and, therefore, avoids countless dichotomies (such as the boson-fermion dichotomy, for example) that have hampered mankind’s understanding of what an elementary particle might actually be. At the same time, I also acknowledge that the model raises its own set of very fundamental questions (see our paper on the nature of antimatter and some other unresolved issues) and can, therefore, be challenged as well. In short, I don’t want to come across as being religious about our own interpretation of things because it is what it is: an interpretation of things we happen to believe in. Why? Because it happens to come across as being more rational, simpler or – to use Dirac’s characterization of a true theory – just beautiful.
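To illustrate what we mean by ‘elegantly’ – a minimal sketch under assumptions that Zitterbewegung-style models typically make (a pointlike charge circling at light speed at the reduced Compton radius), not something mainstream theory would endorse – the electron’s magnetic moment then comes out as the Bohr magneton in a few lines of classical algebra:

\[ a = \frac{\hbar}{m_e c} \approx 0.386\ \text{pm}, \qquad I = q_e f = \frac{q_e c}{2\pi a}, \qquad \mu = I \cdot \pi a^2 = \frac{q_e c\, a}{2} = \frac{q_e \hbar}{2 m_e} \approx 9.27 \times 10^{-24}\ \text{J/T}. \]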

So why are we having so much trouble accepting the Copenhagen interpretation of quantum mechanics? Why are we so shocked by Consa’s story on man’s ambition in this particular field of human activity—as opposed to, say, politics or business? It’s because people like you and me thought these men were like us—much cleverer, perhaps, but, otherwise, totally like us: people searching for truth—or some basic version of it, at least! That’s why Consa’s conclusion hurts us so much:

“QED should be the quantized version of Maxwell’s laws, but it is not that at all. […] QED is a bunch of fudge factors, numerology, ignored infinities, hocus-pocus, manipulated calculations, illegitimate mathematics, incomprehensible theories, hidden data, biased experiments, miscalculations, suspicious coincidences, lies, arbitrary substitutions of infinite values and budgets of 600 million dollars to continue the game.”

Amateur physicists like you and me thought we were just missing something: some glaring (in)consistency in their or our theories which we just couldn’t see but that, inevitably, we would suddenly stumble upon while wracking our brains trying to grind through it all. We naively thought all of the sleepless nights, all the agony and all the sacrifices in terms of time and trouble would pay off, one day, at least! But, no: we’ve been wasting countless years trying to understand something which one can’t understand anyway—something which is, quite simply, not true. It was nothing but a bright shining lie, and our anger is, therefore, fully justified. It sure did not do much to improve our mental and physical well-being, did it?

Such indignation may be justified but it doesn’t answer the more fundamental question: why did we even bother? Why are we so passionate about these things? Why do we feel that the Copenhagen interpretation cannot be right? One reason, of course, is that we were never alone here: the likes of Einstein, Dirac, and even Bell told us all along. Now that I think of it, all mainstream physicists that I know are critical of us – amateur physicists – but, at the same time, they are also openly stating that the Standard Model isn’t satisfactory—and I am really thinking of mainstream researchers here: the likes of Zwiebach, Hossenfelder, Smolin, Gasparan, Batelaan, Pohl and so many others. They are all into string theory or else trying to disprove this or that quantum-mechanical theorem. [Batelaan’s research on the exchange of momentum in the electron double-slit experiment, for example, is very interesting in this regard.]

In fact, now that I think of it: can you give me one big name who is actually passionate about the Standard Model—apart from one or two Nobel Prize winners who got an undeserved prize for it? If no one thinks it can be right, then why can’t we just accept that it isn’t?

I’ve come to the conclusion that the ingrained abhorrence – of professional as well as amateur physicists – is rooted in this: the Copenhagen interpretation amounts to a surrender of reason. It is, therefore, not science, but religion. Stating that it is a law of Nature that even experts cannot possibly understand Nature “the way they would like to”, as Richard Feynman put it, is both intuitively and rationally unacceptable.

Intuitively—and rationally? That’s a contradictio in terminis, isn’t it? We don’t think so. I think this is an outstanding example of a locus in our mind where intuition and rationality do meet each other.

Matter and antimatter

Matter and antimatter: what’s the difference? The charge, of course: positive versus negative. Yes. Of course! But what lies beyond that? Our ring current model offers a geometric explanation of the properties of elementary particles, so we thought we might try our hand at a geometric explanation of the difference between matter and antimatter too. Have a look at the paper. It’s kinda primitive, but I need to start somewhere, right? 🙂