The nature of time: an easy explanation of relativity

My manuscript offers a somewhat sacrilegious but intuitive explanation of (special) relativity theory (The Emperor Has No Clothes: the force law and relativity, pp. 24-27). It is one of my lighter and more easily accessible pieces of writing. The argument is based on the idea that we may define infinity, or infinite velocities, as some kind of limit (or some kind of limiting idea), but that we cannot really imagine it: it leads to all kinds of logical inconsistencies.

Let me give you a very simple example here to illustrate these inconsistencies: if something is traveling at an infinite velocity, then it is everywhere and nowhere at the same time, and no theory of physics can deal with that.

Now, if I had to rewrite that brief introduction to relativity theory, I would probably add another logical argument – one that is based on our definition or notion of time itself. What is the definition of time, indeed? When you think long and hard about this, you will have to agree that we can only measure time with reference to some fundamental cycle in Nature, right? It used to be the seasons, or the days and nights. Later, we subdivided the day into hours, and now we have atomic clocks. Whatever you can count, and meaningfully communicate to some other intelligent being who happens to observe the same cyclical phenomenon, works just fine, right?

Hence, suppose we were able to communicate with some other intelligent being in outer space. We may or may not know his position, but both he/she/it (let us think of a male Martian for ease of reference) and we are broadcasting our frequency- or amplitude-modulated signals widely enough to ensure ongoing communication. We would then probably be able to converge on a definition of time in terms of the fundamental frequency of an elementary particle – let us say an electron, to keep things simple. We could, therefore, agree on an experiment in which he – after receiving a pre-agreed start signal from us – would start counting and send us a stop signal back after, say, three billion electron cycles (not approximately, of course, but three billion exactly). In the meantime, we would, of course, be able to verify that, between sending the start signal and receiving the stop signal (and taking into account the time these signals need to travel between him and us), his clock seems to run somewhat differently than ours.

So that is the amazing thing, really. Our Martian uses the same electron clock but, because of our motion relative to his, we conclude that his clock runs somewhat differently, and Einstein’s (special) relativity theory tells us how, exactly: time dilation, as given by the Lorentz factor.
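The arithmetic behind this thought experiment can be sketched in a few lines of Python. This is a sketch under stated assumptions: the relative velocity of 0.6c is purely illustrative, and the ‘electron clock’ rate is taken here to be the electron’s Compton frequency f = mc²/h.

```python
import math

# CODATA constants (SI units)
c = 299792458.0          # speed of light [m/s]
h = 6.62607015e-34       # Planck's constant [J·s]
m_e = 9.1093837015e-31   # electron mass [kg]

# The electron's fundamental (Compton) frequency: f = E/h = m·c²/h
f_electron = m_e * c**2 / h   # ≈ 1.236e20 Hz

# Assumed relative velocity of the Martian: 0.6c (illustrative only)
beta = 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor: 1.25

# He counts exactly 3 billion cycles on his electron clock...
his_cycles = 3e9
his_time = his_cycles / f_electron       # his elapsed (proper) time [s]

# ...but, after correcting for signal travel times, the corresponding
# interval on our identical electron clock is longer by the factor gamma:
our_cycles = gamma * his_cycles          # 3.75 billion cycles
print(gamma, our_cycles)
```

At 0.6c the effect is large (25%); at everyday speeds the Lorentz factor differs from 1 only somewhere around the 15th decimal, which is why we never notice it.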

Does this explanation make it any easier to truly understand relativity theory? Maybe. Maybe not. For me, it does, because what I am describing here is nothing but the results of the Michelson-Morley experiment in a slightly more amusing context which, for some reason I do not quite understand, seems to make them more comprehensible. At the very least, it shows Galilean relativity is as incomprehensible – or as illogical or non-intuitive, I should say – as the modern-day concept of relativity as pioneered by Albert Einstein.

You may now think (or not): OK, but what about relativistic mass? That concept is, and will probably forever remain, non-intuitive. Right? Time dilation and length contraction are fine, because we can now somehow imagine the what and why of this, but how do you explain relativistic mass, really?

The only answer I can give you here is to think some more about Newton’s law: mass is a measure of inertia – that is, of the resistance to a change in the state of motion of an object. Motion – and, therefore, your measurement of any acceleration or deceleration (i.e. a change in the state of motion) – will depend on how you measure time and distance too. Therefore, mass has to be relativistic too.
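A minimal numerical sketch of this argument (the velocities are illustrative choices, not measurements): the same Lorentz factor that dilates time also scales the measured inertia, so the momentum is p = γ·m₀·v rather than m₀·v.

```python
import math

c = 299792458.0  # speed of light [m/s]

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v²/c²)."""
    return 1.0 / math.sqrt(1.0 - (v / c)**2)

m0 = 9.1093837015e-31  # electron rest mass [kg]

# The inertia (relativistic mass) grows with velocity:
for beta in (0.1, 0.6, 0.8, 0.99):
    v = beta * c
    m_rel = gamma(v) * m0      # relativistic mass
    p = m_rel * v              # relativistic momentum p = γ·m₀·v
    print(f"v = {beta}c: m/m0 = {gamma(v):.4f}")
```

At 0.6c the inertia is already 25% larger; at 0.99c it is about seven times the rest mass.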

QED: quod erat demonstrandum. In fact, it is not a proof, so I should not say it’s QED. It’s SE: a satisfactory explanation. Why is it an explanation and not a proof? Because I take the constant speed of light for granted, and so I kinda derive the relativity of time, distance and mass from my point of departure (both figuratively and literally speaking, I’d say).

Post scriptum: For the mentioned calculation, we do need to know the (relative) position of the Martian, of course. Any event in physics is defined by both its position as well as its timing. That is what (also) makes it all very consistent, in fact. I should also note this short story here (I mean my post) is very well aligned with Einstein’s original 1905 article, so you can (also) go there to check the math. The main difference between his article and my explanation here is that I take the constant speed of light for granted, and then all that’s relative derives its relativity from that. Einstein looked at it the other way around, because things were not so obvious then. 🙂

The End of Physics

There is an army of physicists out there – still – trying to convince you there is still some mystery that needs explaining. They are wrong: quantum-mechanical weirdness is weird, but it is not some mystery. We have a decent interpretation of what quantum-mechanical equations – such as Schrödinger’s equation, for example – actually mean. We can also understand what photons, electrons, or protons – light and matter – actually are, and such understanding can be expressed in terms of 3D space, time, force, and charge: elementary concepts that feel familiar to us. There is no mystery left.

Unfortunately, physicists have completely lost it: they have multiplied concepts and produced a confusing but utterly unconvincing picture of the essence of the Universe. They promoted weird mathematical concepts – the quark hypothesis is just one example among others – and gave them some kind of reality status. The Nobel Prize Committee then played the role of the Vatican by canonizing the newfound religion.

It is a sad state of affairs, because we are surrounded by too many lies already: the ads and political slogans that shout in our face as soon as we log on to Facebook to see what our friends are up to, or to YouTube to watch something or – what I often do – listen to the healing sounds of music.

The language and vocabulary of physics are complete. Does it make us happier beings? It should, shouldn’t it? I am happy I understand. I find consciousness fascinating – self-consciousness even more – but not because I think it is rooted in mystery. No. Consciousness arises from the self-organization of matter: order arising from chaos. It is a most remarkable thing – and it happens at all levels: atoms in molecules, molecules forming cellular systems, cellular systems forming biological systems. We are a biological system which, in turn, is part of much larger systems: biological, ecological – material systems. There is no God talking to us. We are on our own, and we must make the best out of it. We have everything, and we know everything.

Sadly, most people do not realize this.

Post scriptum: With the end of physics comes the end of technology as well, doesn’t it? All of the advanced technologies in use today are effectively already described in Feynman’s Lectures on Physics, which were written and published in the first half of the 1960s.

I thought about possible counterexamples, like optical-fiber cables, or the equipment that is used in superconducting quantum computing, such as Josephson junctions. But Feynman already describes Josephson junctions in the last chapter of his Lectures on Quantum Mechanics, which is a seminar on superconductivity. And fiber-optic cable is, essentially, a waveguide for light, which Feynman describes in great detail in Chapter 24 of his Lectures on Electromagnetism and Matter. Needless to say, computers were also already there, and Feynman’s lecture on semiconductors has all you need to know about modern-day computing equipment. [In case you briefly thought about lasers, the first laser was built in 1960, and Feynman’s lecture on masers describes lasers too.]

So it is all there. I was born in 1969, when Man first walked on the Moon. CERN and other spectacular research projects have since been established, but, when one is brutally honest, one has to admit these experiments have not added anything significant – neither to the knowledge nor to the technology base of humankind (and, yes, I know your first instinct is to disagree with that, but that is because your studies or the media indoctrinated you that way). It is a rather strange thought, but I think it is essentially correct. Most scientists, experts and commentators are trying to uphold a totally fake illusion of progress.

Mental categories versus reality

Pre-scriptum: For those who do not like to read, I produced a very short YouTube presentation/video on this topic. About 15 minutes – same time as it will take you to read this post, probably. Check it out: https://www.youtube.com/watch?v=sJxAh_uCNjs.

Text:

We think of space and time as fundamental categories of the mind. And they are, but only in the sense that the famous Dutch physicist H.A. Lorentz conveyed to us: we do not seem to be able to conceive of any idea in physics without these two notions. However, relativity theory tells us these two concepts are not absolute and we may, therefore, say they cannot be truly fundamental. Only Nature’s constants – the speed of light, or Planck’s quantum of action – are absolute: these constants seem to mix space and time into something that is, apparently, more fundamental.

The speed of light (c) combines the physical dimensions of space and time, and Planck’s quantum of action (h) adds the idea of a force. But time, distance, and force are all relative. Energy (force over a distance) and momentum (force times time) are, therefore, also relative. In contrast, the speed of light and Planck’s quantum of action are absolute. So we should think of distance, and of time, as some kind of projection of a deeper reality: the reality of light or – in the case of Planck’s quantum of action – the reality of an electron or a proton. In contrast, time, distance, force, energy, momentum and whatever other concept we would derive from them exist in our mind only.
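A one-photon example makes this concrete (the frequency below – roughly that of green light – is an arbitrary illustrative choice): the photon’s energy E = h·f and momentum p = h/λ are both frame-dependent quantities, but their ratio is always the invariant c.

```python
c = 299792458.0      # speed of light [m/s]
h = 6.62607015e-34   # Planck's constant [J·s]

f = 5.0e14           # light frequency [Hz] — illustrative choice
lam = c / f          # wavelength λ = c/f [m]

E = h * f            # photon energy (Planck-Einstein relation) [J]
p = h / lam          # photon momentum [kg·m/s]

# E and p are relative; their ratio is the absolute constant c:
print(E / p)         # equal to c, up to floating-point rounding
```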

We should add another point here. To imagine the reality of an electron or a proton (or the idea of an elementary particle, you might say), we need an additional concept: the concept of charge. The elementary charge (e) is, effectively, a third idea (or category of the mind, one might say) without which we cannot imagine Nature. The ideas of charge and force are, of course, closely related: a force acts on a charge, and a charge is that upon which a force is acting. So we cannot think of charge without thinking of force, and vice versa. But, as mentioned above, the concept of force is relative: it incorporates the ideas of time and distance (a force is what accelerates a charge). In contrast, the idea of the elementary charge is absolute again: it does not depend on our frame of reference.

So we have three fundamental concepts: (1) velocity (or motion, you might say: a ratio of distance and time); (2) (physical) action (force times distance times time); and (3) charge. We measure them in three fundamental units: c, h, and e. Che. 🙂 So that’s reality, then: all of the metaphysics of physics are here. In three letters. We need three concepts: three things that we think of as being real, somehow. Real in the sense that we do not think they exist in our mind only. Light is real, and elementary particles are equally real. All other concepts exist in our mind only.

So were Kant’s ideas about space and time wrong? Maybe. Maybe not. If they are wrong, then that’s quite OK: Immanuel Kant lived in the 18th century, and had not ventured much beyond the place where he was born. Less exciting times. I think he was basically right in saying that space and time exist in our mind only. But he had no answer(s) to the question as to what is real: if some things exist in our mind only, something must exist in what is not our mind, right? So that is what we refer to as reality then: that which does not exist in our mind only.

Modern physics has the answers. The philosophy curriculum at universities should, therefore, adapt to modern times: Maxwell first derived the (absolute) speed of light in 1862, and Einstein published the (special) theory of relativity back in 1905. Hence, philosophers are 100-150 years behind the curve. They are probably even behind the general public. Philosophers should learn about modern physics as part of their studies so they can (also) think about real things rather than mental constructs only.

Form and substance

Philosophers usually distinguish between form and matter, rather than form and substance. Matter, as opposed to form, is then what is supposed to be formless. However, if there is anything that physics – as a science – has taught us, it is that matter is defined by its form: in fact, it is the form factor which explains the difference between, say, a proton and an electron. So we might say that matter combines substance and form.

Now, we all know what form is: it is a mathematical quality—like the quality of having the shape of a triangle or a cube. But what is (the) substance that matter is made of? It is charge. Electric charge. It comes in various densities and shapes – that is why we think of it as being basically formless – but we can say a few more things about it. One is that it always comes in the same unit: the elementary charge—which may be positive or negative. Another is that the concept of charge is closely related to the concept of a force: a force acts on a charge—always.

We are talking elementary forces here, of course—the electromagnetic force, mainly. What about gravity? And what about the strong force? Attempts to model gravity as some kind of residual force, and the strong force as some kind of electromagnetic force with a different geometry but acting on the very same charge, have not been successful so far—but we should immediately add that mainstream academics never focused on it either, so the result may be commensurate with the effort made: nothing much.

Indeed, Einstein basically explained gravity away by giving us a geometric interpretation of it (general relativity theory) which, as far as I can see, confirms it may be some residual force resulting from the particular layout of positive and negative charge in electrically neutral atomic and molecular structures. As for the strong force, I believe the quark hypothesis – which basically states that partial (non-elementary) charges are, somehow, real – has led mainstream physics into the dead end it finds itself in now. Will it ever get out of it?

I am not sure. It does not matter all that much to me. I am not a mainstream scientist and I have the answers I was looking for. These answers may be temporary, but they are the best I have for the time being. The best quote I can think of right now is this one:

‘We are in the words, and at the same time, apart from them. The words spin out, spin us out, over a void. There, somewhere between us, some words form some answer for some time, allowing us to live more fully in the forgetting face of nonexistence, in the dissolving away of each other.’ (Jacques Lacan, in Jeremy D. Safran (2003), Psychoanalysis and Buddhism: an unfolding dialogue, p. 134)

That says it all, doesn’t it? For the time being, at least. 🙂

Post scriptum: You might think explaining gravity as some kind of residual electromagnetic force should be impossible, but explaining the attractive force between like charges inside a nucleus was pretty difficult as well, until someone came up with a relatively simple model based on the idea of ring currents. 🙂

Feynman’s religion

Perhaps I should have titled this post differently: the physicist’s worldview. We may, effectively, assume that Richard Feynman’s Lectures on Physics represent mainstream sentiment, and he does get into philosophy—more or less liberally depending on the topic. Hence, yes, Feynman’s worldview is pretty much that of most physicists, I would think. So what is it? One of his more succinct statements is this:

“Often, people in some unjustified fear of physics say you cannot write an equation for life. Well, perhaps we can. As a matter of fact, we very possibly already have an equation to a sufficient approximation when we write the equation of quantum mechanics.” (Feynman’s Lectures, p. II-41-11)

He then jots down that equation which Schrödinger has on his grave (shown below). It is a differential equation: it relates the wavefunction (ψ) to its time derivative through the Hamiltonian coefficients that describe how physical states change with time (Hij), the imaginary unit (i) and Planck’s quantum of action (ħ).

[Image: Schrödinger’s gravestone in Alpbach, showing the equation iħ·ψ̇ = Hψ]

Feynman, and all modern academic physicists in his wake, claim this equation cannot be understood. I don’t agree: the explanation is not easy, and requires quite some prerequisites, but it is no more difficult than, say, trying to understand Maxwell’s equations, or the Planck-Einstein relation (E = ħ·ω = h·f).

In fact, a good understanding of both allows you to not only understand Schrödinger’s equation but all of quantum physics. The basics are this: the presence of the imaginary unit tells us the wavefunction is cyclical, and that it is an oscillation in two dimensions. The presence of Planck’s quantum of action in this equation tells us that such oscillation comes in units of ħ. Schrödinger’s wave equation as a whole is, therefore, nothing but a succinct representation of the energy conservation principle. Hence, we can understand it.
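In support of that reading, one can check numerically that a free-particle plane wave ψ = exp(i(kx − ωt)) satisfies the free-particle Schrödinger equation iħ·∂ψ/∂t = −(ħ²/2m)·∂²ψ/∂x² precisely when the energies match, i.e. when ħω = ħ²k²/2m. A sketch in units where ħ = m = 1, with k chosen arbitrarily:

```python
import cmath

hbar, m = 1.0, 1.0
k = 2.0
omega = hbar * k**2 / (2.0 * m)   # energy matching: ħω = ħ²k²/2m

def psi(x, t):
    """Free-particle plane wave exp(i(kx - ωt))."""
    return cmath.exp(1j * (k * x - omega * t))

x, t, eps = 0.3, 0.7, 1e-4

# Left-hand side: iħ·∂ψ/∂t (central finite difference in t)
lhs = 1j * hbar * (psi(x, t + eps) - psi(x, t - eps)) / (2 * eps)

# Right-hand side: -(ħ²/2m)·∂²ψ/∂x² (finite difference in x)
d2psi_dx2 = (psi(x + eps, t) - 2 * psi(x, t) + psi(x - eps, t)) / eps**2
rhs = -hbar**2 / (2.0 * m) * d2psi_dx2

print(abs(lhs - rhs))   # ≈ 0 (finite-difference error only)
```

The two sides agree to within the finite-difference error, and the agreement disappears if ω is set to anything other than ħk²/2m: the equation is, in that sense, an energy bookkeeping device.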

At the same time, we cannot, of course. We can only grasp it to some extent. Indeed, Feynman concludes his philosophical remarks as follows:

“The next great era of awakening of human intellect may well produce a method of understanding the qualitative content of equations. Today we cannot. Today we cannot see that the water flow equations contain such things as the barber pole structure of turbulence that one sees between rotating cylinders. We cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality—or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.” (Feynman’s Lectures, p. II-41-12)

I think that puts the matter to rest—for the time being, at least. 🙂

Signing off…

I am done with reading Feynman and commenting on it—especially because this site just got mutilated by the third DMCA takedown of material (see below). Follow me to my new blog. No Richard Feynman, Mr. Gottlieb or DMCA there! Pure logic only. This site has served its purpose, and that is to highlight the Rotten State of QED. 🙂

A long time ago – in 1996, to be precise – I studied Wittgenstein’s TLP as part of a part-time BPhil degree program. At the time, I did not like it. The lecture notes were two or three times the volume of the work itself, and I got pretty poor marks for it. I guess one has to go through life to get an idea of what he was writing about. With all of the nonsense lately, I thought about one of the lines in that little book: “One must, so to speak, throw away the ladder after he has climbed up it. He must transcend these propositions, and then he will see the world aright.” (TLP, 6.54)

For Mr. Gottlieb and other narrow-minded zealots and mystery wallahs – who would not be interested in Wittgenstein anyway – I’ll just quote Wittgenstein’s quote of Ferdinand Kürnberger:

“. . . und alles, was man weiss, nicht bloss rauschen und brausen gehört hat, lässt sich in drei Worten sagen.”

I will let you google-translate that and, yes, sign off here—in the spirit of Ludwig Boltzmann and Paul Ehrenfest. [Sorry for being too lengthy or verbose here.]

“Bring forward what is true. Write it so that it is clear. Defend it to your last breath.” (Boltzmann)

Knox (Automattic)

Jun 20, 2020, 4:30 PM UTC

Hello,

We’ve received the DMCA takedown notice below regarding material published on your WordPress.com site, which means the complainant is asserting ownership of this material and claiming that your use of it is not permitted by them or the law. As required by the DMCA, we have disabled public access to the material.

Repeated incidents of copyright infringement will also lead to the permanent suspension of your WordPress.com site. We certainly don’t want that to happen, so please delete any other material you may have uploaded for which you don’t have the necessary rights and refrain from uploading additional material that you do not have permission to upload. Although we can’t provide legal advice, these resources might help you make this determination:

https://wordpress.com/support/counter-notice/#what-is-fair-use

If you believe that this DMCA takedown notice was received in error, or if you believe your usage of this material would be considered fair use, it’s important that you submit a formal DMCA counter notice to ensure that your WordPress.com site remains operational. If you submit a valid counter notice, we will return the material to your site in 10 business days if the complainant does not reply with legal action.

Please refer to the following pages for more information:

Please note that republishing the material yourself, without permission from the copyright holder (even after you have submitted a counter notice) will result in the permanent suspension of your WordPress.com site and/or account.

Thank you.

[…]

Well… Thank you, WordPress. I guess you’ll first suspend the site and then the account? :-/ I hope you’ll give me some time to create another account, at least? If not, this spacetime rebel will have to find another host for his site. 🙂

Lasers, masers, two-state systems and Feynman’s Lectures

Over the past few days, I revisited Feynman’s lectures on quantum math—the ones in which he introduces the concept of probability amplitudes (I will provide no specific reference or link to them because that is, apparently, unfair use of copyrighted material). The Great Richard Feynman introduces probability amplitudes as part of a larger discussion of two-state systems—and lasers and masers are a great example of such two-state systems. I have done a few posts on that while building up this blog over the past few years but, because these have been mutilated by DMCA take-downs of diagrams and illustrations as a result of such ‘unfair use’, I won’t refer to them either. The point is this:

I have come to the conclusion we actually do not need the machinery of state vectors and probability amplitudes to explain how a maser (and, therefore, a laser) actually works.

The functioning of masers and lasers crucially depends on a dipole moment (of an ammonia molecule for a maser and of light-emitting atoms for a laser) which will flip up and down in sync with an external oscillating electromagnetic field. It all revolves around the resonant frequency (ω0), which depends on the tiny difference between the energies of the ‘up’ and ‘down’ states. This tiny energy difference (the A in the Hamiltonian matrix) is given by the product of the dipole moment (μ) and the external electromagnetic field that gets the thing going (Ɛ0). [Don’t confuse the symbols with the magnetic and electric constants here!] And so… Well… I have come to the conclusion that we can analyze this as just any other classical electromagnetic oscillation. We can effectively directly use the Planck-Einstein relation to determine the frequency instead of having to invoke all of the machinery that comes with probability amplitudes, base states, Hamiltonian matrices and differential equations:

ω0 = E/ħ = A/ħ = μƐ0/ħ

All the rest follows logically.
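As a quick sanity check on this shortcut: plugging the textbook value of the ammonia inversion splitting into the Planck-Einstein relation f = E/h should give back the well-known maser frequency of roughly 24 GHz. The energy value below is an assumed input (the standard textbook figure), not something derived here.

```python
h = 6.62607015e-34     # Planck's constant [J·s]
eV = 1.602176634e-19   # joule value of one electronvolt

# Energy splitting between the 'up' and 'down' states of the
# ammonia molecule (standard textbook value — assumed input):
E = 9.87e-5 * eV       # ≈ 1.58e-23 J

f = E / h              # Planck-Einstein relation: f = E/h
print(f / 1e9, "GHz")  # ≈ 23.9 GHz: the ammonia maser line
```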

You may say: so what? Well… I find this very startling. I’ve been systematically dismantling a lot of ‘quantum-mechanical myths’, and so this seemed to be the last myth standing. It has fallen now: here is the link to the paper.

What’s the implication? The implication is that we can analyze all of the QED sector now in terms of classical mechanics: oscillator math, Maxwell’s equations, relativity theory and the Planck-Einstein relation will do. All of that was published before the First World War broke out, in other words—with the added discoveries made by the likes of Arthur Compton (photon-electron interactions), Carl Anderson (the discovery of anti-matter), James Chadwick (experimental confirmation of the existence of the neutron) and a few others after the war, of course! But that’s it, basically: nothing more, nothing less. So all of the intellectual machinery that was invented after World War I (the Bohr-Heisenberg theory of quantum mechanics) and after World War II (quantum field theory, the quark hypothesis and what have you) may be useful in the QCD sector of physics but − IMNSHO − even that remains to be seen!

I actually find this more than startling: it is shocking! I started studying Feynman’s Lectures – and everything that comes with them – back in 2012, only to find out that my idol had no intention whatsoever of making things easy. That is OK. In his preface, he writes that he wanted to make sure even the most intelligent student would be unable to completely encompass everything that was in the lectures—so that’s why we were attracted to them, of course! But that is, of course, something else than doing what he did, and that is to promote a Bright Shining Lie!

[…]

A long time ago, I took the side of Bill Gates in the debate on Feynman’s qualities as a teacher. For Bill Gates, Feynman was, effectively, “the best teacher I never had.” One of those very bright people who actually had him as a teacher (John F. McGowan, PhD and math genius) paints a very different picture, however. I would take the side of McGowan in this discussion now—especially when it turns out that Mr. Feynman’s legacy can apparently no longer be freely used as a reference anyway.

Philip Anderson and Freeman Dyson died this year—both at the age of 96. They were the last of what is generally thought of as a brilliant generation of quantum physicists—the third generation, we might say. May they all rest in peace.

Post scriptum: In case you wonder why I refer to them as the third rather than the second generation: I actually consider Heisenberg’s generation to be the second generation of quantum physicists—first was the generation of the likes of Einstein!

As for the (intended) irony in my last remarks, let me quote from an interesting book on the state of physics that was written by Doris Teplitz back in 1982: “The state of the classical electromagnetic theory reminds one of a house under construction that was abandoned by its workmen upon receiving news of an approaching plague. The plague was, in this case, of course, quantum theory.” I now very much agree with this bold statement. So… Well… I think I’ve had it with studying Feynman’s Lectures. Fortunately, I spent only ten years or so on them. Academics have to spend their whole life on what Paul Ehrenfest referred to as the ‘unendlicher Heisenberg-Born-Dirac-Schrödinger Wurstmachinen-Physik-Betrieb.’ :-/

The dark force

We have the electromagnetic force, the strong force, and it looks like there is a dark force too now! Mr. Michael Gottlieb, the publisher of the online edition of Feynman’s Lectures, is actively exploring it: I received yet another DMCA take-down notice for so-called unfair use of the material (see below) and, yes, this hassle has some history already, unfortunately—so it is bad news.

Now, it is true I do refer quite often to these lectures—but I do so because they are a commonly referenced textbook: one needs some kind of reference when referring to mainstream physics, doesn’t one? In fact, I am amazed Mr. Gottlieb was able to claim copyright on a textbook that was published 57 years ago and whose author is long dead! I should also add that I started this blog on Feynman’s Lectures before Mr. Gottlieb’s online edition of them went online! Shame on him! However, it looks like he’s got the upper hand in this nasty game, so I’ll just bow out and that will be it. I’ve got better things to do than fight some narcissist who thinks of a 1963 textbook as personal property!

The material Mr. Gottlieb objects to is in older posts mainly—but he has been scrutinizing more recent posts as well. The use of the diagram with the energy levels of electron orbitals in a lithium atom in my May, 2020 post, for example, is apparently not permitted—despite me mentioning the source of this diagram quite explicitly. In short, it looks like many of my posts on this blog will look quite mutilated soon. Worse, the host of this WordPress site (Automattic) may decide to take down my site altogether.

I know there are dark forces fighting freedom of expression and independent thought everywhere. I just hadn’t expected them to be present in science too. This is, effectively, a lot worse than just being told to ‘shut up and calculate’ or – more commonly – being referred to as a ‘crackpot’ theorist. I, therefore, actually do feel sad about it. :-/ In any case, if the site goes down, I would like to thank my readers here – especially those who are actually following each and every post – for their encouragement. It has been a good exercise, and I will continue to publish, of course, using other channels and references other than Feynman. Keep tracking independent research on viXra.org, academia.edu and ResearchGate.net.

I wish Mr. Gottlieb the best of luck: I can only hope he learns some real physics while continuing to scrutinize my posts. Just for the record, I would like to understand this correctly: Mr. Gottlieb put Feynman’s Lectures online – verbatim – but so he is not copying stuff. And so I am trying to do some creative stuff with it – not copying literally but effectively using some of Feynman’s material to provide a new perspective or make a point – and so it is me who is the offender here, right? That’s the situation, isn’t it? Or is my logic faulty here? We really do live in weird times. Weird but interesting, at least.

I have various working titles for a future book on physics, but ‘The Sorry State of Physics’ is still my favorite one. In fact, Oliver Consa is probably right when calling it plain rotten. And, yes, on this occasion, I’d like to recall my previous advice: do not buy Feynman’s Lectures, please. They are outdated and you can, therefore, not learn all that much from them. They have, unfortunately, become what Feynman did not want them to become: Cargo Cult Science, zealously guarded by a bunch of self-appointed Mystery Wallahs. Be well and happy, and please do keep thinking things through for yourself!

Jean Louis Van Belle, 18 June 2020

Fenton (Automattic)

Jun 17, 2020, 10:17 PM UTC

Hello,

We’ve received the DMCA takedown notice below regarding material published on your WordPress.com site, which means the complainant is asserting ownership of this material and claiming that your use of it is not permitted by them or the law. As required by the DMCA, we have disabled public access to the material.

Repeated incidents of copyright infringement will also lead to the permanent suspension of your WordPress.com site. We certainly don’t want that to happen, so please delete any other material you may have uploaded for which you don’t have the necessary rights and refrain from uploading additional material that you do not have permission to upload. Although we can’t provide legal advice, these resources might help you make this determination:

https://wordpress.com/support/counter-notice/#what-is-fair-use

If you believe that this DMCA takedown notice was received in error, or if you believe your usage of this material would be considered fair use, it’s important that you submit a formal DMCA counter notice to ensure that your WordPress.com site remains operational. If you submit a valid counter notice, we will return the material to your site in 10 business days if the complainant does not reply with legal action.

Please refer to the following pages for more information:

Please note that republishing the material yourself, without permission from the copyright holder (even after you have submitted a counter notice) will result in the permanent suspension of your WordPress.com site and/or account.

Thank you.

— BEGIN NOTICE —

Neutrons as composite particles and electrons as gluons?

Neutrons as composite particles

In our rather particular conception of the world, we think of photons, electrons, and protons – and neutrinos – as elementary particles. Elementary particles are, obviously, stable: they would not be elementary, otherwise. The difference between photons and neutrinos on the one hand, and electrons, protons, and other matter-particles on the other, is that we think all matter-particles carry charge—even if they are neutral.

Of course, to be neutral, one must combine positive and negative charge: neutral particles can, therefore, not be elementary—unless we accept the quark hypothesis, which we do not like to do (not now, at least). A neutron must, therefore, be an example of a neutral (composite) matter-particle. We know it is unstable outside of the nucleus but its longevity – as compared to other non-stable particles – is quite remarkable: it survives about 15 minutes—for other unstable particles, we usually talk about micro- or nano-seconds, or worse!

Let us explore what the neutron might be—if only to provide some kind of model for analyzing other unstable particles, perhaps. We should first note that the neutron radius is about the same as that of a proton. How do we know this? NIST only gives an rms charge radius for the proton, based on the various proton radius measurements. For the neutron, we only have a CODATA value for the Compton wavelength, which is more or less the same as that for the proton. To be precise, the two values are these:

λneutron = 1.31959090581(75)×10⁻¹⁵ m

λproton = 1.32140985539(40)×10⁻¹⁵ m

These values are just mechanical calculations based on the mass or energy of protons and neutrons respectively: the Compton wavelength is, effectively, calculated as λ = h/mc.[1] However, you should, of course, not rely on CODATA values only: you should google for experiments measuring the size of a neutron directly or indirectly to get an idea of what is going on here.
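The λ = h/mc calculation is easily verified. A minimal sketch, with the constants and masses hard-coded from CODATA (2018 values):

```python
# Quick numerical check of the Compton wavelength formula: lambda = h/(m*c).
h = 6.62607015e-34       # Planck's constant, J*s (exact since SI 2019)
c = 299792458.0          # speed of light, m/s (exact)
m_p = 1.67262192369e-27  # proton mass, kg (CODATA 2018)
m_n = 1.67492749804e-27  # neutron mass, kg (CODATA 2018)

lam_p = h / (m_p * c)
lam_n = h / (m_n * c)
print(f"lambda_proton  = {lam_p:.11e} m")   # ~1.32141e-15 m
print(f"lambda_neutron = {lam_n:.11e} m")   # ~1.31959e-15 m
```

Note how the more massive neutron gets the (slightly) smaller Compton wavelength, as per footnote [1].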

Let us look at the energies. The neutron’s energy is about 939,565,420 eV. The proton energy is about 938,272,088 eV. Hence, the difference is about 1,293,332 eV. This mass difference, combined with the fact that neutrons spontaneously decay into protons but – conversely – there is no such thing as spontaneous proton decay[2], confirms we are probably justified in thinking that a neutron must, somehow, combine a proton and an electron. The mass of an electron is 0.511 MeV/c2, so that is only about 40% of the energy difference, but the kinetic and binding energy could make up for the remainder.[3]
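A quick back-of-the-envelope check of these numbers (plain Python, using the energy values as quoted above):

```python
# The neutron-proton energy difference, and the fraction of it
# accounted for by the electron's rest energy.
E_n = 939_565_420   # neutron rest energy, eV
E_p = 938_272_088   # proton rest energy, eV
E_e = 511_000       # electron rest energy, eV (approx. 0.511 MeV)

diff = E_n - E_p
print(diff)                   # 1293332 eV
print(round(E_e / diff, 2))   # 0.4 -- the electron accounts for ~40%
```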

So, yes, we will want to think of a neutron as carrying both positive and negative charge inside. These charges balance each other out (there is no net electric charge) but their respective motion still yields a small magnetic moment, which we think of as some net result from the motion of the positive and negative charge inside.

Let us now move to the next grand idea which emerges here.

Electrons as gluons?

The negative charge inside of a neutron may help to keep the nucleus together. We can, therefore, think of this charge as some kind of nuclear glue. We tentatively explored this idea in a paper: Electrons as gluons? The basic idea is this: the electromagnetic force keeps electrons close to the positively charged nucleus and we should, therefore, not exclude that a similar arrangement of positive and negative charges – but one involving some strong(er) force to explain the difference in scale – might exist within the nucleus.

Nonsense? We don’t think so. Consider this: one never finds a proton pair without one or more neutrons. The main isotope of helium (4He), for example, has a nucleus consisting of two protons and two neutrons, while a helium-3 (3He) nucleus consists of two protons and one neutron. When we find a pair of nucleons, like in deuterium (2H), this will always consist of a proton and a neutron. The idea of a negative charge acting as an in-between to keep two positive charges together is, therefore, quite logical. Think of it as the opposite of a positively charged nucleus keeping electrons together in a multi-electron atom.

Does this make sense to you? It does to me, so I’d appreciate any converging or diverging thoughts you might have on this. 🙂

[1] The reader should note that the Compton wavelength and, therefore, the Compton radius is inversely proportional to the mass: a more massive particle is, therefore, associated with a smaller radius. This is somewhat counterintuitive but it is what it is.

[2] None of the experiments (think of the Super-Kamiokande detector here) found any evidence of proton decay so far.

[3] The reader should note that the mass of a proton and an electron add up to less than the mass of a neutron, which is why it is only logical that a neutron should decay into a proton and an electron. Binding energies – think of Feynman’s calculations of the radius of the hydrogen atom, for example – are usually negative.

The mystery of the elementary charge

As part of my ‘debunking quantum-mechanical myths’ drive, I re-wrote Feynman’s introductory lecture on quantum mechanics. Of course, it has got nothing to do with Feynman’s original lecture—titled Quantum Behavior: I just made some fun of Feynman’s preface and that’s basically it in terms of this iconic reference. Hence, Mr. Gottlieb should not make too much of a fuss—although I hope he will, of course, because it would draw more attention to the paper. It was a fun exercise because it encouraged me to join an interesting discussion on ResearchGate (I copied the topic and some of the exchange below) which, in turn, made me think some more about what I wrote about the form factor in the explanation of the electron, muon and proton. Let me copy the relevant paragraph:

When we talked about the radius of a proton, we promised you we would talk some more about the form factor. The idea is very simple: an angular momentum (L) can always be written as the product of a moment of inertia (I) and an angular frequency (ω). We also know that the moment of inertia for a rotating mass or a hoop is equal to I = m·r², while it is equal to I = m·r²/4 for a solid disk (about a diameter). So you might think this explains the 1/4 factor: a proton is just an anti-muon but in disk version, right? It is like a muon because of the strong force inside, but it is even smaller because it packs its charge differently, right?

Maybe. Maybe not. We think probably not. Maybe you will have more luck when playing with the formulas but we could not demonstrate this. First, we must note, once again, that the radii of a muon (about 1.87 fm) and a proton (0.83–0.84 fm) are both smaller than the radius of the pointlike charge inside of an electron (α·ħ/(me·c) ≈ 2.818 fm). Hence, we should first suggest how we would pack the elementary charge into a muon!

Second, we noted that the proton mass is 8.88 times that of the muon, while the radius is only 2.22 times smaller – so, yes, that 1/4 ratio once more – but these numbers are still weird: even if we would manage to, somehow, make abstraction of this form factor by accounting for the different angular momentum of a muon and a proton, we would probably still be left with a mass difference we cannot explain in terms of a unique force geometry.
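These ratios are easy to reproduce. The sketch below uses the values quoted above; note that the 1.87 fm muon radius is our own model value, not a measured one:

```python
# Checking the ratios mentioned in the text: the proton/muon mass ratio
# (~8.88), the muon/proton radius ratio (~2.22), and their quotient (~4).
m_p  = 938.272088   # proton rest energy, MeV
m_mu = 105.6583755  # muon rest energy, MeV
r_mu = 1.87         # muon radius (author's model), fm
r_p  = 0.84         # proton radius, fm

mass_ratio   = m_p / m_mu
radius_ratio = r_mu / r_p
print(round(mass_ratio, 2), round(radius_ratio, 2))   # 8.88 2.23
print(round(mass_ratio / radius_ratio, 1))            # 4.0 -- the 1/4 form factor
```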

Perhaps we should introduce other hypotheses: a muon is, after all, unstable, and so there may be another factor there: excited states of electrons are unstable too and involve an n = 2 or some other number in Planck’s E = n·h·f equation, so perhaps we can play with that too.

Our answer to such musings is: yes, you can. But please do let us know if you have more luck than us when playing with these formulas: it is the key to the mystery of the strong force, and we did not find it—so we hope you do!

So… Well… This is really as far as a realist interpretation of quantum mechanics will take you. One can solve most so-called mysteries in quantum mechanics (interference of electrons, tunneling and what have you) with plain old classical equations (applying Planck’s relation to electromagnetic theory, basically) but here we are stuck: the elementary charge itself is a most mysterious thing. When packing it into an electron, a muon or a proton, Nature gives it a very different shape and size.

The shape or form factor is related to the angular momentum, while the size has got to do with scale: the scale of a muon and proton is very different from that of an electron—smaller even than the pointlike Zitterbewegung charge which we used to explain the electron. So that’s where we are. It’s like we’ve got two quanta—rather than one only: Planck’s quantum of action, and the elementary charge. Indeed, Planck’s quantum of action may also be said to express itself very differently in space or in time (h = E·T versus h = p·λ). Perhaps there is room for additional simplification, but I doubt it. Something inside of me says that, when everything is said and done, I will just have to accept that electrons are electrons, and protons are protons, and a muon is a weird unstable thing in-between—and all other weird unstable things in-between are non-equilibrium states which one cannot explain with easy math.

Would that be good enough? For you? I cannot speak for you. Is it a good enough explanation for me? I am not sure. I have not made my mind up yet. I am taking a bit of a break from physics for the time being, but the question will surely continue to linger in the back of my mind. We’ll keep you updated on progress! Thanks for staying tuned! JL

PS: I realize the above might sound a bit like crackpot theory but that is just because it is very dense and very light writing at the same time. If you read the paper in full, you should be able to make sense of it. 🙂 You should also check the formulas for the moments of inertia: the I = m·r²/4 formula for a solid disk depends on your choice of the axis of symmetry.

Research Gate

Peter Jackson

Dear Peter – Thanks so much for checking the paper and your frank comments. That is very much appreciated. I know I have gone totally overboard in dismissing much of post-WW II developments in quantum physics – most notably the idea of force-carrying particles (bosons – including Higgs, W/Z bosons and gluons). My fundamental intuition here is that field theories should be fine for modeling interactions (I’ll quote Dirac’s 1958 comments on that at the very end of my reply here) and, yes, we should not be limiting the idea of a field to EM fields only. So I surely do not want to give the impression I think classical 19th/early 20th century physics – Planck’s relation, electromagnetic theory and relativity – can explain everything.

Having said that, the current state of physics does resemble the state of scholastic philosophy before it was swept away by rationalism: I feel there has been a multiplication of ill-defined concepts that did not add much additional explanation of what might be the case (the latter expression is Wittgenstein’s definition of reality). So, yes, I feel we need some reincarnation of William of Occam to apply his Razor and kick ass. Fortunately, it looks like there are many people trying to do exactly that now – a return to basics – so that’s good: I feel like I can almost hear the tectonic plates moving. 🙂

My last paper is a half-serious rewrite of Feynman’s first Lecture on Quantum Mechanics. Its intention is merely provocative: I want to highlight what part of the ‘mystery’ in quantum physics is truly mysterious and what is humbug or – as Feynman would call it – Cargo Cult Science. The section on the ‘form factor’ (what is the ‘geometry’ of the strong force?) in that paper is the shortest and most naive paragraph in that text but it actually does highlight the one and only question that keeps me awake: what is that form factor, what different geometry do we need to explain a proton (or a muon) as opposed to, say, an electron? I know I have to dig into the kind of stuff that you are highlighting – and Alex Burinskii’s Dirac-Kerr-Newman models (which also integrate gravity) – to find elements that, one day, may explain why a muon is not an electron, and why a proton is not a positron.

Indeed, I think the electron and photon models are just fine: classical EM and Planck’s relation are all that is needed, so I actually don’t want to waste more time on the QED sector. But a decent muon and proton model will, obviously, require ‘something else’ than Planck’s relation, the electric charge and electromagnetic theory. The question here is: what is that ‘something else’, exactly?

Even if we find another charge or another field theory to explain the proton, we’re just at the beginning of explaining the QCD sector. Indeed, the proton and muon are stable (fairly stable, I should say, in the case of the muon—which I want to investigate because of the question of matter generations). In contrast, transient particles and resonances do not respect Planck’s relation – that’s why they are unstable – and so we are talking about non-equilibrium states, which is an entirely different ballgame. In short, I think Dirac’s final words in the very last (fourth) edition of his ‘Principles of Quantum Mechanics’ still ring very true today. They were written in 1958, so Dirac was aware of the work of Gell-Mann and Nishijima (the contours of quark-gluon theory) and, clearly, did not think much of it (I understand he also had conversations with Feynman on this):

“Quantum mechanics may be defined as the application of equations of motion to particles. […] The domain of applicability of the theory is mainly the treatment of electrons and other charged particles interacting with the electromagnetic field—a domain which includes most of low-energy physics and chemistry.

Now there are other kinds of interactions, which are revealed in high-energy physics and are important for the description of atomic nuclei. These interactions are not at present sufficiently well understood to be incorporated into a system of equations of motion. Theories of them have been set up and much developed and useful results obtained from them. But in the absence of equations of motion these theories cannot be presented as a logical development of the principles set up in this book. We are effectively in the pre-Bohr era with regard to these other interactions. It is to be hoped that with increasing knowledge a way will eventually be found for adapting the high-energy theories into a scheme based on equations of motion, and so unifying them with those of low-energy physics.”

Again, many thanks for reacting and, yes, I will study the references you gave – even if I am a bit skeptical of Wolfram’s new project. Cheers – JL

Paul Ehrenfest and the search for truth

On 25 September 1933, Paul Ehrenfest took his son Wassily, who was suffering from Down syndrome, for a walk in the park. He shot him, and then killed himself. He was only 53. That’s my age bracket. From the letters he left (here is a summary in Dutch), we know his frustration of not being able to arrive at some kind of common-sense interpretation of the new quantum physics played a major role in the anxiety that had brought him to this point. He had taken courses from Ludwig Boltzmann as an aspiring young man. We, therefore, think Boltzmann’s suicide – for similar reasons – might have troubled him too.

His suicide did not come unexpectedly: he had announced it. In one of his letters to Einstein, he complains about ‘indigestion’ from the ‘unendlicher Heisenberg-Born-Dirac-Schrödinger Wurstmachinen-Physik-Betrieb.’ I’ll let you google-translate that. :-/ He also seems to have gone through the trouble of summarizing all his questions on the new approach in an article in what was then one of the top journals for physics: Einige die Quantenmechanik betreffende Erkundigungsfragen, Zeitschrift für Physik 78 (1932) 555-559 (quoted in the above-mentioned review article). This I’ll translate: Some Questions about Quantum Mechanics.

Paul Ehrenfest in happier times (painting by Harm Kamerlingh Onnes in 1920)

A diplomat-friend of mine once remarked this: “It is good you are studying physics only as a pastime. Professional physicists are often troubled people—miserable.” It is an interesting observation from a highly intelligent outsider. To be frank, I understand this strange need to probe things at the deepest level—to be able to explain what might or might not be the case (I am using Wittgenstein’s definition of reality here). Even H.A. Lorentz, who – fortunately, perhaps – died before his successor did what he did, was becoming quite alarmist about the sorry state of academic physics near the end of his life—and he, Albert Einstein, and so many others were not alone. Not then, and not now. All of the founding fathers of quantum mechanics ended up becoming pretty skeptical about the theory they had created. We have documented that elsewhere so we won’t talk too much about it here. Even John Stewart Bell himself – one of the third generation of quantum physicists, we may say – did not like his own ‘No Go Theorem’ and thought that some “radical conceptual renewal”[1] might disprove his conclusions.

The Born-Heisenberg revolution has failed: most – if not all – contemporary high-brow physicists are pursuing alternative theories—in spite of, or because of, the academic straitjackets they have to wear. If a genius like Ehrenfest didn’t buy it, then I won’t buy it either. Furthermore, the masses surely don’t buy it and, yes, truth – in this domain too – is, fortunately, being defined more democratically nowadays. The Nobel Prize Committee will have to do some serious soul-searching—if not five years from now, then ten.

We feel sad for the physicists who died unhappily—and surely for those who took their life out of depression—because the common-sense interpretation they were seeking is so self-evident: de Broglie’s intuition in regard to matter being wavelike was correct. He just misinterpreted its nature: it is not a linear but a circular wave. We quickly insert the quintessential illustration (courtesy of Celani, Vassallo and Di Tommaso) but we refer the reader for more detail to our articles or – more accessible, perhaps – our manuscript for the general public.


The equations are easy. The mass of an electron – of any matter-particle, really – is the equivalent mass of the oscillation of the charge it carries. This oscillation is, most probably, statistically regular only. So we think it’s chaotic, actually, but we also think the words spoken by Polonius in Shakespeare’s Hamlet apply to it: “Though this be madness, yet there is method in ‘t.” This means we can meaningfully speak of a cycle time and, therefore, of a frequency. Erwin Schrödinger stumbled upon this motion while exploring solutions to Dirac’s wave equation for free electrons, and Dirac immediately grasped its significance, mentioning Schrödinger’s discovery rather prominently in his Nobel Prize Lecture:

“It is found that an electron which seems to us to be moving slowly, must actually have a very high frequency oscillatory motion of small amplitude superposed on the regular motion which appears to us. As a result of this oscillatory motion, the velocity of the electron at any time equals the velocity of light. This is a prediction which cannot be directly verified by experiment, since the frequency of the oscillatory motion is so high and its amplitude is so small. But one must believe in this consequence of the theory, since other consequences of the theory which are inseparably bound up with this one, such as the law of scattering of light by an electron, are confirmed by experiment.” (Paul A.M. Dirac, Theory of Electrons and Positrons, Nobel Lecture, December 12, 1933)

Unfortunately, Dirac confuses the concept of the electron as a particle with the concept of the (naked) charge inside. Indeed, the idea of an elementary (matter-)particle must combine the idea of a charge and its motion to account for both the particle- as well as the wave-like character of matter-particles. We do not want to dwell on all of this because we’ve written too many papers on this already. We just thought it would be good to sum up the core of our common-sense interpretation of physics. Why? To honor Boltzmann and Ehrenfest: I think of their demise as a sacrifice in search for truth.

[…]

OK. That sounds rather tragic—sorry for that! For the sake of brevity, we will just describe the electron here.

I. Planck’s quantum of action (h) and the speed of light (c) are Nature’s most fundamental constants. Planck’s quantum of action relates the energy of a particle to its cycle time and, therefore, to its frequency:

(1) h = E·T = E/f ⇔ ħ = E/ω

The charge that is whizzing around inside of the electron has zero rest mass, and so it whizzes around at the speed of light: the slightest force on it gives it an infinite acceleration. It, therefore, acquires a relativistic mass which is equal to mγ = me/2 (we refer to our paper(s) for a relativistically correct geometric argument). The momentum of the pointlike charge, in its circular or orbital motion, is, therefore, equal to p = mγ·c = me·c/2.

The (angular) frequency of the oscillation is also given by the formula for the (angular) velocity:

(2) c = a·ω ⇔ ω = c/a

While Eq. (1) is a fundamental law of Nature, Eq. (2) is a simple geometric or mathematical relation only.

II. From (1) and (2), we can now calculate the radius of this tiny circular motion as:

(3a) ħ = E/ω = E·a/c ⇔ a = (ħ·c)/E

Because we know the mass of the electron is the inertial mass of the state of motion of the pointlike charge, we may use Einstein’s mass-energy equivalence relation to rewrite this as the Compton radius of the electron:

(3b) a = (ħ·c)/E = (ħ·c)/(me·c²) = ħ/(me·c)

Note that we only used two fundamental laws of Nature so far: the Planck-Einstein relation and Einstein’s mass-energy equivalence relation.
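The little derivation above is easy to check numerically. The sketch below (plain Python, with CODATA values hard-coded) computes the Compton radius a = ħ/(me·c) and the associated angular frequency ω = c/a, and verifies that ħ·ω indeed gives back the electron’s rest energy:

```python
# Numerical check of Eqs. (1)-(3b): Compton radius, zbw frequency,
# and the identity hbar*omega = me*c^2.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 299792458.0       # speed of light, m/s
m_e  = 9.1093837015e-31  # electron mass, kg (CODATA 2018)

a = hbar / (m_e * c)     # Compton radius, Eq. (3b)
omega = c / a            # angular frequency, Eq. (2)
E = m_e * c**2           # electron rest energy

print(f"a     = {a:.4e} m")          # ~3.8616e-13 m
print(f"omega = {omega:.4e} rad/s")  # ~7.7634e20 rad/s
assert abs(hbar * omega - E) / E < 1e-12   # Eq. (7) holds
```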

III. We must also be able to express the Planck-Einstein quantum as the product of the momentum (p) of the pointlike charge and some length λ:

(4) h = p·λ

The question here is: what length? The circumference of the loop, or its radius? The same geometric argument we used to derive the effective mass of the pointlike charge as it whizzes around at lightspeed around its center, tells us the centripetal force acts over a distance that is equal to two times the radius. Indeed, the relevant formula for the centripetal force is this:

(5) F = (mγ/me)·(E/a) = E/(2a)

We can therefore reduce Eq. (4) by dividing it by 2π. We then get reduced, angular or circular (as opposed to linear) concepts:

(6) ħ = (p·λ)/(2π) = (me·c/2)·(λ/π) = (me·c/2)·(2a) = me·c·a ⇔ ħ/a = me·c

We can verify the logic of our reasoning by substituting a for the Compton radius:

ħ = me·c·a = me·c·ħ/(me·c) = ħ

IV. We can, finally, re-confirm the logic of our reasoning by re-deriving Einstein’s mass-energy equivalence relation as well as the Planck-Einstein relation using the ω = c/a and ħ/a = me·c relations:

(7) ħ·ω = ħ·c/a = (ħ/a)·c = (me·c)·c = me·c² = E

Of course, we note all of the formulas we have derived are interdependent. We, therefore, have no clear separation between axioms and derivations here. If anything, we are only explaining what Nature’s most fundamental laws (the Planck-Einstein relation and Einstein’s mass-energy equivalence relation) actually mean or represent. As such, all we have is a simple description of reality itself—at the smallest scale, of course! Everything that happens at larger scales involves Maxwell’s equations: that’s all electromagnetic in nature. No need for strong or weak forces, or for quarks—who invented that? Ehrenfest, Lorentz and all who struggled to truly understand de Broglie’s concept of the matter-wave might have been happier physicists if they had seen these simple equations!

The gist of the matter is this: the intuition of Einstein and de Broglie in regard to the wave-nature of matter was, essentially, correct. However, de Broglie’s modeling of it as a wave packet was not: modeling matter-particles as some linear oscillation does not do the trick. It is extremely surprising that no one seems to have tried modeling it as a circular oscillation instead. Indeed, the interpretation of the elementary wavefunction as representing the mentioned Zitterbewegung of the electric charge solves all questions: it amounts to interpreting the real and imaginary part of the elementary wavefunction as the sine and cosine components of the orbital motion of a pointlike charge. We think that, in our 60-odd papers, we’ve shown such an easy interpretation effectively does the trick of explaining all of the quantum-mechanical weirdness but, of course, it is up to our readers to judge that. 🙂

[1] See: John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169–172, Cambridge University Press, 1987 (quoted from Wikipedia). J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics and which he, therefore, did not receive (Nobel Prizes are not awarded posthumously). He was just 62 years old then.

Re-writing Feynman’s Lectures?

I have a crazy new idea: a complete re-write of Feynman’s Lectures. It would be fun, wouldn’t it? I would follow the same structure—but start with Volume III, of course: the lectures on quantum mechanics. We could even re-use some language—although we’d need to be careful so as to keep Mr. Michael Gottlieb happy, of course. 🙂 What would you think of the following draft Preface, for example?

The special problem we try to get at with these lectures is to maintain the interest of the very enthusiastic and rather smart people trying to understand physics. They have heard a lot about how interesting and exciting physics is—the theory of relativity, quantum mechanics, and other modern ideas—and spend many years studying textbooks or following online courses. Many are discouraged because there are really very few grand, new, modern ideas presented to them. The problem is whether or not we can make a course which would save them by maintaining their enthusiasm.

The lectures here are not in any way meant to be a survey course, but are very serious. I thought it would be best to re-write Feynman’s Lectures to make sure that most of the above-mentioned enthusiastic and smart people would be able to encompass (almost) everything that is in the lectures. 🙂

This is the link to Feynman’s original Preface, so you can see how my preface compares to his: same-same but very different, they’d say in Asia. 🙂

[…]

Doesn’t that sound like a nice project? 🙂

Jean Louis Van Belle, 22 May 2020

Post scriptum: It looks like we made Mr. Gottlieb and/or MIT very unhappy already: the link above does not work for us anymore (see what we get below). That’s very good: it is always nice to start a new publishing project with a little controversy. 🙂 We will have to use the good old paper print edition. We recommend you buy one too, by the way. 🙂 I think they are just a bit over US$100 now. Well worth it!

To put the historical record straight, the reader should note we started this blog before Mr. Gottlieb brought Feynman’s Lectures online. We actually wonder why he would be bothered by us referring to it. That’s what classical textbooks are for, aren’t they? They create common references to agree or disagree with, and why put a book online if you apparently don’t want it to be read or discussed? Noise like this probably means I am doing something right here. 🙂

Post scriptum 2: Done! Or, at least, the first chapter is done! Have a look: here is the link on ResearchGate and this is the link on Phil Gibbs’ site. Please do let me know what you think of it—whether you like it or not or, more importantly, what logic makes sense and what doesn’t. 🙂


The wavefunction in a medium: amplitudes as signals

We finally did what we wanted to do for a while already: we produced a paper on the meaning of the wavefunction and wave equations in the context of an atomic lattice (think of a conductor or a semiconductor here). Unsurprisingly, we came to the following conclusions:

1. The concept of the matter-wave traveling through the vacuum, an atomic lattice or any medium can be equated to the concept of an electric or electromagnetic signal traveling through the same medium.

2. There is no need to model the matter-wave as a wave packet: a single wave – with a precise frequency and a precise wavelength – will do.

3. If we do want to model the matter-wave as a wave packet rather than a single wave with a precisely defined frequency and wavelength, then the uncertainty in such wave packet reflects our own limited knowledge about the momentum and/or the velocity of the particle that we think we are representing. The uncertainty is, therefore, not inherent to Nature, but to our limited knowledge about the initial conditions or, what amounts to the same, what happened to the particle(s) in the past.

4. The fact that such wave packets usually dissipate very rapidly, reflects that even our limited knowledge about initial conditions tends to become equally rapidly irrelevant. Indeed, as Feynman puts it, “the tiniest irregularities tend to get magnified very quickly” at the micro-scale.

In short, as Hendrik Antoon Lorentz noted a few months before his demise, there is, effectively, no reason whatsoever “to elevate indeterminism to a philosophical principle.” Quantum mechanics is just what it should be: common-sense physics.

The paper confirms intuitions we had highlighted in previous papers already, but uses the formalism of quantum mechanics itself to demonstrate this.

PS: We put the paper on academia.edu and ResearchGate as well, but Phil Gibbs’ site has easy access (no log-in or membership required). Long live Phil Gibbs!

Louis de Broglie’s mistake

So, yes, where did he go wrong? We wrote a paper with a brief history of quantum-mechanical ideas, focusing on some of the well-known contributions of great minds – including the ones we already talked about in previous posts – to the Solvay Conferences.

We hope to show there was nothing inevitable about the new physics winning out. In fact, we suggest modern-day physicists may usefully go back to some of the old ideas – most notably the idea that elementary particles do have some shape and size – and that they should, perhaps, try somewhat harder to explain intrinsic properties of these particles, such as angular momentum and their magnetic moment, in terms of classical physics.

The contributions which we discuss are those of Ernest Rutherford, Joseph Larmor, Hendrik Antoon Lorentz and, yes, Louis de Broglie. However, we also singled out Louis de Broglie’s ideas in a more comprehensive but also more technical paper on de Broglie’s wavelength, elementary particles, the wavefunction and relativity.

Enjoy!

PS: ResearchGate accepted my request to join the ‘research community’ as an independent researcher, so you can also find us on ResearchGate now. In fact, if you want to look at our core papers only, just go there! 🙂


Rutherford’s idea of an electron

Pre-scriptum (dated 27 June 2020): Two illustrations in this post were deleted by the dark force. We will not substitute them. The reference is given and it will help you to look them up yourself. In fact, we think it will greatly advance your understanding if you do so. Mr. Gottlieb may actually have done us a favor by trying to pester us.

Electrons, atoms, elementary particles and wave equations

The New Zealander Ernest Rutherford came to be known as the father of nuclear physics. He was the first to provide a reliable estimate of the order of magnitude of the size of the nucleus. To be precise, in the 1921 paper which we will discuss here, he came up with an estimate of about 15 fm for massive nuclei, which is the current estimate for the size of a uranium nucleus. His experiments also helped to significantly enhance the Bohr model of an atom, culminating – just before WW I started – in the Bohr-Rutherford model of an atom (E. Rutherford, Phil. Mag. 27, 488).

The Bohr-Rutherford model of an atom explained the (gross structure of the) hydrogen spectrum perfectly well, but it could not explain its finer structure—read: the orbital sub-shells which, as we all know now (but which was not well understood at the time), result from the different states of angular momentum of an electron and the associated magnetic moment.

The issue is probably best illustrated by the two diagrams below, which I copied from Feynman’s Lectures. As you can see, the idea of subshells is not very relevant when looking at the gross structure of the hydrogen spectrum because the energy levels of all subshells are (very nearly) the same. However, the Bohr model of an atom—which is nothing but an exceedingly simple application of the E = h·f equation (see p. 4-6 of my paper on classical quantum physics)—cannot explain the splitting of lines for a lithium atom, which is shown in the diagram on the right. Nor can it explain the splitting of spectral lines when we apply a stronger or weaker magnetic field while exciting the atoms so as to induce emission of electromagnetic radiation.
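For the reader who wants to see just how simple the Bohr model is, here is the calculation the paragraph above alludes to. This is the standard textbook derivation (not taken from my paper itself): quantizing the angular momentum gives the energy levels, and the E = h·f equation then gives the gross hydrogen spectrum.

```latex
% Bohr model: quantizing angular momentum (L = n*hbar) and balancing the
% Coulomb force against the centripetal force yields the energy levels:
E_n = -\frac{m_e e^4}{8 \varepsilon_0^2 h^2}\cdot\frac{1}{n^2} \approx -\frac{13.6\ \text{eV}}{n^2}
% A transition between two levels emits a photon whose energy is E = h f:
h f = E_m - E_n = 13.6\ \text{eV}\left(\frac{1}{n^2} - \frac{1}{m^2}\right)
% For n = 2, m = 3, this gives about 1.9 eV, i.e. the red H-alpha line
% (~656 nm). The gross structure follows, but nothing in these formulas
% distinguishes the orbital sub-shells.
```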

Schrödinger’s wave equation solves that problem—which is why Feynman and other modern physicists claim this equation is “the most dramatic success in the history of the quantum mechanics” or, more modestly, a “key result in quantum mechanics” at least!

Such dramatic statements are exaggerated. First, an even finer analysis of the emission spectrum (of hydrogen or whatever other atom) reveals that Schrödinger’s wave equation is also incomplete: the hyperfine splitting, the Zeeman splitting (anomalous or not) or the (in)famous Lamb shift are to be explained not only in terms of the magnetic moment of the electron but also in terms of the magnetic moment of the nucleus and its constituents (protons and neutrons)—or of the coupling between those magnetic moments (we may refer to our paper on the Lamb shift here). This cannot be captured in a wave equation: second-order differential equations are – quite simply – not sophisticated enough to capture the complexity of the atomic system here.

Also, as we pointed out previously, the current convention in regard to the use of the imaginary unit (i) in the wavefunction does not capture the spin direction and, therefore, abstracts away the direction of the magnetic moment too! The wavefunction therefore models theoretical spin-zero particles, which do not exist. In short, we cannot hope to represent anything real with wave equations and wavefunctions.

More importantly, I would dare to ask this: what use is an ‘explanation’ in terms of a wave equation if we cannot explain what the wave equation actually represents? As Feynman famously writes: “Where did we get it from? Nowhere. It’s not possible to derive it from anything you know. It came out of the mind of Schrödinger, invented in his struggle to find an understanding of the experimental observations of the real world.” Our best guess is that it, somehow, models (the local diffusion of) energy or mass densities as well as non-spherical orbital geometries. We explored such interpretations in our very first paper(s) on quantum mechanics, but the truth is this: we do not think wave equations are suitable mathematical tools to describe simple or complex systems that have some internal structure—atoms (think of Schrödinger’s wave equation here), electrons (think of Dirac’s wave equation), or protons (which is what some others tried to do, but I will let you do some googling here yourself).

We need to get back to the matter at hand here, which is Rutherford’s idea of an electron back in 1921. What can we say about it?

Rutherford’s contributions to the 1921 Solvay Conference

From what you know, and from what I write above, you will understand that Rutherford’s research focus was not on electrons: his prime interest was in explaining the atomic structure and in solving the mysteries of nuclear radiation—most notably the emission of alpha- and beta-particles as well as highly energetic gamma-rays by unstable or radioactive nuclei. In short, the nature of the electron was not his prime interest. However, this intellectual giant was, of course, very much interested in whatever experiment or whatever theory that might contribute to his thinking, and that explains why, in his contribution to the 1921 Solvay Conference—which materialized as an update of his seminal 1914 paper on The Structure of the Atom—he devotes considerable attention to Arthur Compton’s work on the scattering of light from electrons which, at the time (1921), had not even been published yet (Compton’s seminal article on (Compton) scattering would only be published in 1923).

It is also very interesting that, in the very same 1921 paper—which, at some 30 pages, is several times the length of his 1914 article and its later revisions (see, for example, the 1920 version of it, which actually has wider circulation on the Internet)—Rutherford also offers some short reflections on the magnetic properties of electrons while referring to Parson’s ring current model which, in French, he refers to as “l’électron annulaire de Parson.” It is rather strange that we should have to translate Rutherford’s 1921 remarks back into English—as we are sure the original paper must have been translated from English into French rather than the other way around.

However, it is what it is, and so here we do what we have to do: we give you a free translation of Rutherford’s remarks during the 1921 Solvay Conference on the state of research regarding the electron at that time. The reader should note that these remarks are buried in a larger piece on the emission of β particles by radioactive nuclei which, as it turns out, are nothing but high-energy electrons (or their anti-matter counterpart—positrons). In fact, we should—before we proceed—draw attention to the fact that physicists at the time had no clear notion of the concepts of protons and neutrons.

This is, indeed, another remarkable historical contribution of the 1921 Solvay Conference because, as far as I know, it is the first time Rutherford talks about the neutron hypothesis. Quite remarkably, he does not advance the neutron hypothesis to explain the atomic mass of atoms as combining what we now think of as protons and neutrons (Rutherford regularly talks of a mix of ‘positive and negative electrons’ in the nucleus—neither the term ‘proton’ nor ‘neutron’ was in use at the time), but as part of a possible explanation of nuclear fusion reactions in stars or stellar nebulae. It came as his response, during the discussion of his paper, to a question on the possibility of nuclear synthesis in stars or nebulae from the French physicist Jean Baptiste Perrin who, independently of the American chemist William Draper Harkins, had proposed the possibility of hydrogen fusion a couple of years earlier (1919):

“We can, in fact, think of enormous energies being released from hydrogen nuclei merging to form helium—much larger energies than what can come from the Kelvin-Helmholtz mechanism. I have been thinking that the hydrogen in the nebulae might come from particles which we may refer to as ‘neutrons’: these would consist of a positive nucleus with an electron at an exceedingly small distance (“un noyau positif avec un électron à toute petite distance”). These would mediate the assembly of the nuclei of more massive elements. It is, otherwise, difficult to understand how the positively charged particles could come together against the repulsive force that pushes them apart—unless we would envisage they are driven by enormous velocities.”

We may add that, just to make sure everyone got this right, Rutherford is immediately asked to elaborate his point by the Danish physicist Martin Knudsen: “What’s the difference between a hydrogen atom and this neutron?”—which Rutherford simply answers as follows: “In a neutron, the electron would be very much closer to the nucleus.” In light of the fact that it was only in 1932 that James Chadwick would experimentally prove the existence of neutrons, we are, once again, deeply impressed by the foresight of Rutherford and the other pioneers here: the predictive power of their theories and ideas is, effectively, truly amazing by any standard—including today’s. I should, perhaps, also add that I fully subscribe to Rutherford’s intuition that a neutron should be a composite particle consisting of a proton and an electron—but that is a different discussion altogether.
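Rutherford’s intuition about “enormous energies” is easy to quantify with a back-of-the-envelope mass-defect calculation. This is my own illustration using modern mass values (it neglects the positrons and neutrinos of the actual proton-proton chain, so it is a sketch of the energy scale, not of the full reaction):

```python
# Mass defect of the net reaction 4 p -> He-4 nucleus, in atomic mass
# units (u). One u corresponds to 931.494 MeV of rest energy.
M_PROTON = 1.007276   # u, proton mass
M_ALPHA = 4.001506    # u, He-4 nucleus (alpha particle) mass
U_TO_MEV = 931.494    # MeV per atomic mass unit

mass_defect = 4 * M_PROTON - M_ALPHA          # u, ~0.7% of the rest mass
energy_released = mass_defect * U_TO_MEV      # MeV

# Roughly 26 MeV per helium nucleus formed: vastly more, per unit of
# mass, than gravitational contraction (Kelvin-Helmholtz) can supply.
print(f"{energy_released:.1f} MeV")
```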

We must come back to the topic of this post, which we will do now. Before we proceed, however, we should highlight one other contextual piece of information here: at the time, very little was known about the nature of α and β particles. We now know that beta-particles are electrons, and that alpha-particles combine two protons and two neutrons. That was not known in the 1920s, however: Rutherford and his associates could basically only see positive or negative particles coming out of these radioactive processes. This further underscores how much knowledge they were able to gain from rather limited sets of data.

Rutherford’s idea of an electron in 1921

So here is the translation of some crucial text. Needless to say, the italics, boldface and additions between [brackets] are not Rutherford’s but mine, of course.

“We may think the same laws should apply in regard to the scattering [“diffusion”] of α and β particles. [Note: Rutherford noted, earlier in his paper, that, based on the scattering patterns and other evidence, the force around the nucleus must respect the inverse-square law near the nucleus—and even very near to it.] However, we see marked differences. Anyone who has carefully studied the trajectories [photographs from the Wilson cloud chamber] of beta-particles will note the trajectories show a regular curvature. Such curved trajectories are even more obvious when they are illuminated by X-rays. Indeed, A.H. Compton noted that these trajectories seem to end in a converging helical path turning right or left. To explain this, Compton assumes the electron acts like a magnetic dipole whose axis is more or less fixed, and that the curvature of its path is caused by the magnetic field [from the (paramagnetic) materials that are used].

Further examination would be needed to make sure this curvature is not some coincidence, but the general impression is that the hypothesis may be quite right. We also see similar curvature and helicity with α particles in the last millimeters of their trajectories. [Note: α-particles are, obviously, also charged particles but we think Rutherford’s remark in regard to α particles also following a curved or helical path must be exaggerated: the order of magnitude of the magnetic moment of protons and neutrons is much smaller and, in any case, they tend to cancel each other out. Also, because of the rather enormous mass of α particles (read: helium nuclei) as compared to electrons, the effect would probably not be visible in a Wilson cloud chamber.]

The idea that an electron has magnetic properties is still sketchy and we would need new and more conclusive experiments before accepting it as a scientific fact. However, it would surely be natural to assume its magnetic properties would result from a rotation of the electron. Parson’s ring electron model [“électron annulaire”] was specifically imagined to incorporate such magnetic polarity [“polarité magnétique”].

A very interesting question here would be to wonder whether such rotation would be some intrinsic property of the electron or if it would just result from the rotation of the electron in its atomic orbital around the nucleus. Indeed, James Jeans usefully reminded me any asymmetry in an electron should result in it rotating around its own axis at the same frequency as its orbital rotation. [Note: The reader can easily imagine this: think of an asymmetric object going around in a circle and returning to its original position. In order to return to the same orientation, it must rotate around its own axis one time too!]

We should also wonder if an electron might acquire some rotational motion from being accelerated in an electric field and if such rotation, once acquired, would persist when decelerating in an(other) electric field or when passing through matter. If so, some of the properties of electrons would, to some extent, depend on their past.”

Each and every sentence in these very brief remarks is wonderfully consistent with modern-day modelling of electron behavior. Non-mainstream modeling, we should add—but the qualifier is superfluous, because mainstream physicists stubbornly continue to pretend electrons have no internal structure, nor any physical dimension. In light of the numerous experimental measurements of the effective charge radius as well as of the dimensions of the physical space in which photons effectively interfere with electrons, such mainstream assumptions seem completely ridiculous. However, such is the sad state of physics today.

Thinking backward and forward

We think that it is pretty obvious that Rutherford and others would have been able to adapt their model of an atom to better incorporate the magnetic properties not only of electrons but also of the nucleus and its constituents (protons and neutrons). Unfortunately, scientists at the time seem to have been swept away by the charisma of Bohr, Heisenberg and others, as well as by the mathematical brilliance of the likes of Sommerfeld, Dirac, and Pauli.

The road that was taken then has not led us very far. We concur with Oliver Consa’s scathing but essentially correct appraisal of the current sorry state of physics:

“QED should be the quantized version of Maxwell’s laws, but it is not that at all. QED is a simple addition to quantum mechanics that attempts to justify two experimental discrepancies in the Dirac equation: the Lamb shift and the anomalous magnetic moment of the electron. The reality is that QED is a bunch of fudge factors, numerology, ignored infinities, hocus-pocus, manipulated calculations, illegitimate mathematics, incomprehensible theories, hidden data, biased experiments, miscalculations, suspicious coincidences, lies, arbitrary substitutions of infinite values and budgets of 600 million dollars to continue the game. Maybe it is time to consider alternative proposals. Winter is coming.”

I would suggest we just go back to where we went wrong: it may be warmer there, and thinking both backward and forward must, in any case, be a much more powerful problem-solving technique than relying only on expert guesses as to which linear differential equation(s) might give us some S-matrix linking all likely or possible initial and final states of some system or process. 🙂

Post scriptum: The sad state of physics is, of course, not limited to quantum electrodynamics only. We were briefly in touch with the PRad experimenters who put an end to the rather ridiculous ‘proton radius puzzle’ by re-confirming the previously established 0.83-0.84 range for the effective charge radius of a proton: we sent them our own classical back-of-the-envelope calculation of the Compton scattering radius of a proton based on the ring current model (see p. 15-16 of our paper on classical physics), which is in agreement with these measurements and courteously asked what alternative theories they were suggesting. Their spokesman replied equally courteously:

“There is no any theoretical prediction in QCD. Lattice [theorists] are trying to come up [with something] but that will take another decade before any reasonable number [may come] from them.”

This e-mail exchange goes back to early February 2020. There has been no news since. One wonders if there is actually any real interest in solving puzzles. The physicist who wrote the above may have been nominated for a Nobel Prize in Physics—I surely hope so because, in contrast to some others, he and his team surely deserve one—but I think it is rather incongruous to finally firmly establish the size of a proton while, at the same time, admitting that protons should not have any size according to mainstream theory—and we are talking about the respected QCD sector of the equally respected Standard Model here!
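For what it is worth, here is a sketch of the kind of back-of-the-envelope calculation referred to above. This is my reconstruction rather than a verbatim copy of the paper: the factor of 4 is the ring-current scaling as I read it there, so treat it as an assumption.

```python
# Reduced Compton wavelength of the proton: hbar / (m_p * c), computed
# from hbar*c = 197.327 MeV*fm and the proton rest energy m_p*c^2.
HBAR_C = 197.327      # MeV * fm
M_P_C2 = 938.272      # MeV, proton rest energy

compton_radius = HBAR_C / M_P_C2      # fm, about 0.21 fm
ring_radius = 4 * compton_radius      # fm, ring-current scaling (assumed factor)

# The result lands right at the measured 0.83-0.84 fm range for the
# proton's effective charge radius.
print(f"{ring_radius:.3f} fm")
```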

We understand, of course! As Freddy Mercury famously sang: The Show Must Go On.

The self-appointed science gurus

Sean Carroll recently tweeted this:

[Screenshot of Sean Carroll’s tweet]

I couldn’t help giving him a straight answer. I actually like Sean Carroll, but I hate how he and others – think of John Gribbin, for example – appoint themselves as the only ‘gurus’ who are entitled to say something about grand theories or other ‘big ideas’: everyone else (read: all non-believers in QFT) is casually dismissed as a ‘crackpot scientist’.

In fact, a few weeks earlier, he had sent out a tweet promoting his own ideas on the next ‘big ideas’, so I couldn’t help reminding him of the tweet above. 🙂

[Screenshot of Sean Carroll’s next tweet]

This is funny, and then it isn’t. The facts are this:

  1. The ‘new physics’ – the quantum revolution – started almost 100 years ago but has still not answered many fundamental questions (think of explaining spin and other intrinsic properties of matter-particles here).
  2. Geniuses like Einstein, Lorentz, Dirac and even Bell had serious doubts about the approach.
  3. Historical research shows theories and scientists were severely biased: see Dr. Consa’s review of quantum field theory in this regard.

I am very sorry, Dr. Carroll. You are much smarter than most – and surely much smarter than me – but here you show you are also plain arrogant. :-/ It’s this arrogance that has prevented a creative way out of the mess that fundamental physics finds itself in today. If you find yourself in a hole, stop digging!

The last words of H.A. Lorentz

I talked about the Solvay Conferences in my previous post(s). The Solvay Conference proceedings are a real treasure trove. Not only are they very pleasant to read, but they also debunk more than one myth or mystery in quantum physics!

It is part of scientific lore, for example, that the 1927 Solvay Conference was a sort of battlefield on new physics between Heisenberg and Einstein. Surprisingly, the papers and write-up of discussions reveal that Einstein hardly intervened. They also reveal that ‘battlefield stories’ such as Heisenberg telling Einstein to “stop telling God what to do” or – vice versa – Einstein declaring “God doesn’t play dice” are what they are: plain gossip or popular hearsay. Neither Heisenberg nor Einstein ever said that—at least not at the 1927 Solvay Conference! Instead, we see very nuanced and very deep philosophical statements—on both sides of the so-called ‘divide’ or ‘schism’.

From all the interventions, the intervention of the Dutch scientist Hendrik Antoon Lorentz stands out. The proceedings are in French and I know (most of) my readers don’t read French, so I give you my own free translation of it below.

It is all very weird, emotional and historical. H.A. Lorentz – clearly the driving force behind those pre-WW II Solvay Conferences – died a few months after the 1927 Conference. In fact, the 1927 conference proceedings contain both the sad announcement of his demise as well as his interventions—such was the practice of actually physically printing stuff at the time.

Here it is:

GENERAL DISCUSSION OF THE NEW IDEAS PRESENTED.

Causality, Determinism, Probability.

Intervention of Mr. Lorentz:

“I would like to draw attention to the difficulties one encounters in the old theories. We want to form a representation of the phenomena, to form an image of them in our mind. Up to now, we have always wanted to form these images by means of the ordinary notions of space and time. These notions are perhaps innate; in any case, they have developed through our personal experience, through our daily observations. For me, these notions are clear, and I confess that I cannot form any idea of physics without them. The image I want to form of the phenomena must be absolutely sharp and definite, and it seems to me that we can only form such an image within this system of space and time.

For me, an electron is a corpuscle which, at a given instant, is at a determinate point in space, and if I have the idea that at a following moment this corpuscle is somewhere else, I must think of its trajectory, which is a line in space. And if this electron meets an atom and penetrates it, and after several adventures leaves that atom, I forge myself a theory in which this electron preserves its individuality; that is to say, I imagine a line along which this electron passes through that atom. It may well be, obviously, that this theory would be very difficult to develop, but a priori it does not seem impossible to me.

I imagine that, in the new theory, we still have these electrons. It is possible, obviously, that in the new theory, once well developed, it would be necessary to suppose that these electrons undergo transformations. I am quite willing to admit that the electron dissolves into a cloud. But then I would try to find out on what occasion this transformation occurs. If one wanted to forbid me such an inquiry by invoking a principle, that would trouble me very much. It seems to me that we may always hope to do later what we cannot yet do at this moment. Even if we abandon the old ideas, we can always keep the old denominations. I would like to preserve that ideal of former times: to describe everything that happens in the world by means of sharp images. I am ready to admit other theories, on condition that one can translate them into clear and sharp images.

For my part, although not yet familiar with the new ideas that I now hear expressed, I could represent these ideas to myself as follows. Let us take the case of an electron that meets an atom; let us suppose that this electron leaves the atom and that, at the same time, a quantum of light is emitted. One must consider, in the first place, the systems of waves that correspond to the electron and to the atom before the collision. After the collision, we will have new systems of waves. These systems of waves can be described by a function ψ defined in a space of a great number of dimensions, which satisfies a differential equation. The new wave mechanics will operate with this equation and will establish the function ψ before and after the collision.

Now, there are phenomena which teach us that there is something else besides these waves, namely corpuscles; one can, for example, do an experiment with a Faraday cylinder; one must therefore take into account the individuality of electrons, and of photons too. I think I would find that, to explain the phenomena, it suffices to admit that the expression ψψ* gives the probability that these electrons and photons exist in a determinate volume; that would suffice me to explain the experiments.

But the examples given by Mr. Heisenberg teach me that I would thereby have attained everything that experiment allows me to attain. Now, I think that this notion of probability should come at the end, and as a conclusion, of the theoretical considerations—not as an a priori axiom, though I am quite willing to admit that this indeterminacy corresponds to the experimental possibilities. I could always keep my deterministic faith for the fundamental phenomena, of which I have not spoken. Could a more profound mind not become aware of the motions of these electrons? Could one not keep determinism by making it the object of a belief? Must we necessarily elevate indeterminism to a principle?”

I added the bold italics above. A free translation of this phrase is this:

Why should we elevate determinism or – as Born and Heisenberg do – its opposite (indeterminism) to a philosophical principle?

What a beautiful statement! Lorentz died of a very trivial cause: erysipelas, commonly known as St Anthony’s fire. :-/

Where things went wrong, exactly!

As mentioned in my previous post, Oliver Consa traces all of the nonsense in modern physics back to the Shelter Island (1947), Pocono (1948) and Oldstone (1949) Conferences. However, the first Solvay Conference that was organized after WW II was quite significant too. Niels Bohr and Robert Oppenheimer pretty much dominated it. Bohr did so by providing the introductory lecture ‘On the Notions of Causality and Complementarity’, while Oppenheimer’s ‘Electron Theory’ set the tone for subsequent Solvay Conferences—most notably the one that would consecrate quantum field theory (QFT), which was held 13 years later (1961).

Indeed, the discussion between Oppenheimer and Dirac on the ‘Electron Theory’ paper in 1948 seems to be where things might have gone wrong—in terms of the ‘genealogy’ or ‘archaeology’ of modern ideas, so to speak. In fact, both Oppenheimer and Dirac made rather historic blunders there:

  1. Oppenheimer used perturbation theory to arrive at some kind of ‘new’ model of an electron, based on Schwinger’s new QFT models—which, as we now know, do not really lead anywhere.
  2. Dirac, however, was just too stubborn: he simply kept defending his indefensible electron equation—which, of course, also doesn’t lead anywhere. [It is rather significant that he was no longer invited to the next Solvay Conference.]

It is, indeed, very weird that Dirac does not follow through on his own conclusion: “Only a small part of the wave function has a physical meaning. We now have the problem of picking out that very small physical part of the exact solution of the wave equation.”

It’s the ring current or Zitterbewegung electron, of course: the one ‘trivial’ solution which he thought was so significant in his 1933 Nobel Prize lecture. The other parts of the solution are, effectively, bizarre oscillations, which he referred to as ‘run-away electrons’.

It’s nice to sort of ‘get’ this. 🙂

Tracing good and bad ideas

Today I decided to look for the original Solvay Conference papers, which were digitized by the libraries of the Free University of Brussels: here is the link to them. I quickly went through the famous 1927 and 1930 Conferences (Einstein did not attend the 1933 Conference, nor the 1921 Conference) but, to my great consternation, I found no trace of those so-called ‘heated discussions’ between Heisenberg and Einstein.

A few critical questions here and there, yes, but I don’t see anything even vaguely resembling an ‘ardent debate’ or a so-called ‘Bohr-Einstein controversy’. Am I mistaken—or am I missing something?

The fact that it’s all in French is quite interesting, and may explain why Einstein’s interventions are rare (I am not sure of the language that was used: the physicists then were multilingual, weren’t they?). The remarks of the French physicist Léon Brillouin, for example, are quite interesting but not widely known, it seems.

Funny remarks like Heisenberg telling Einstein ‘to stop telling God what to do’ are surely not there! Are they folklore? Would anyone know whether these remarks are documented somewhere? I am just trying to trace those historical moments in the evolution of thought and science… 🙂

Things like this make me think a great deal of the ‘controversy’ between old (classical) and new (quantum) physics is actually just hype rather than reality. One of my readers sent me this link to a very interesting article in the LA Times in this regard. It’s a quick but very worthwhile read, showing that physics is not the only field that suffers from ‘the need to sell’ real or non-existent results: here is the link—have a look!

In fact, I realize I am still looking for some kind of purpose for my new site. Perhaps I should dedicate it to research like this—separating fact from fiction in the history of ideas?

PS: I just checked the Wikipedia article on Heisenberg’s quotes and it seems Heisenberg’s “stop telling God what to do” is, effectively, disputed! Interesting but – in light of its frequent use – also quite shocking, I would think.

PS 2: I jotted down the following based on a very quick scan of these Solvay Conferences:

Dr. Oliver Consa starts his scathing history of the sorry state of modern-day physics as follows:

“After the end of World War II, American physicists organized a series of three transcendent conferences for the development of modern physics: Shelter Island (1947), Pocono (1948) and Oldstone (1949). These conferences were intended to be a continuation of the mythical Solvay conferences. But, after World War II, the world had changed. The launch of the atomic bombs in Hiroshima and Nagasaki (1945), followed by the immediate surrender of Japan, made the Manhattan Project scientists true war heroes. Physicists were no longer a group of harmless intellectuals; they had become the powerful holders of the secrets of the atomic bomb.”[1]

Secrets that could not be kept, of course. The gatekeepers did their best, however. Julius Robert Oppenheimer was, effectively, one of them. The history of Oppenheimer – father of the atomic bomb and prominent pacifist at the same time – is well known.

It is actually quite interesting to note that the Solvay Conferences continued after WW II and that Niels Bohr and Robert Oppenheimer pretty much dominated the very first post-WW II Solvay Conference, which was held in 1948. Bohr did so by providing the introductory lecture ‘On the Notions of Causality and Complementarity’[2], while Oppenheimer’s ‘Electron Theory’ set the tone for subsequent Solvay Conferences—most notably the one that would consecrate quantum field theory (QFT), which was held 13 years later (1961).[3]

Significantly, Paul Dirac was pretty much the only one asking Oppenheimer critical questions. As for Albert Einstein, I find it rather strange that – despite his being a member of the scientific committee[4] – he hardly intervened in the discussions. It makes me think he had actually lost interest in the development of quantum theory.

Even more significant is the fact that Dirac was neither invited to nor even mentioned at the 1951 Solvay Conference.

[1] Oliver Consa, Something is rotten in the state of QED, February 2020.

[2] See the 1948 Solvay Conference report on the ULB’s digital archives.

[3] Institut international de physique Solvay (1962). La thĂ©orie quantique des champs: douziĂšme Conseil de physique, tenu Ă  l’UniversitĂ© libre de Bruxelles du 9 au 14 octobre 1961.

[4] Einstein was a member of the Solvay scientific committee from the very first conference (1911) – representing, in typical style, a country (Austria, not Germany) rather than an institution or just being a member in some personal capacity – till 1948. He was not a member of the 1951 scientific committee. The reason might well be age or a lack of interest, of course: Einstein was 72 years old in 1951, and would die four years later (1955).