The concept of a field

I ended my post on particles as spacetime oscillations by saying I should probably write something about the concept of a field too, and about why and how academic physicists abuse it so often. So I did that, but it became a rather lengthy paper, and so I will refer you to Phil Gibbs’ site, where I post such stuff. Here is the link. Let me know what you think of it.

As for how it fits in with the rest of my writing, I already jokingly rewrote two of Feynman’s introductory Lectures on quantum mechanics (see: Quantum Behavior and Probability Amplitudes). I consider this paper to be the third. 🙂

Post scriptum: Now that I am talking about Richard Feynman – again! – I should add that I really think of him as a weird character. I think he himself got caught up in that image of the ‘Great Teacher’ while, at the same time (and, surely, as a Nobel laureate), he also had to be seen to be a ‘Great Guru.’ Read: a Great Promoter of the ‘Grand Mystery of Quantum Mechanics’ – while he probably knew classical electromagnetism combined with the Planck-Einstein relation can explain it all… Indeed, his lecture on superconductivity starts off as an incoherent ensemble of ‘rocket science’ pieces, only to – in the very last paragraphs – manipulate Schrödinger’s equation (and a few others) to show superconducting currents are just what you would expect in a superconducting fluid. Let me quote him:

“Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields.”

So… Well… Looks like he too is all about impressing people with ‘rocket science models’ first, and then he simplifies it all to… Well… Something simple. 😊

Having said that, I still like Feynman more than modern science gurus, because the latter usually don’t get to the simplifying part. :-/

A new book?

I don’t know where I would start a new story on physics. I am also not quite sure for whom I would be writing it – although it would be for people like me, obviously: most of what we do, we do for ourselves, right? So I should probably describe myself in order to describe the audience: amateur physicists who are interested in the epistemology of modern physics – or its ontology, or its metaphysics. I also talk about the genealogy or archaeology of ideas on my ResearchGate site. All these words have (slightly) different meanings but the distinctions do not matter all that much. The point is this: I write for people who want to understand physics in pretty much the same way as the great classical physicist Hendrik Antoon Lorentz who, just a few months before his demise, on the occasion of the (in)famous 1927 Solvay Conference, wanted to understand the ‘new theories’:

“We are representing phenomena. We try to form an image of them in our mind. Till now, we always tried to do so using the ordinary notions of space and time. These notions may be innate; they result, in any case, from our personal experience, from our daily observations. To me, these notions are clear, and I admit I am not able to have any idea about physics without those notions. The image I want to have when thinking about physical phenomena has to be clear and well defined, and it seems to me that cannot be done without these notions of a system defined in space and in time.”

Note that H.A. Lorentz understood electromagnetism and relativity theory as few others did. In fact, judging from some of the crap out there, I can safely say he understood this stuff as few others still do today. Hence, he should surely not be thought of as a classical physicist who, somehow, was stuck. On the contrary: he understood the ‘new theories’ better than many of the new theorists themselves. In fact, as far as I am concerned, I think his comments or conclusions on the epistemological status of the Uncertainty Principle – which he made in the same intervention – still stand. Let me quote the original French:

“Je pense que cette notion de probabilité [in the new theories] serait à mettre à la fin, et comme conclusion, des considérations théoriques, et non pas comme axiome a priori, quoique je veuille bien admettre que cette indétermination correspond aux possibilités expérimentales. Je pourrais toujours garder ma foi déterministe pour les phénomènes fondamentaux, dont je n’ai pas parlé. Est-ce qu’un esprit plus profond ne pourrait pas se rendre compte des mouvements de ces électrons? Ne pourrait-on pas garder le déterminisme en en faisant l’objet d’une croyance? Faut-il nécessairement ériger l’indéterminisme en principe?”

In English: “I think this notion of probability [in the new theories] should be put at the end, and as a conclusion, of the theoretical considerations – not as an a priori axiom – though I am quite willing to admit that this indeterminacy corresponds to the experimental possibilities. I could always keep my deterministic faith for the fundamental phenomena, of which I have not spoken. Could a deeper mind not become aware of the motions of these electrons? Could one not keep determinism by making it an object of belief? Must one necessarily elevate indeterminism to a principle?”

What a beautiful statement, isn’t it? Why should we elevate indeterminism to a philosophical principle? Indeed, now that I’ve inserted some French, I may as well inject some German. The idea of a particle includes the idea of a more or less well-known position. Let us be specific and think of uncertainty in the context of position. We may not fully know the position of a particle for one or more of the following reasons:

  1. The precision of our measurements may be limited: this is what Heisenberg referred to as an Ungenauigkeit.
  2. Our measurement might disturb the position and, as such, cause the information to get lost and, as a result, introduce an uncertainty: this is what we may translate as an Unbestimmtheit.
  3. The uncertainty may be inherent to Nature, in which case we should probably refer to it as an Ungewissheit.

So what is the case? Lorentz claims it is either the first or the second – or a combination of both – and that the third proposition is a philosophical statement which we can neither prove nor disprove. I cannot see anything logical (theory) or practical (experiment) that would invalidate this point. I, therefore, intend to write a basic book on quantum physics from what I hope would be Lorentz’ or Einstein’s point of view.

My detractors will immediately cry foul: Einstein lost the discussions with Bohr, didn’t he? I do not think so: he just got tired of them. I want to try to pick up the story where he left it. Let’s see how far I get. 🙂

Particles as spacetime oscillations

My very first publication on Phil Gibbs’ site – The Quantum-Mechanical Wavefunction as a Gravitational Wave – reached 500+ downloads. I find that weird, because I warn the reader in the comments section that some of these early ideas do not make sense. Indeed, while my idea of modelling an electron as a two-dimensional oscillation has not changed, the essence of the model has. My theory of matter is based on the idea of a naked charge – with zero rest mass – orbiting around some center, and the energy in its motion – a perpetual current ring, really – is what gives matter its (equivalent) mass. That is Wheeler’s idea of ‘mass without mass’. The force is, therefore, definitely not gravitational.

It cannot be: the force has to grab onto something, and all it can grab onto is the naked charge. The force must, therefore, be electromagnetic. So I now look at that very first paper as an immature essay. However, I leave it there because it does ask all of the right questions, and I should probably revisit it – because the questions I get on my latest paper on the subject – De Broglie’s Matter-Wave: Concept and Issues, which gets much more attention on ResearchGate than on Phil Gibbs’ site (so it is more serious, perhaps) – are quite similar to the ones I try to answer in that very first paper: what is the true nature of the matter-wave? What is that fundamental oscillation?

I have been thinking about this for many years now, and I may never be able to give a definite answer to the question, but last night some thoughts came to me that may or may not make sense. And so, to be able to determine whether they might, I thought I should write them down. So that is what I am going to do here, and you should not take it very seriously. If anything, these thoughts may help you to find some answers for yourself. So if you feel like switching off because I am getting too philosophical, please do: I myself wonder how useful it is to try to interpret equations and, hence, to write about what I am going to write about here – so I do not mind at all if you do!

That is too much already as an introduction, so let us get started. One of my more obvious reflections yesterday was this: the nature of the matter-wave is not gravitational, but it is an oscillation in space and in time. As such, we may think of it as a spacetime oscillation. In any case, physicists often talk about spacetime oscillations without any clear idea of what they actually mean by it, so we may as well try to clarify it in this very particular context here: the explanation of matter in terms of an oscillating pointlike charge. Indeed, the first obvious point to make is that any such perpetual motion may effectively be said to be a spacetime oscillation: it is an oscillation in space – and in time, right?

As such, a planet orbiting some star – think of the Earth orbiting our Sun – may be thought of as a spacetime oscillation too! Am I joking? No, I am not. Let me elaborate on this idea. The concept of a spacetime oscillation implies we think of space as something physical, as having an essence of sorts. We talk of a spacetime fabric, a (relativistic) aether or whatever other term comes to mind. The Wikipedia article on aether theories quotes Robert B. Laughlin as follows in this regard: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed [..] The word ‘ether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum.”

I disagree with that. I do not think about the vacuum in such terms: the vacuum is the Cartesian mathematical 3D space in which we imagine stuff to exist. We should not endow this mathematical space with any physical qualities – with some essence. Mathematical concepts are mathematical concepts only. It is the difference between size and distance. Size is physical: an electron – any physical object, really – has a size. But the distance between two points is a mathematical concept only.

The confusion arises from us expressing both in terms of the physical distance unit: a meter, or a pico- or femtometer – whatever is appropriate for the scale of the things that we are looking at. So it is the same thing when we talk about a point: we need to distinguish a physical point – think of our pointlike charge here – and a mathematical point. That should be the key to understanding matter-particles as spacetime oscillations – if we would want to understand them as such, that is – which is what we are trying to do here. So how should we think of this? Let us start with matter-particles. In our realist interpretation of physics, we think of matter-particles as consisting of charge – in contrast to, say, photons, the particles of light, which (also) carry energy but no charge. Let us consider the electron, because the structure of the proton is very different and may involve a different force: a strong force – as opposed to the electromagnetic force that we are so familiar with. Let me use an animated gif from the Wikipedia Commons repository to recapture the idea of such (two-dimensional) oscillation.

Think of the green dot as the pointlike charge: it is a physical point moving in a mathematical space – a simple 2D plane, in this case. So it goes from here to there, and here and there are two mathematical points only: points in the 3D Cartesian space which – as H.A. Lorentz pointed out when criticizing the new theories – is a notion without which we cannot imagine any idea in physics. So we have a spacetime oscillation here alright: an oscillation in space, and in time. Oscillations in space are always oscillations in time, obviously – because the idea of an oscillation implies the idea of motion, and the idea of motion always involves the notion of space as well as the notion of time. So what makes this spacetime oscillation different from, say, the Earth orbiting around the Sun?

Perhaps we should answer this question by pointing out the similarities first. A planet orbiting around the Sun involves perpetual motion too: there is an interplay between kinetic and potential energy, both of which depend on the distance from the center. Indeed, the Earth falls into the Sun, so to speak, and its kinetic energy gets converted into potential energy and vice versa. However, the centripetal force is gravitational, of course. The centripetal force on the pointlike charge is not: there is nothing at the center pulling it. But – hey! – what is pulling our planet, exactly? We do not believe in virtual gravitons traveling up and down between the Sun and the Earth, do we? So the analogy may not be so bad, after all! It is just a very different force: its structure is different, and it acts on something different: a charge versus a mass. That’s it. Nothing more. Nothing less.

Or… Well… Velocities are very different, of course, but even there distinctions are, perhaps, less clear-cut than they appear to be at first. The pointlike charge in our electron has no mass and, therefore, moves at lightspeed. The electron itself, however, acquires mass and, therefore, moves at a fraction of lightspeed only in an atomic or molecular orbital. And much slower in a perpetual current in superconducting material. [Yes. When thinking of electrons in the context of superconduction, we have an added complication: we should think of electron pairs (Cooper pairs) rather than individual electrons, it seems. We are not quite sure what to make of this – except to note electrons will also want to lower their energy by pairing up in atomic or molecular orbitals, and we think the nature of this pairing must, therefore, be the same.]
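To put some numbers on all of this, let me jot down a minimal sketch of what the ring current picture implies for a free electron – assuming, as I do in my papers, that the pointlike charge moves at lightspeed along a circular loop and that the oscillation respects the Planck-Einstein relation:

```latex
% Ring current (Zitterbewegung) sketch for a free electron.
% Assumptions: the pointlike charge has zero rest mass, moves at lightspeed c
% along a circle of radius a, and the oscillation respects E = hbar*omega.
\[ E = m c^2 = \hbar\,\omega, \qquad c = a\,\omega \]
\[ \Rightarrow\quad a = \frac{c}{\omega} = \frac{\hbar}{m c} \approx 0.386\ \mathrm{pm} \]
```

The radius comes out as the (reduced) Compton radius, so the space which the oscillation occupies is not arbitrary: it is fixed by the electron’s energy.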

Did we clarify anything? Maybe. Maybe not. Saying that an electron is a pointlike charge and a two-dimensional oscillation, or saying that it’s a spacetime oscillation itself, appears to be a tautology here, right? Yes. You are right. So what’s the point, then?

We are not sure, except for one thing: when defining particles as spacetime oscillations, we definitely do not need the idea of virtual particles. That’s rubbish: an unnecessary multiplication of concepts. So I think that is some kind of progress we got out of these rather difficult philosophical reflections, and that is useful, I think. To illustrate this point, you may want to think of the concept of heat. When there is heat, there is no empty space. There is no vacuum anymore. When we heat a space, we fill it with photons. They bounce around and get absorbed and re-emitted all of the time. In fact, we, therefore, also need matter to imagine a heated space. Hence, space here is no longer the vacuum: it is full of energy, but this energy is always somewhere – and somewhere specifically: it’s carried by a photon, or (temporarily) stored as an electron orbits around a nucleus in an excited state (which amounts to the same as saying it is being stored by an atom or some molecular structure consisting of atoms). In short, heat is energy but it is being ‘transmitted’ or ‘transported’ through space by photons. Again, the point is that the vacuum itself should not be associated with energy: it is empty. It is a mathematical construct only.

We should try to think this through – even further than we already did – by thinking about how photons – or heat radiation – would disturb perpetual currents: in an atom, obviously (the electron orbitals), but also perpetual superconducting currents at the macro-scale. Unless the added heat from the photons is continuously taken away by the supercooling helium or whatever is used, radiation or heat will literally bounce the electrons into a different physical trajectory. We should, therefore, effectively associate excited energy states with different patterns of motion: a different oscillation, in other words. So it looks like electrons – or electrons in atomic/molecular orbitals – do go from one state into another (excited) state and back again but, in whatever state they are, we should think of them as being in their own space (and time). So that is the nature of particles as spacetime oscillations then, I guess. Can we say anything more about it?

I am not sure. At this moment, I surely have nothing more to say about it. Some more thinking about how superconduction – at the macro-scale – might actually work could, perhaps, shed more light on it: is there an energy transfer between the two electrons in a Cooper pair? An interplay between kinetic and potential energy? Perhaps the two electrons behave like coupled pendulums? If they do, then we need to answer the question: how, exactly? Is there an exchange of (real) photons, or is the magic of the force the same: some weird interaction in spacetime which we cannot meaningfully analyze any further, but which gives space not only some physicality but also causes us to think of it as being discrete, somehow? Indeed, an electron is an electron: it is a whole. Thinking of it as a pointlike charge in perpetual motion does not make it less of a whole. Likewise, an electron in an atomic orbital is a whole as well: it just occupies more space. But both are particles: they have a size. They are no longer pointlike: they occupy a measurable space: the Cartesian (continuous) mathematical space becomes (discrete) physical space.

I need to add another idea here – or another question for you, if I may. If superconduction can only occur when electrons pair up, then we should probably think of the pairs as some unit too – and a unit that may take up a rather large space. Hence, the idea of a discrete, pointlike, particle becomes somewhat blurred, right? Or, at the very least, it becomes somewhat less absolute, doesn’t it? 🙂

I guess I am getting lost in words here, which is probably worse than getting ‘lost in math’ (I am just paraphrasing Sabine Hossenfelder here) but, yes, that is why I am writing a blog post rather than a paper here. If you want equations, read my papers. 🙂 Oh – And don’t forget: fields are real as well. They may be relative, but they are real. And it’s not because they are quantized (think of (magnetic) flux quantization in the context of superconductivity, for example) that they are necessarily discrete – that we have field packets, so to speak. I should do a blog post on that. I will. Give me some time. 🙂

Post scriptum: What I wrote above on there not being any exchange of gravitons between an orbiting planet and its central star (or between double stars or whatever other gravitational trajectories are out there) does not imply I am ruling out their existence. I am a firm believer in the existence of gravitational waves, in fact. We should all be firm believers because – apart from some marginal critics still wondering what was actually being measured – the LIGO detections are real. However, whether or not these waves involve discrete lightlike particles – like photons and, in the case of the strong force, neutrinos – is a very different question. Do I have an opinion on it? I sure do. It is this: when matter gets destroyed or created (remember the LIGO detections involved the creation and/or destruction of matter as black holes merge), gravitational waves must carry some of the energy, and there is no reason to assume that the Planck-Einstein relation would not apply. Hence, we will have energy packets in the gravitational wave as well: the equivalent of photons (and, most probably, of neutrinos), in other words. All of this is, obviously, very speculative. Again, just think of this whole blog post as me freewheeling: the objective is, quite simply, to make you think as hard as I do about these matters. 🙂

As for my remark on the Cooper pairs being a unit or not, that question may be answered by thinking about what happens if Cooper pairs are broken, which is a topic I am not familiar with, so I cannot say anything about it.

Bell’s No-Go Theorem

I’ve been asked a couple of times: “What about Bell’s No-Go Theorem, which tells us there are no hidden variables that can explain quantum-mechanical interference in some kind of classical way?” My answer to that question is quite arrogant, because it’s the answer Albert Einstein would give when younger physicists would point out that his objections to quantum mechanics (which he usually expressed as some new thought experiment) violated this or that axiom or theorem in quantum mechanics: “Das ist mir wur(sch)t.”

In English: I don’t care. Einstein never lost the discussions with Heisenberg or Bohr: he just got tired of them. Like Einstein, I don’t care either – because Bell’s Theorem is what it is: a mathematical theorem. Hence, it respects the GIGO principle: garbage in, garbage out. In fact, John Stewart Bell himself – one of the third-generation physicists, we may say – had always hoped that some “radical conceptual renewal”[1] might disprove his conclusions. We should also remember Bell kept exploring alternative theories – including Bohm’s pilot wave theory, which is a hidden variables theory – until his death at a relatively young age. [J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics. He was just 62 years old then.]

So I never really explored Bell’s Theorem. I was, therefore, very happy to get an email from Gerard van der Ham, who seems to have the necessary courage and perseverance to research this question in much more depth and, yes, relate it to a (local) realist interpretation of quantum mechanics. I actually still need to study his papers, and analyze the YouTube video he made (which looks much more professional than my videos), but this is promising.

To be frank, I got tired of all of these discussions – just like Einstein, I guess. The difference between realist interpretations of quantum mechanics and the Copenhagen dogmas is just a factor of 2 or π in the formulas, and Richard Feynman famously said we should not care about such factors (Feynman’s Lectures, III-2-4). Modern physicists fudge them away consistently. They’ve done much worse than that, actually. :-/ They are not interested in truth. Convention, dogma, indoctrination – non-scientific historical stuff – seem to keep them from it. And modern science gurus – the likes of Sean Carroll or Sabine Hossenfelder etc. – play the age-old game of being interesting: they pretend to know something you do not know or – if they don’t – that they are close to getting the answers. They are not: the answers are there already. They just don’t want to tell you that because, yes, it’s the end of physics.


[1] See: John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169–172, Cambridge University Press, 1987.

The nature of time: an easy explanation of relativity

My manuscript offers a somewhat sacrilegious but intuitive explanation of (special) relativity theory (The Emperor Has No Clothes: the force law and relativity, p. 24-27). It is one of my lighter and more easily accessible pieces of writing. The argument is based on the idea that we may define infinity or infinite velocities as some kind of limit (or some kind of limiting idea), but that we cannot really imagine it: it leads to all kinds of logical inconsistencies.

Let me give you a very simple example here to illustrate these inconsistencies: if something is traveling at an infinite velocity, then it is everywhere and nowhere at the same time, and no theory of physics can deal with that.

Now, if I would have to rewrite that brief introduction to relativity theory, I would probably add another logical argument. One that is based on our definition or notion of time itself. What is the definition of time, indeed? When you think long and hard about this, you will have to agree we can only measure time with reference to some fundamental cycle in Nature, right? It used to be the seasons, or the days or nights. Later, we subdivided a day into hours, and now we have atomic clocks. Whatever you can count and meaningfully communicate to some other intelligent being who happens to observe the same cyclical phenomenon works just fine, right?

Hence, if we were able to communicate with some other intelligent being in outer space – whose position we may or may not know, but both he/she/it (let us think of a male Martian for ease of reference) and we are broadcasting our frequency- or amplitude-modulated signals widely enough to ensure ongoing communication – then we would probably be able to converge on a definition of time in terms of the fundamental frequency of an elementary particle – let us say an electron, to keep things simple. We could, therefore, agree on an experiment where he – after receiving a pre-agreed start signal from us – would start counting and send us a stop signal back after, say, three billion electron cycles (not approximately, of course, but three billion exactly). In the meantime, we would be able, of course, to verify that, in between sending and receiving the start and stop signals respectively (and taking into account the time the start and stop signals need to travel between him and us), his clock seems to run somewhat differently than ours.

So that is the amazing thing, really. Our Martian uses the same electron clock, but our/his motion relative to his/ours leads us to the conclusion his clock works somewhat differently, and Einstein’s (special) relativity theory tells us how, exactly: time dilation, as given by the Lorentz factor.
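For the reader who wants the formula behind that last statement – this is just the standard textbook result, so nothing of my own here – the relation between his cycle count and ours is given by the Lorentz factor:

```latex
% Standard special-relativity time dilation for a relative velocity v.
\[ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \qquad\qquad \Delta t' = \gamma\,\Delta t \]
% Example: for v = 0.1c, gamma is about 1.005, so his three billion electron
% cycles would seem to take about 0.5% longer when timed on our clock.
```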

Does this explanation make it any easier to truly understand relativity theory? Maybe. Maybe not. For me, it does, because what I am describing here is nothing but the results of the Michelson-Morley experiment in a slightly more amusing context which, for some reason I do not quite understand, seems to make them more comprehensible. At the very least, it shows Galilean relativity is as incomprehensible – or as illogical or non-intuitive, I should say – as the modern-day concept of relativity as pioneered by Albert Einstein.

You may now think (or not): OK, but what about relativistic mass? That concept is, and will probably forever remain, non-intuitive. Right? Time dilation and length contraction are fine, because we can now somehow imagine the what and why of this, but how do you explain relativistic mass, really?

The only answer I can give you here is to think some more about Newton’s law: mass is a measure of inertia, so it is a resistance to a change in the state of motion of an object. Motion and, therefore, your measurement of any acceleration or deceleration (i.e. a change in the state of motion) will depend on how you measure time and distance too. Therefore, mass has to be relativistic too.
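In a formula – again, this is just the textbook relation, so nothing new here – the same Lorentz factor that governs time dilation also scales the rest mass:

```latex
% Relativistic mass in terms of the rest mass m_0 and the same Lorentz factor.
\[ m = \gamma\,m_0 = \frac{m_0}{\sqrt{1 - v^2/c^2}} \qquad\Rightarrow\qquad E = m c^2 = \gamma\,m_0 c^2 \]
```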

QED: quod erat demonstrandum. In fact, it is not a proof, so I should not say it’s QED. It’s SE: a satisfactory explanation. Why is it an explanation and not a proof? Because I take the constant speed of light for granted, and so I kinda derive the relativity of time, distance and mass from my point of departure (both figuratively and literally speaking, I’d say).

Post scriptum: For the mentioned calculation, we do need to know the (relative) position of the Martian, of course. Any event in physics is defined by both its position as well as its timing. That is what (also) makes it all very consistent, in fact. I should also note this short story here (I mean my post) is very well aligned with Einstein’s original 1905 article, so you can (also) go there to check the math. The main difference between his article and my explanation here is that I take the constant speed of light for granted, and then all that’s relative derives its relativity from that. Einstein looked at it the other way around, because things were not so obvious then. 🙂

The End of Physics

There is an army of physicists out there – still – trying to convince you there is still some mystery that needs explaining. They are wrong: quantum-mechanical weirdness is weird, but it is not some mystery. We have a decent interpretation of what quantum-mechanical equations – such as Schrödinger’s equation, for example – actually mean. We can also understand what photons, electrons, or protons – light and matter – actually are, and such understanding can be expressed in terms of 3D space, time, force, and charge: elementary concepts that feel familiar to us. There is no mystery left.

Unfortunately, physicists have completely lost it: they have multiplied concepts and produced a confusing but utterly unconvincing picture of the essence of the Universe. They promoted weird mathematical concepts – the quark hypothesis is just one example among others – and gave them some kind of reality status. The Nobel Prize Committee then played the role of the Vatican by canonizing the newfound religion.

It is a sad state of affairs, because we are surrounded by too many lies already: the ads and political slogans that shout in our face as soon as we log on to Facebook to see what our friends are up to, or to YouTube to watch something or – what I often do – listen to the healing sounds of music.

The language and vocabulary of physics are complete. Does it make us happier beings? It should, shouldn’t it? I am happy I understand. I find consciousness fascinating – self-consciousness even more – but not because I think it is rooted in mystery. No. Consciousness arises from the self-organization of matter: order arising from chaos. It is a most remarkable thing – and it happens at all levels: atoms in molecules, molecules forming cellular systems, cellular systems forming biological systems. We are a biological system which, in turn, is part of much larger systems: biological, ecological – material systems. There is no God talking to us. We are on our own, and we must make the best out of it. We have everything, and we know everything.

Sadly, most people do not realize that.

Post scriptum: With the end of physics comes the end of technology as well, doesn’t it? All of the advanced technologies in use today are effectively already described in Feynman’s Lectures on Physics, which were written and published in the first half of the 1960s.

I thought about possible counterexamples, like optical-fiber cables, or the equipment that is used in superconducting quantum computing, such as Josephson junctions. But Feynman already describes Josephson junctions in the last chapter of his Lectures on Quantum Mechanics, which is a seminar on superconductivity. And fiber-optic cable is, essentially, a waveguide for light, which Feynman describes in great detail in Chapter 24 of his Lectures on Electromagnetism and Matter. Needless to say, computers were also already there, and Feynman’s lecture on semiconductors has all you need to know about modern-day computing equipment. [In case you briefly thought about lasers, the first laser was built in 1960, and Feynman’s lecture on masers describes lasers too.]

So it is all there. I was born in 1969, when Man first walked on the Moon. CERN and other spectacular research projects have since been established, but, when one is brutally honest, one has to admit these experiments have not added anything significant – neither to the knowledge nor to the technology base of humankind (and, yes, I know your first instinct is to disagree with that, but that is because your studies or the media indoctrinated you that way). It is a rather strange thought, but I think it is essentially correct. Most scientists, experts and commentators are trying to uphold a totally fake illusion of progress.

Mental categories versus reality

Pre-scriptum: For those who do not like to read, I produced a very short YouTube presentation/video on this topic. About 15 minutes – same time as it will take you to read this post, probably. Check it out: https://www.youtube.com/watch?v=sJxAh_uCNjs.

Text:

We think of space and time as fundamental categories of the mind. And they are, but only in the sense that the famous Dutch physicist H.A. Lorentz conveyed to us: we do not seem to be able to conceive of any idea in physics without these two notions. However, relativity theory tells us these two concepts are not absolute and we may, therefore, say they cannot be truly fundamental. Only Nature’s constants – the speed of light, or Planck’s quantum of action – are absolute: these constants seem to mix space and time into something that is, apparently, more fundamental.

The speed of light (c) combines the physical dimensions of space and time, and Planck’s quantum of action (h) adds the idea of a force. But time, distance, and force are all relative. Energy (force over a distance) and momentum (force times time) are, therefore, also relative. In contrast, the speed of light and Planck’s quantum of action are absolute. So we should think of distance, and of time, as some kind of projection of a deeper reality: the reality of light or – in the case of Planck’s quantum of action – the reality of an electron or a proton. In contrast, time, distance, force, energy, momentum and whatever other concept we would derive from them exist in our mind only.

We should add another point here. To imagine the reality of an electron or a proton (or the idea of an elementary particle, you might say), we need an additional concept: the concept of charge. The elementary charge (e) is, effectively, a third idea (or category of the mind, one might say) without which we cannot imagine Nature. The ideas of charge and force are, of course, closely related: a force acts on a charge, and a charge is that upon which a force is acting. So we cannot think of charge without thinking of force, and vice versa. But, as mentioned above, the concept of force is relative: it incorporates the idea of time and distance (a force is that which accelerates a charge). In contrast, the idea of the elementary charge is absolute again: it does not depend on our frame of reference.

So we have three fundamental concepts: (1) velocity (or motion, you might say: a ratio of distance and time); (2) (physical) action (force times distance times time); and (3) charge. We measure them in three fundamental units: c, h, and e. Che. 🙂 So that’s reality, then: all of the metaphysics of physics are here. In three letters. We need three concepts: three things that we think of as being real, somehow. Real in the sense that we do not think they exist in our mind only. Light is real, and elementary particles are equally real. All other concepts exist in our mind only.
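To make that c-h-e point a bit more concrete, it may help to just write down the physical dimensions of the three constants – standard SI definitions, so I am not claiming anything new here:

```latex
% Physical dimensions of the three constants (SI units):
% c: distance per time (the idea of motion), h: force x distance x time
% (the idea of physical action), e: charge.
\[ [c] = \mathrm{m/s} \qquad [h] = \mathrm{J \cdot s} = \mathrm{N \cdot m \cdot s} \qquad [e] = \mathrm{C} = \mathrm{A \cdot s} \]
```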

So were Kant’s ideas about space and time wrong? Maybe. Maybe not. If they are wrong, then that’s quite OK: Immanuel Kant lived in the 18th century, and had not ventured much beyond the place where he was born. Less exciting times. I think he was basically right in saying that space and time exist in our mind only. But he had no answer(s) to the question as to what is real: if some things exist in our mind only, something must exist in what is not our mind, right? So that is what we refer to as reality then: that which does not exist in our mind only.

Modern physics has the answers. The philosophy curriculum at universities should, therefore, adapt to modern times: Maxwell first derived the (absolute) speed of light in 1862, and Einstein published the (special) theory of relativity back in 1905. Hence, philosophers are 100-150 years behind the curve. They are probably even behind the general public. Philosophers should learn about modern physics as part of their studies so they can (also) think about real things rather than mental constructs only.

Form and substance

Philosophers usually distinguish between form and matter, rather than form and substance. Matter, as opposed to form, is then what is supposed to be formless. However, if there is anything that physics – as a science – has taught us, it is that matter is defined by its form: in fact, it is the form factor which explains the difference between, say, a proton and an electron. So we might say that matter combines substance and form.

Now, we all know what form is: it is a mathematical quality—like the quality of having the shape of a triangle or a cube. But what is (the) substance that matter is made of? It is charge. Electric charge. It comes in various densities and shapes – that is why we think of it as being basically formless – but we can say a few more things about it. One is that it always comes in the same unit: the elementary charge—which may be positive or negative. Another is that the concept of charge is closely related to the concept of a force: a force acts on a charge—always.

We are talking elementary forces here, of course—the electromagnetic force, mainly. What about gravity? And what about the strong force? Attempts to model gravity as some kind of residual force, and the strong force as some kind of electromagnetic force with a different geometry but acting on the very same charge, have not been successful so far—but we should immediately add that mainstream academics never focused on it either, so the result may be commensurate with the effort made: nothing much.

Indeed, Einstein basically explained gravity away by giving us a geometric interpretation for it (general relativity theory) which, as far as I can see, confirms it may be some residual force resulting from the particular layout of positive and negative charge in electrically neutral atomic and molecular structures. As for the strong force, I believe the quark hypothesis – which basically states that partial (non-elementary) charges are, somehow, real – has led mainstream physics into the dead end it finds itself in now. Will it ever get out of it?

I am not sure. It does not matter all that much to me. I am not a mainstream scientist and I have the answers I was looking for. These answers may be temporary, but they are the best I have for the time being. The best quote I can think of right now is this one:

‘We are in the words, and at the same time, apart from them. The words spin out, spin us out, over a void. There, somewhere between us, some words form some answer for some time, allowing us to live more fully in the forgetting face of nonexistence, in the dissolving away of each other.’ (Jacques Lacan, in Jeremy D. Safran (2003), Psychoanalysis and Buddhism: an unfolding dialogue, p. 134)

That says it all, doesn’t it? For the time being, at least. 🙂

Post scriptum: You might think explaining gravity as some kind of residual electromagnetic force should be impossible, but explaining the attractive force between like charges inside a nucleus was pretty difficult as well, until someone came up with a relatively simple idea based on the idea of ring currents. 🙂

Explaining the proton mass and radius

Our alternative realist interpretation of quantum physics is pretty complete but one thing that has been puzzling us is the mass density of a proton: why is it so massive as compared to an electron? We simplified things by adding a factor in the Planck-Einstein relation. To be precise, we wrote it as E = 4·h·f. This allowed us to derive the proton radius from the ring current model:

[Figure: the proton radius formula, as derived from the ring current model]

This felt a bit artificial. Writing the Planck-Einstein relation using an integer multiple of h or ħ (E = n·h·f = n·ħ·ω) is not uncommon. You should have encountered this relation when studying the black-body problem, for example, and it is also commonly used in the context of Bohr orbitals of electrons. But why is n equal to 4 here? Why not 2, or 3, or 5 or some other integer? We do not know: all we know is that the proton is very different. A proton is, effectively, not the antimatter counterpart of an electron—a positron. While the proton is much smaller – 459 times smaller, to be precise – its mass is 1,836 times that of the electron. Note that we have the same 1/4 factor here because the mass and Compton radius are inversely proportional:

[Figure: the electron/proton radius and mass ratios]
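In case the illustrations above do not show (or get taken down), this is the gist of the calculation – as far as I can reconstruct it in a few lines here – using the same ring current logic as for the electron, but with the extra factor of 4 in the Planck-Einstein relation. Do check the paper itself for the details:

```latex
% Proton radius from the ring current model, assuming E = 4*h*f = 4*hbar*omega
% and, as before, a pointlike charge moving at lightspeed: c = a*omega.
\[ m_p c^2 = 4\hbar\,\omega = \frac{4\hbar c}{a_p} \quad\Rightarrow\quad a_p = \frac{4\hbar}{m_p c} \approx 0.84\ \mathrm{fm} \]
% The radius and mass ratios are then related through the same factor of 4:
\[ \frac{a_e}{a_p} = \frac{\hbar/(m_e c)}{4\hbar/(m_p c)} = \frac{m_p}{4\,m_e} \approx \frac{1836}{4} \approx 459 \]
```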

This doesn’t look all that bad but it feels artificial. In addition, our reasoning involved an unexplained difference – a mysterious but exact SQRT(2) factor, to be precise – between the theoretical and experimentally measured magnetic moment of a proton. In short, we assumed some form factor must explain both the extraordinary mass density as well as this SQRT(2) factor but we were not quite able to pin it down, exactly. A remark on a video on our YouTube channel inspired us to think some more – thank you for that, Andy! – and we think we may have the answer now.

We now think the mass – or energy – of a proton combines two oscillations: one is the Zitterbewegung oscillation of the pointlike charge (which is a circular oscillation in a plane) while the other is the oscillation of the plane itself. The illustration below is a bit horrendous (I am not so good at drawings) but might help you to get the point. The plane of the Zitterbewegung (the plane of the proton ring current, in other words) may oscillate itself between +90 and −90 degrees. If so, the effective magnetic moment will differ from the theoretical magnetic moment we calculated, and it will differ by that SQRT(2) factor.

[Figure: sketch of the proton as a ring current whose plane of oscillation itself oscillates]

Hence, we should rewrite our paper, but the logic remains the same: we just have a much better explanation now of why we should apply the energy equipartition theorem.

Mystery solved! 🙂

Post scriptum (9 August 2020): The solution is not as simple as you may imagine. When combining the idea of some other motion with the ring current, we must remember that the speed of light – the presumed tangential speed of our pointlike charge – cannot change. Hence, the radius must become smaller. We also need to think about distinguishing two different frequencies, and things quickly become quite complicated.

Feynman’s religion

Perhaps I should have titled this post differently: the physicist’s worldview. We may, effectively, assume that Richard Feynman’s Lectures on Physics represent mainstream sentiment, and he does get into philosophy—more or less liberally depending on the topic. Hence, yes, Feynman’s worldview is pretty much that of most physicists, I would think. So what is it? One of his more succinct statements is this:

“Often, people in some unjustified fear of physics say you cannot write an equation for life. Well, perhaps we can. As a matter of fact, we very possibly already have an equation to a sufficient approximation when we write the equation of quantum mechanics.” (Feynman’s Lectures, p. II-41-11)

He then jots down that equation which Schrödinger has on his grave (shown below). It is a differential equation: it relates the wavefunction (ψ) to its time derivative through the Hamiltonian coefficients that describe how physical states change with time (Hij), the imaginary unit (i) and Planck’s quantum of action (ħ).

[Image: the equation engraved on Schrödinger’s tombstone]
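In case the image above does not show (or gets taken down), the equation – as far as I can write it down from memory here, so do check Feynman’s Lectures for the exact notation – reads as follows, first in the compact form that is on the grave and then in terms of the Hamiltonian coefficients:

```latex
% Schrodinger's equation: the compact (operator) form on the tombstone and,
% next to it, the form in terms of the Hamiltonian coefficients H_ij that
% Feynman uses in his Lectures.
\[ i\hbar\,\frac{\partial \psi}{\partial t} = H\,\psi \qquad\qquad i\hbar\,\frac{d C_i}{d t} = \sum_j H_{ij}\,C_j \]
```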

Feynman, and all modern academic physicists in his wake, claim this equation cannot be understood. I don’t agree: the explanation is not easy, and requires quite some prerequisites, but it is not any more difficult than, say, trying to understand Maxwell’s equations, or the Planck-Einstein relation (E = ħ·ω = h·f).

In fact, a good understanding of both allows you to not only understand Schrödinger’s equation but all of quantum physics. The basics are this: the presence of the imaginary unit tells us the wavefunction is cyclical, and that it is an oscillation in two dimensions. The presence of Planck’s quantum of action in this equation tells us that such oscillation comes in units of ħ. Schrödinger’s wave equation as a whole is, therefore, nothing but a succinct representation of the energy conservation principle. Hence, we can understand it.

At the same time, we cannot, of course. We can only grasp it to some extent. Indeed, Feynman concludes his philosophical remarks as follows:

“The next great era of awakening of human intellect may well produce a method of understanding the qualitative content of equations. Today we cannot. Today we cannot see that the water flow equations contain such things as the barber pole structure of turbulence that one sees between rotating cylinders. We cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality—or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.” (Feynman’s Lectures, p. II-41-12)

I think that puts the matter to rest—for the time being, at least. 🙂

The geometry of the de Broglie wavelength

I thought I would no longer post stuff here but I see this site still gets a lot more traffic than the new one, so I will make an exception and cross-post an announcement of a new video on my YouTube channel. Indeed, yesterday I was to talk for about 30 minutes to some students who are looking at classical electron models as part of an attempt to model what might be happening to an electron when moving through a magnetic field. Of course, I only had time to discuss the ring current model, and even then it inadvertently turned into a two-hour presentation. Fortunately, they were polite and no one dropped out—although it was an online Google Meet. In fact, they reacted quite enthusiastically, and so we all enjoyed it a lot. So much so that I adjusted the presentation a bit the next morning (which added even more time to it, unfortunately) and published it online. So this is the link to it, and I hope you enjoy it. If so, please like it—and share it! 🙂

Oh! Forgot to mention: in case you wonder why this video is different from the others, see my Tweet on Sean Carroll’s latest series of videos below. That should explain it. 🙂

Post scriptum: I got the usual question from one of the students, of course: if an electron is a ring current, then why doesn’t it radiate its energy away? The easy answer is: an electron is an electron and so it doesn’t—for the same reason that an electron in an atomic orbital or a Cooper pair in a superconducting loop of current does not radiate energy away. The more difficult answer is a bit mysterious: it has got to do with flux quantization and, most importantly, with the Planck-Einstein relation. I will not be too explicit here (it is just a footnote) but the following elements should be noted:

1. The Planck-Einstein law embodies a (stable) wavicle: a wavicle respects the Planck-Einstein relation (E = h·f) as well as Einstein’s mass-energy equivalence relation (E = m·c²). A wavicle will, therefore, carry energy but it will also pack one or more units of Planck’s quantum of action. Both the energy as well as this finite amount of physical action (Wirkung in German) will be conserved—cycle after cycle.

2. Hence, equilibrium states should be thought of as electromagnetic oscillations without friction. Indeed, it is the frictional element that explains the radiation of, say, an electron going up and down in an antenna and radiating some electromagnetic signal out. To add to this rather intuitive explanation, I should also remind you that it is the accelerations and decelerations of the electric charge in an antenna that generate the radio wave—not the motion as such. So one should, perhaps, think of a charge going round and round as moving in a straight line—along some geodesic in its own space. That’s the metaphor, at least.

3. Technically, one needs to think in terms of quantized fluxes and Poynting vectors and energy transfers from kinetic to potential (and back) and from ‘electric’ to ‘magnetic’ (and back). In short, the electron really is an electromagnetic perpetuum mobile! I know that sounds mystical (too) but then I never said I would take all of the mystery away from quantum physics! 🙂 If there were no mystery left, I would not be interested in physics. 😉 On the quantization of flux for superconducting loops: see, for example, http://electron6.phys.utk.edu/qm2/modules/m5-6/flux.htm (the standard formula is reproduced below). There is other stuff you may want to dig into too, like my alternative Principles of Physics, of course! 🙂
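For the reader who does not want to click through, this is the (standard) flux quantization condition for a superconducting loop – note the 2e in the denominator, which is where the Cooper pairs come in:

```latex
% Magnetic flux through a superconducting loop comes in integer multiples of
% the flux quantum h/(2e); the 2e reflects the charge of a Cooper pair.
\[ \Phi = n\,\frac{h}{2e}, \quad n = 0, 1, 2, \dots \qquad\qquad \Phi_0 = \frac{h}{2e} \approx 2.07\times 10^{-15}\ \mathrm{Wb} \]
```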

Signing off…

I am done with reading Feynman and commenting on it—especially because this site just got mutilated by the third DMCA takedown of material (see below). Follow me to my new blog. No Richard Feynman, Mr. Gottlieb or DMCA there! Pure logic only. This site has served its purpose, and that is to highlight the Rotten State of QED. 🙂

A long time ago – in 1996, to be precise – I studied Wittgenstein’s TLP—as part of a part-time BPhil degree program. At the time, I did not like it. The lecture notes were two or three times the volume of the work itself, and I got pretty poor marks for it. I guess one has to go through life to get an idea of what he was writing about. With all of the nonsense lately, I thought about one of the lines in that little book: “One must, so to speak, throw away the ladder after he has climbed up it. One must transcend the propositions, and then he will see the world aright.” (TLP, 6.54)

For Mr. Gottlieb and other narrow-minded zealots and mystery wallahs – who would not be interested in Wittgenstein anyway – I’ll just quote Wittgenstein’s quote of Ferdinand Kürnberger:

“. . . und alles, was man weiss, nicht bloss rauschen und brausen gehört hat, lässt sich in drei Worten sagen.”

I will let you google-translate that and, yes, sign off here—in the spirit of Ludwig Boltzmann and Paul Ehrenfest. [Sorry for being too lengthy or verbose here.]

“Bring forward what is true. Write it so that it is clear. Defend it to your last breath.” (Boltzmann)

Knox (Automattic)

Jun 20, 2020, 4:30 PM UTC

Hello,

We’ve received the DMCA takedown notice below regarding material published on your WordPress.com site, which means the complainant is asserting ownership of this material and claiming that your use of it is not permitted by them or the law. As required by the DMCA, we have disabled public access to the material.

Repeated incidents of copyright infringement will also lead to the permanent suspension of your WordPress.com site. We certainly don’t want that to happen, so please delete any other material you may have uploaded for which you don’t have the necessary rights and refrain from uploading additional material that you do not have permission to upload. Although we can’t provide legal advice, these resources might help you make this determination:

https://wordpress.com/support/counter-notice/#what-is-fair-use

If you believe that this DMCA takedown notice was received in error, or if you believe your usage of this material would be considered fair use, it’s important that you submit a formal DMCA counter notice to ensure that your WordPress.com site remains operational. If you submit a valid counter notice, we will return the material to your site in 10 business days if the complainant does not reply with legal action.

Please refer to the following pages for more information:

Please note that republishing the material yourself, without permission from the copyright holder (even after you have submitted a counter notice) will result in the permanent suspension of your WordPress.com site and/or account.

Thank you.

[…]

Well… Thank you, WordPress. I guess you’ll first suspend the site and then the account? :-/ I hope you’ll give me some time to create another account, at least? If not, this spacetime rebel will have to find another host for his site. 🙂

Lasers, masers, two-state systems and Feynman’s Lectures

The past few days I re-visited Feynman’s lectures on quantum math—the ones in which he introduces the concept of probability amplitudes (I will provide no specific reference or link to them because that is apparently unfair use of copyrighted material). The Great Richard Feynman introduces the concept of probability amplitudes as part of a larger discussion of two-state systems—and lasers and masers are a great example of such two-state systems. I have done a few posts on that while building up this blog over the past few years but because these have been mutilated by DMCA take-downs of diagrams and illustrations as a result of such ‘unfair use’, I won’t refer to them either. The point is this:

I have come to the conclusion we actually do not need the machinery of state vectors and probability amplitudes to explain how a maser (and, therefore, a laser) actually works.

The functioning of masers and lasers crucially depends on a dipole moment (of an ammonia molecule for a maser and of light-emitting atoms for a laser) which will flip up and down in sync with an external oscillating electromagnetic field. It all revolves around the resonant frequency (ω₀), which depends on the tiny difference between the energies of the ‘up’ and ‘down’ states. This tiny energy difference (the A in the Hamiltonian matrix) is given by the product of the dipole moment (μ) and the external electromagnetic field that gets the thing going (Ɛ₀). [Don’t confuse the symbols with the magnetic and electric constants here!] And so… Well… I have come to the conclusion that we can analyze this as just any other classical electromagnetic oscillation. We can effectively directly use the Planck-Einstein relation to determine the frequency instead of having to invoke all of the machinery that comes with probability amplitudes, base states, Hamiltonian matrices and differential equations:

ω₀ = E/ħ = A/ħ = μ·Ɛ₀/ħ

All the rest follows logically.
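Just to show how directly the Planck-Einstein relation gives you the operating frequency, here is the back-of-the-envelope number for the ammonia maser. I am taking a splitting of roughly 10^−4 eV between the two states here – that value is from memory, so do check it against Feynman’s treatment:

```latex
% Back-of-the-envelope maser frequency from the Planck-Einstein relation,
% assuming an energy splitting of roughly 1e-4 eV between the two states.
\[ f_0 = \frac{E}{h} \approx \frac{10^{-4}\ \mathrm{eV}}{4.14\times 10^{-15}\ \mathrm{eV \cdot s}} \approx 2.4\times 10^{10}\ \mathrm{Hz} \approx 24\ \mathrm{GHz} \]
```

That is, if I remember correctly, the order of magnitude of the frequency one usually associates with the ammonia maser, so the simple logic seems to hold up.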

You may say: so what? Well… I find this very startling. I’ve been systematically dismantling a lot of ‘quantum-mechanical myths’, and so this seemed to be the last myth standing. It has fallen now: here is the link to the paper.

What’s the implication? The implication is that we can analyze all of the QED sector now in terms of classical mechanics: oscillator math, Maxwell’s equations, relativity theory and the Planck-Einstein relation will do. All of that was published before the First World War broke out, in other words—with the added discoveries made by the likes of Arthur Compton (photon-electron interactions), Carl Anderson (the discovery of anti-matter), James Chadwick (experimental confirmation of the existence of the neutron) and a few others after the war, of course! But that’s it, basically: nothing more, nothing less. So all of the intellectual machinery that was invented after World War I (the Bohr-Heisenberg theory of quantum mechanics) and after World War II (quantum field theory, the quark hypothesis and what have you) may be useful in the QCD sector of physics but − IMNSHO − even that remains to be seen!

I actually find this more than startling: it is shocking! I started studying Feynman’s Lectures – and everything that comes with them – back in 2012, only to find out that my idol had no intention whatsoever of making things easy. That is OK. In his preface, he writes he wanted to make sure that even the most intelligent student would be unable to completely encompass everything that was in the lectures—so that’s why we were attracted to them, of course! But that is, of course, something other than doing what he did, and that is to promote a Bright Shining Lie…

[…]

A long time ago, I took the side of Bill Gates in the debate on Feynman’s qualities as a teacher. For Bill Gates, Feynman was, effectively, “the best teacher he never had.” One of those very bright people who actually had him as a teacher (John F. McGowan, PhD and math genius) paints a very different picture, however. I would take the side of McGowan in this discussion now—especially when it turns out that Mr. Feynman’s legacy can apparently no longer be freely used as a reference anyway.

Philip Anderson and Freeman Dyson died this year—both at the age of 96. They were the last of what is generally thought of as a brilliant generation of quantum physicists—the third generation, we might say. May they all rest in peace.

Post scriptum: In case you wonder why I refer to them as the third rather than the second generation: I actually consider Heisenberg’s generation to be the second generation of quantum physicists—first was the generation of the likes of Einstein!

As for the (intended) irony in my last remarks, let me quote from an interesting book on the state of physics that was written by Doris Teplitz back in 1982: “The state of the classical electromagnetic theory reminds one of a house under construction that was abandoned by its workmen upon receiving news of an approaching plague. The plague was, in this case, of course, quantum theory.” I now very much agree with this bold statement. So… Well… I think I’ve had it with studying Feynman’s Lectures. Fortunately, I spent only about ten years on them. Academics have to spend their whole life on what Paul Ehrenfest referred to as the ‘unendlicher Heisenberg-Born-Dirac-Schrödinger Wurstmachinen-Physik-Betrieb.’ :-/

The dark force

We have the electromagnetic force, the strong force, and it looks like there is a dark force too now! Mr. Michael Gottlieb, the publisher of the online edition of Feynman’s Lectures, is actively exploring it: I received yet another DMCA take-down notice for so-called unfair use of the material (see below) and, yes, this hassle has some history already, unfortunately—so it is bad news.

Now, it is true I do refer quite often to these lectures—but I do so because they are a commonly referenced textbook: one needs some kind of reference when referring to mainstream physics, doesn’t one? In fact, I am amazed Mr. Gottlieb was able to claim copyright on a textbook that was published 57 years ago and whose author is long dead! I should also add that I started this blog on Feynman’s Lectures before Mr. Gottlieb’s online edition of them went online! Shame on him! However, it looks like he’s got the upper hand in this nasty game, so I’ll just bow out and that will be it. I’ve got better things to do than fight some narcissist who thinks of a 1963 textbook as personal property!

The material Mr. Gottlieb objects to is mainly in older posts—but he has been scrutinizing more recent posts as well. The use of the diagram with the energy levels of electron orbitals in a lithium atom in my May 2020 post, for example, is apparently not permitted—despite my mentioning the source of this diagram quite explicitly. In short, it looks like many of my posts on this blog will look quite mutilated soon. Worse, the host of this WordPress site (Automattic) may decide to take down my site altogether.

I know there are dark forces fighting freedom of expression and independent thought everywhere. I just hadn’t expected them to be present in science too. This is, effectively, a lot worse than just being told to ‘shut up and calculate’ or – more commonly – being referred to as a ‘crackpot’ theorist. I, therefore, actually do feel sad about it. :-/ In any case, if the site goes down, I would like to thank my readers here – especially those who are actually following each and every post – for their encouragement. It has been a good exercise, and I will continue to publish, of course, using other channels and references other than Feynman. Keep tracking independent research on viXra.org, academia.edu and ResearchGate.net.

I wish Mr. Gottlieb the best of luck: I can only hope he learns some real physics while continuing to scrutinize my posts. Just for the record, I would like to understand this correctly: Mr. Gottlieb put Feynman’s Lectures online – verbatim – and so he is not copying stuff. I, in contrast, am trying to do something creative with it – not copying literally, but using some of Feynman’s material to provide a new perspective or make a point – and so it is me who is the offender here, right? That’s the situation, isn’t it? Or is my logic faulty here? We really do live in weird times. Weird but interesting, at least.

I have various working titles for a future book on physics, but ‘The Sorry State of Physics’ is still my favorite one. In fact, Oliver Consa is probably right when calling it plain rotten. And, yes, on this occasion, I’d like to recall my previous advice: do not buy Feynman’s Lectures, please. They are outdated and you can, therefore, not learn all that much from them. They have, unfortunately, become what Feynman did not want them to become: Cargo Cult Science, zealously guarded by a bunch of self-appointed Mystery Wallahs. Be well and happy, and please do keep thinking things through for yourself!

Jean Louis Van Belle, 18 June 2020

Fenton (Automattic)

Jun 17, 2020, 10:17 PM UTC

Hello,

We’ve received the DMCA takedown notice below regarding material published on your WordPress.com site, which means the complainant is asserting ownership of this material and claiming that your use of it is not permitted by them or the law. As required by the DMCA, we have disabled public access to the material.

Repeated incidents of copyright infringement will also lead to the permanent suspension of your WordPress.com site. We certainly don’t want that to happen, so please delete any other material you may have uploaded for which you don’t have the necessary rights and refrain from uploading additional material that you do not have permission to upload. Although we can’t provide legal advice, these resources might help you make this determination:

https://wordpress.com/support/counter-notice/#what-is-fair-use

If you believe that this DMCA takedown notice was received in error, or if you believe your usage of this material would be considered fair use, it’s important that you submit a formal DMCA counter notice to ensure that your WordPress.com site remains operational. If you submit a valid counter notice, we will return the material to your site in 10 business days if the complainant does not reply with legal action.

Please refer to the following pages for more information:

Please note that republishing the material yourself, without permission from the copyright holder (even after you have submitted a counter notice) will result in the permanent suspension of your WordPress.com site and/or account.

Thank you.

— BEGIN NOTICE —

Neutrons as composite particles and electrons as gluons?

Neutrons as composite particles

In our rather particular conception of the world, we think of photons, electrons, and protons – and neutrinos – as elementary particles. Elementary particles are, obviously, stable: they would not be elementary, otherwise. The difference between photons and neutrinos on the one hand, and electrons, protons, and other matter-particles on the other, is that we think all matter-particles carry charge—even if they are neutral.

Of course, to be neutral, one must combine positive and negative charge: neutral particles can, therefore, not be elementary—unless we accept the quark hypothesis, which we do not like to do (not now, at least). A neutron must, therefore, be an example of a neutral (composite) matter-particle. We know it is unstable outside of the nucleus but its longevity – as compared to other non-stable particles – is quite remarkable: it survives about 15 minutes—for other unstable particles, we usually talk about micro- or nano-seconds, or worse!

Let us explore what the neutron might be—if only to provide some kind of model for analyzing other unstable particles, perhaps. We should first note that the neutron radius is about the same as that of a proton. How do we know this? NIST only gives the rms charge radius for a proton, based on the various proton radius measurements. For the neutron, we therefore only have a CODATA value for the Compton wavelength, which is more or less the same as that of the proton. To be precise, the two values are these:

λneutron = 1.31959090581(75)×10⁻¹⁵ m

λproton = 1.32140985539(40)×10⁻¹⁵ m

These values are just mechanical calculations based on the mass or energy of protons and neutrons respectively: the Compton wavelength is, effectively, calculated as λ = h/mc.[1] However, you should, of course, not rely on CODATA values only: you should google for experiments measuring the size of a neutron directly or indirectly to get an idea of what is going on here.
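If you want to verify these numbers yourself, a few lines of Python will do: just plug the CODATA masses (in kg) into λ = h/(m·c). This is only a quick sanity check, of course.

```python
# Quick check (our own sketch, not part of the CODATA tables): recompute the
# Compton wavelengths lambda = h/(m*c) from the CODATA masses and compare
# with the values quoted above.
h = 6.62607015e-34          # Planck constant (J*s)
c = 2.99792458e8            # speed of light (m/s)

m_proton  = 1.67262192369e-27    # proton mass (kg, CODATA)
m_neutron = 1.67492749804e-27    # neutron mass (kg, CODATA)

for name, m in [("proton", m_proton), ("neutron", m_neutron)]:
    lam = h / (m * c)
    print(f"lambda_{name} = {lam:.6e} m")
# prints ~1.321410e-15 m for the proton and ~1.319591e-15 m for the neutron
```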

Let us look at the energies. The neutron’s energy is about 939,565,420 eV. The proton energy is about 938,272,088 eV. Hence, the difference is about 1,293,332 eV. This mass difference, combined with the fact that neutrons spontaneously decay into protons but – conversely – there is no such thing as spontaneous proton decay[2], confirms we are probably justified in thinking that a neutron must, somehow, combine a proton and an electron. The mass of an electron is 0.511 MeV/c², so that is only about 40% of the energy difference, but the kinetic and binding energy could make up for the remainder.[3]
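The arithmetic is easy enough to check. The little snippet below just redoes the sums (all values in eV; the 0.511 MeV electron rest energy is the standard value).

```python
# Back-of-the-envelope check of the numbers above (a sketch; all values in eV).
E_neutron  = 939_565_420     # neutron rest energy (eV)
E_proton   = 938_272_088     # proton rest energy (eV)
E_electron = 510_999         # electron rest energy (eV), about 0.511 MeV

delta_E = E_neutron - E_proton
print(f"neutron-proton difference : {delta_E:,} eV")              # 1,293,332 eV
print(f"electron share of the gap : {E_electron / delta_E:.0%}")  # ~40%
# The remaining ~60% would have to come from kinetic and binding energy.
```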

So, yes, we will want to think of a neutron as carrying both positive and negative charge inside. These charges balance each other out (there is no net electric charge) but their respective motion still yields a small magnetic moment, which we think of as some net result from the motion of the positive and negative charge inside.

Let us now move to the next grand idea which emerges here.

Electrons as gluons?

The negative charge inside of a neutron may help to keep the nucleus together. We can, therefore, think of this charge as some kind of nuclear glue. We tentatively explored this idea in a paper: Electrons as gluons? The basic idea is this: the electromagnetic force keeps electrons close to the positively charged nucleus and we should, therefore, not exclude that a similar arrangement of positive and negative charges – but one involving some strong(er) force to explain the difference in scale – might exist within the nucleus.

Nonsense? We don’t think so. Consider this: one never finds a proton pair without one or more neutrons. The main isotope of helium (⁴He), for example, has a nucleus consisting of two protons and two neutrons, while a helium-3 (³He) nucleus consists of two protons and one neutron. When we find a pair of nucleons, like in deuterium (²H), this will always consist of a proton and a neutron. The idea of a negative charge acting as an in-between to keep two positive charges together is, therefore, quite logical. Think of it as the opposite of a positively charged nucleus keeping electrons together in a multi-electron atom.

Does this make sense to you? It does to me, so I’d appreciate any converging or diverging thoughts you might have on this. 🙂

[1] The reader should note that the Compton wavelength and, therefore, the Compton radius is inversely proportional to the mass: a more massive particle is, therefore, associated with a smaller radius. This is somewhat counterintuitive but it is what it is.

[2] None of the experiments (think of the Super-Kamiokande detector here) found any evidence of proton decay so far.

[3] The reader should note that the mass of a proton and an electron add up to less than the mass of a neutron, which is why it is only logical that a neutron should decay into a proton and an electron. Binding energies – think of Feynman’s calculations of the radius of the hydrogen atom, for example – are usually negative.

The mystery of the elementary charge

As part of my ‘debunking quantum-mechanical myths’ drive, I re-wrote Feynman’s introductory lecture on quantum mechanics. Of course, it has got nothing to do with Feynman’s original lecture (titled Quantum Behavior): I just made some fun of Feynman’s preface and that’s basically it in terms of this iconic reference. Hence, Mr. Gottlieb should not make too much of a fuss—although I hope he will, of course, because it would draw more attention to the paper. It was a fun exercise because it encouraged me to join an interesting discussion on ResearchGate (I copied the topic and some of the exchange below) which, in turn, made me think some more about what I wrote about the form factor in the explanation of the electron, muon and proton. Let me copy the relevant paragraph:

When we talked about the radius of a proton, we promised you we would talk some more about the form factor. The idea is very simple: an angular momentum (L) can always be written as the product of a moment of inertia (I) and an angular frequency (ω). We also know that the moment of inertia for a rotating mass or a hoop is equal to I = m·r², while it is equal to I = m·r²/4 for a solid disk. So you might think this explains the 1/4 factor: a proton is just an anti-muon but in disk version, right? It is like a muon because of the strong force inside, but it is even smaller because it packs its charge differently, right?

Maybe. Maybe not. We think probably not. Maybe you will have more luck when playing with the formulas, but we could not demonstrate this. First, we must note, once again, that the radii of a muon (about 1.87 fm) and a proton (0.83-0.84 fm) are both smaller than the radius of the pointlike charge inside of an electron (α·ħ/(me·c) ≈ 2.818 fm). Hence, we should start by suggesting how we would pack the elementary charge into a muon first!

Second, we noted that the proton mass is 8.88 times that of the muon, while the radius is only 2.22 times smaller – so, yes, that 1/4 ratio once more – but these numbers are still weird: even if we would manage to, somehow, make abstraction of this form factor by accounting for the different angular momentum of a muon and a proton, we would probably still be left with a mass difference we cannot explain in terms of a unique force geometry.
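Here is a quick sketch of that arithmetic in Python. To be clear about the assumptions: the muon ‘radius’ is taken to be its Compton radius ħ/(mμ·c), the proton radius is the measured charge radius (we plug in 0.84 fm), and the electron value is the classical electron radius α·ħ/(me·c).

```python
# Sketch of the ratios discussed above (just the arithmetic, no derivation).
# Assumptions: the muon 'radius' is its Compton radius hbar/(m*c), the proton
# radius is the measured charge radius (taken as 0.84 fm here), and the
# electron value is the classical electron radius alpha*hbar/(m_e*c).
h_bar = 1.054571817e-34      # J*s
c     = 2.99792458e8         # m/s
alpha = 0.0072973525693      # fine-structure constant

m_e  = 9.1093837015e-31      # electron mass (kg)
m_mu = 1.883531627e-28       # muon mass (kg)
E_mu = 105.6583755           # muon rest energy (MeV)
E_p  = 938.27208816          # proton rest energy (MeV)
r_p  = 0.84e-15              # proton charge radius (m), assumed value

r_mu      = h_bar / (m_mu * c)          # ~1.87 fm
r_e_class = alpha * h_bar / (m_e * c)   # ~2.818 fm

print(f"muon Compton radius       : {r_mu * 1e15:.2f} fm")
print(f"classical electron radius : {r_e_class * 1e15:.3f} fm")
print(f"mass ratio E_p/E_mu       : {E_p / E_mu:.2f}")     # ~8.88
print(f"radius ratio r_mu/r_p     : {r_mu / r_p:.2f}")     # ~2.22
print(f"mass ratio / radius ratio : {(E_p / E_mu) / (r_mu / r_p):.2f}")  # ~4
```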

Perhaps we should introduce other hypotheses: a muon is, after all, unstable, and so there may be another factor there: excited states of electrons are unstable too and involve an n = 2 or some other number in Planck’s E = n·h·f equation, so perhaps we can play with that too.

Our answer to such musings is: yes, you can. But please do let us know if you have more luck than we did when playing with these formulas: it is the key to the mystery of the strong force, and we did not find it—so we hope you do!

So… Well… This is really as far as a realist interpretation of quantum mechanics will take you. One can solve most so-called mysteries in quantum mechanics (interference of electrons, tunneling and what have you) with plain old classical equations (applying Planck’s relation to electromagnetic theory, basically) but here we are stuck: the elementary charge itself is a most mysterious thing. When packing it into an electron, a muon or a proton, Nature gives it a very different shape and size.

The shape or form factor is related to the angular momentum, while the size has got to do with scale: the scale of a muon and a proton is very different from that of an electron—smaller even than the pointlike Zitterbewegung charge which we used to explain the electron. So that’s where we are. It’s like we’ve got two quanta—rather than one only: Planck’s quantum of action, and the elementary charge. Indeed, Planck’s quantum of action may also be said to express itself very differently in space or in time (h = E·T versus h = p·λ). Perhaps there is room for additional simplification, but I doubt it. Something inside of me says that, when everything is said and done, I will just have to accept that electrons are electrons, and protons are protons, and a muon is a weird unstable thing in-between—and all other weird unstable things in-between are non-equilibrium states which one cannot explain with easy math.

Would that be good enough? For you? I cannot speak for you. Is it a good enough explanation for me? I am not sure. I have not made my mind up yet. I am taking a bit of a break from physics for the time being, but the question will surely continue to linger in the back of my mind. We’ll keep you updated on progress! Thanks for staying tuned! JL

PS: I realize the above might sound a bit like crackpot theory but that is just because it is very dense and very light writing at the same time. If you read the paper in full, you should be able to make sense of it. 🙂 You should also check the formulas for the moments of inertia: the I = m·r²/4 formula for a solid disk depends on your choice of the axis of symmetry.

ResearchGate

Peter Jackson

Dear Peter – Thanks so much for checking the paper and your frank comments. That is very much appreciated. I know I have gone totally overboard in dismissing much of post-WW II developments in quantum physics – most notably the idea of force-carrying particles (bosons – including Higgs, W/Z bosons and gluons). My fundamental intuition here is that field theories should be fine for modeling interactions (I’ll quote Dirac’s 1958 comments on that at the very end of my reply here) and, yes, we should not be limiting the idea of a field to EM fields only. So I surely do not want to give the impression I think classical 19th/early 20th century physics – Planck’s relation, electromagnetic theory and relativity – can explain everything.

Having said that, the current state of physics does resemble the state of scholastic philosophy before it was swept away by rationalism: I feel there has been a multiplication of ill-defined concepts that did not add much additional explanation of what might be the case (the latter expression is Wittgenstein’s definition of reality). So, yes, I feel we need some reincarnation of William of Occam to apply his Razor and kick ass. Fortunately, it looks like there are many people trying to do exactly that now – a return to basics – so that’s good: I feel like I can almost hear the tectonic plates moving. 🙂

My last paper is a half-serious rewrite of Feynman’s first Lecture on Quantum Mechanics. Its intention is merely provocative: I want to highlight what part of the ‘mystery’ in quantum physics is truly mysterious and what is humbug or – as Feynman would call it – Cargo Cult Science. The section on the ‘form factor’ (what is the ‘geometry’ of the strong force?) in that paper is the shortest and most naive paragraph in that text, but it actually does highlight the one and only question that keeps me awake: what is that form factor, what different geometry do we need to explain a proton (or a muon) as opposed to, say, an electron? I know I have to dig into the kind of stuff that you are highlighting – and into Alex Burinskii’s Dirac-Kerr-Newman models (which also integrate gravity) – to find elements that, one day, may explain why a muon is not an electron, and why a proton is not a positron.

Indeed, I think the electron and photon models are just fine: classical EM and Planck’s relation are all that’s needed, and so I actually don’t want to waste more time on the QED sector. But a decent muon and proton model will, obviously, require ‘something else’ than Planck’s relation, the electric charge and electromagnetic theory. The question here is: what is that ‘something else’, exactly?

Even if we find another charge or another field theory to explain the proton, then we’re just at the beginning of explaining the QCD sector. Indeed, the proton and muon are stable (fairly stable, I should say, in the case of the muon, which I want to investigate because of the question of matter generations). In contrast, transient particles and resonances do not respect Planck’s relation – that’s why they are unstable – and so we are talking non-equilibrium states, and so that’s an entirely different ballgame. In short, I think Dirac’s final words in the very last (fourth) edition of his ‘Principles of Quantum Mechanics’ still ring very true today. They were written in 1958, so Dirac was aware of the work of Gell-Mann and Nishijima (the contours of quark-gluon theory) and, clearly, did not think much of it (I understand he also had conversations with Feynman on this):

“Quantum mechanics may be defined as the application of equations of motion to particles. […] The domain of applicability of the theory is mainly the treatment of electrons and other charged particles interacting with the electromagnetic field—a domain which includes most of low-energy physics and chemistry.

Now there are other kinds of interactions, which are revealed in high-energy physics and are important for the description of atomic nuclei. These interactions are not at present sufficiently well understood to be incorporated into a system of equations of motion. Theories of them have been set up and much developed and useful results obtained from them. But in the absence of equations of motion these theories cannot be presented as a logical development of the principles set up in this book. We are effectively in the pre-Bohr era with regard to these other interactions. It is to be hoped that with increasing knowledge a way will eventually be found for adapting the high-energy theories into a scheme based on equations of motion, and so unifying them with those of low-energy physics.”

Again, many thanks for reacting and, yes, I will study the references you gave – even if I am a bit skeptical of Wolfram’s new project. Cheers – JL

Paul Ehrenfest and the search for truth

On 25 September 1933, Paul Ehrenfest took his son Wassily, who was suffering from Down syndrome, for a walk in the park. He shot him, and then killed himself. He was only 53. That’s my age bracket. From the letters he left (here is a summary in Dutch), we know his frustration of not being able to arrive at some kind of common-sense interpretation of the new quantum physics played a major role in the anxiety that had brought him to this point. He had taken courses from Ludwig Boltzmann as an aspiring young man. We, therefore, think Boltzmann’s suicide – for similar reasons – might have troubled him too.

His suicide did not come unexpectedly: he had announced it. In one of his letters to Einstein, he complains about ‘indigestion’ from the ‘unendlicher Heisenberg-Born-Dirac-Schrödinger Wurstmachinen-Physik-Betrieb.’ I’ll let you google-translate that. :-/ He also seems to have gone through the trouble of summarizing all his questions on the new approach in an article in what was then one of the top journals for physics: Einige die Quantenmechanik betreffende Erkundigungsfragen, Zeitschrift für Physik 78 (1932) 555-559 (quoted in the above-mentioned review article). This I’ll translate: Some Questions about Quantum Mechanics.


Paul Ehrenfest in happier times (painting by Harm Kamerlingh Onnes in 1920)

A diplomat-friend of mine once remarked this: “It is good you are studying physics only as a pastime. Professional physicists are often troubled people—miserable.” It is an interesting observation from a highly intelligent outsider. To be frank, I understand this strange need to probe things at the deepest level—to be able to explain what might or might not be the case (I am using Wittgenstein’s definition of reality here). Even H.A. Lorentz, who – fortunately, perhaps – died before his successor did what he did, was becoming quite alarmist about the sorry state of academic physics near the end of his life—and he, Albert Einstein, and so many others were not alone. Not then, and not now. All of the founding fathers of quantum mechanics ended up becoming pretty skeptical about the theory they had created. We have documented that elsewhere so we won’t talk too much about it here. Even John Stewart Bell himself – one of the third generation of quantum physicists, we may say – did not like his own ‘No Go Theorem’ and thought that some “radical conceptual renewal”[1] might disprove his conclusions.

The Born-Heisenberg revolution has failed: most – if not all – contemporary high-brow physicists are pursuing alternative theories—in spite of, or because of, the academic straitjackets they have to wear. If a genius like Ehrenfest didn’t buy it, then I won’t buy it either. Furthermore, the masses surely don’t buy it and, yes, truth – in this domain too – is, fortunately, being defined more democratically nowadays. The Nobel Prize Committee will have to do some serious soul-searching—if not five years from now, then ten.

We feel sad for the physicists who died unhappily—and surely for those who took their own lives out of depression—because the common-sense interpretation they were seeking is so self-evident: de Broglie’s intuition in regard to matter being wavelike was correct. He just misinterpreted its nature: it is not a linear but a circular wave. We quickly insert the quintessential illustration (courtesy of Celani, Vassallo and Di Tommaso), but we refer the reader for more detail to our articles or – more accessible, perhaps – our manuscript for the general public.

[Illustration by Celani, Vassallo and Di Tommaso]

The equations are easy. The mass of an electron – of any matter-particle, really – is the equivalent mass of the oscillation of the charge it carries. This oscillation is, most probably, statistically regular only. So we think it is chaotic, actually, but we also think the words spoken by Polonius in Shakespeare’s Hamlet apply to it: “Though this be madness, yet there is method in ‘t.” This means we can meaningfully speak of a cycle time and, therefore, of a frequency. Erwin Schrödinger stumbled upon this motion while exploring solutions to Dirac’s wave equation for free electrons, and Dirac immediately grasped its significance, because he mentions Schrödinger’s discovery rather prominently in his Nobel Prize Lecture:

“It is found that an electron which seems to us to be moving slowly, must actually have a very high frequency oscillatory motion of small amplitude superposed on the regular motion which appears to us. As a result of this oscillatory motion, the velocity of the electron at any time equals the velocity of light. This is a prediction which cannot be directly verified by experiment, since the frequency of the oscillatory motion is so high and its amplitude is so small. But one must believe in this consequence of the theory, since other consequences of the theory which are inseparably bound up with this one, such as the law of scattering of light by an electron, are confirmed by experiment.” (Paul A.M. Dirac, Theory of Electrons and Positrons, Nobel Lecture, December 12, 1933)

Unfortunately, Dirac confuses the concept of the electron as a particle with the concept of the (naked) charge inside. Indeed, the idea of an elementary (matter-)particle must combine the idea of a charge and its motion to account for both the particle-like and the wave-like character of matter-particles. We do not want to dwell on all of this because we’ve written too many papers on this already. We just thought it would be good to sum up the core of our common-sense interpretation of physics. Why? To honor Boltzmann and Ehrenfest: I think of their demise as a sacrifice in the search for truth.

[…]

OK. That sounds rather tragic—sorry for that! For the sake of brevity, we will just describe the electron here.

I. Planck’s quantum of action (h) and the speed of light (c) are Nature’s most fundamental constants. Planck’s quantum of action relates the energy of a particle to its cycle time and, therefore, to its frequency:

(1) h = E·T = E/f ⇔ ħ = E/ω

The charge that is whizzing around inside of the electron has zero rest mass, and so it whizzes around at the speed of light: the slightest force on it gives it an infinite acceleration. It, therefore, acquires a relativistic mass which is equal to mγ = me/2 (we refer to our paper(s) for a relativistically correct geometric argument). The momentum of the pointlike charge, in its circular or orbital motion, is, therefore, equal to p = mγ·c = me·c/2.

The (angular) frequency of the oscillation is also given by the formula for the (angular) velocity:

(2) c = a·ω ⇔ ω = c/a

While Eq. (1) is a fundamental law of Nature, Eq. (2) is a simple geometric or mathematical relation only.

II. From (1) and (2), we can now calculate the radius of this tiny circular motion as:

(3a) ħ = E/ω = E·a/c ⇔ a = (ħ·c)/E

Because we know the mass of the electron is the inertial mass of the state of motion of the pointlike charge, we may use Einstein’s mass-energy equivalence relation to rewrite this as the Compton radius of the electron:

(3b) a = (ħ·c)/E = (ħ·c)/(me·c²) = ħ/(me·c)

Note that we only used two fundamental laws of Nature so far: the Planck-Einstein relation and Einstein’s mass-energy equivalence relation.
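To put a number on this, the little sketch below (plain Python, with the usual CODATA constants) computes the Compton radius a = ħ/(me·c) and the corresponding angular frequency ω = c/a.

```python
# Numerical sketch of Eqs. (2) and (3b): the Compton radius a = hbar/(m_e*c)
# of the electron and the corresponding angular frequency omega = c/a.
h_bar = 1.054571817e-34    # J*s
c     = 2.99792458e8       # m/s
m_e   = 9.1093837015e-31   # electron mass (kg)

a     = h_bar / (m_e * c)  # Compton radius
omega = c / a              # angular frequency of the presumed orbital motion

print(f"a     = {a:.4e} m   (about 0.386 pm)")
print(f"omega = {omega:.4e} rad/s")
```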

III. We must also be able to express the Planck-Einstein quantum as the product of the momentum (p) of the pointlike charge and some length λ:

(4) h = p·λ

The question here is: what length? The circumference of the loop, or its radius? The same geometric argument we used to derive the effective mass of the pointlike charge as it whizzes around at lightspeed around its center, tells us the centripetal force acts over a distance that is equal to two times the radius. Indeed, the relevant formula for the centripetal force is this:

(5) F = (mγ/me)·(E/a) = E/2a

We can therefore reduce Eq. (4) by dividing it by 2π. We then get reduced, angular or circular (as opposed to linear) concepts:

(6) ħ = (p·λ)/(2π) = (me·c/2)·(λ/π) = (me·c/2)·(2a) = me·c·a ⇔ ħ/a = me·c

We can verify the logic of our reasoning by substituting for the Compton radius:

ħ = (p·λ)/(2π) = me·c·a = me·c·ħ/(me·c) = ħ

IV. We can, finally, re-confirm the logic of our reasoning by re-deriving Einstein’s mass-energy equivalence relation as well as the Planck-Einstein relation using the ω = c/a and the ħ/a = me·c relations:

(7) ħ·ω = ħ·c/a = (ħ/a)·c = (me·c)·c = me·c² = E

Of course, we note all of the formulas we have derived are interdependent. We, therefore, have no clear separation between axioms and derivations here. If anything, we are only explaining what Nature’s most fundamental laws (the Planck-Einstein relation and Einstein’s mass-energy equivalence relation) actually mean or represent. As such, all we have is a simple description of reality itself—at the smallest scale, of course! Everything that happens at larger scales involves Maxwell’s equations: that’s all electromagnetic in nature. No need for strong or weak forces, or for quarks—who invented those? Ehrenfest, Lorentz and all who struggled to truly understand de Broglie’s concept of the matter-wave might have been happier physicists had they seen these simple equations!
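As a final numerical cross-check (a sketch only, and, as noted, the relations are interdependent anyway), one can verify that ħ·ω, with ω = c/a and a = ħ/(me·c), reproduces the electron rest energy of about 0.511 MeV:

```python
# Closing the loop on Eq. (7): with a = hbar/(m_e*c) and omega = c/a,
# hbar*omega should reproduce the electron rest energy m_e*c^2 (~0.511 MeV).
# A consistency check only: as noted above, the relations are interdependent.
h_bar = 1.054571817e-34    # J*s
c     = 2.99792458e8       # m/s
m_e   = 9.1093837015e-31   # electron mass (kg)
eV    = 1.602176634e-19    # joule per electronvolt

a = h_bar / (m_e * c)
E_planck_einstein = h_bar * (c / a)   # hbar * omega
E_mass_energy     = m_e * c**2        # Einstein's relation

print(f"hbar*omega = {E_planck_einstein / eV / 1e6:.4f} MeV")   # ~0.5110 MeV
print(f"m_e*c^2    = {E_mass_energy / eV / 1e6:.4f} MeV")       # ~0.5110 MeV
```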

The gist of the matter is this: the intuition of Einstein and de Broglie in regard to the wave-nature of matter was, essentially, correct. However, de Broglie’s modeling of it as a wave packet was not: modeling matter-particles as some linear oscillation does not do the trick. It is extremely surprising no one thought of trying some circular oscillation instead. Indeed, the interpretation of the elementary wavefunction as representing the mentioned Zitterbewegung of the electric charge solves all questions: it amounts to interpreting the real and imaginary part of the elementary wavefunction as the sine and cosine components of the orbital motion of a pointlike charge. We think that, in our 60-odd papers, we’ve shown such an easy interpretation effectively does the trick of explaining all of the quantum-mechanical weirdness but, of course, it is up to our readers to judge that. 🙂

[1] See: John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169–172, Cambridge University Press, 1987 (quoted from Wikipedia). J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics and which he, therefore, did not receive (Nobel Prizes are not awarded posthumously). He was just 62 years old then.

Re-writing Feynman’s Lectures?

I have a crazy new idea: a complete re-write of Feynman’s Lectures. It would be fun, wouldn’t it? I would follow the same structure—but start with Volume III, of course: the lectures on quantum mechanics. We could even re-use some language—although we’d need to be careful so as to keep Mr. Michael Gottlieb happy, of course. 🙂 What would you think of the following draft Preface, for example?

The special problem we try to get at with these lectures is to maintain the interest of the very enthusiastic and rather smart people trying to understand physics. They have heard a lot about how interesting and exciting physics is—the theory of relativity, quantum mechanics, and other modern ideas—and spend many years studying textbooks or following online courses. Many are discouraged because there are really very few grand, new, modern ideas presented to them. The problem is whether or not we can make a course which would save them by maintaining their enthusiasm.

The lectures here are not in any way meant to be a survey course, but are very serious. I thought it would be best to re-write Feynman’s Lectures to make sure that most of the above-mentioned enthusiastic and smart people would be able to encompass (almost) everything that is in the lectures. 🙂

This is the link to Feynman’s original Preface, so you can see how my preface compares to his: same-same but very different, they’d say in Asia. 🙂

[…]

Doesn’t that sound like a nice project? 🙂

Jean Louis Van Belle, 22 May 2020

Post scriptum: It looks like we made Mr. Gottlieb and/or MIT very unhappy already: the link above does not work for us anymore (see what we get below). That’s very good: it is always nice to start a new publishing project with a little controversy. 🙂 We will have to use the good old paper print edition. We recommend you buy one too, by the way. 🙂 I think they are just a bit over US$100 now. Well worth it!

To set the historical record straight, the reader should note we started this blog before Mr. Gottlieb brought Feynman’s Lectures online. We actually wonder why he would be bothered by us referring to it. That’s what classical textbooks are for, aren’t they? They create common references to agree or disagree with, and why put a book online if you apparently don’t want it to be read or discussed? Noise like this probably means I am doing something right here. 🙂

Post scriptum 2: Done! Or, at least, the first chapter is done! Have a look: here is the link on ResearchGate and this is the link on Phil Gibbs’ site. Please do let me know what you think of it—whether you like it or not or, more importantly, what logic makes sense and what doesn’t. 🙂


Classical principles of quantum physics

I summarized my 60-odd papers into one ‘manifesto’: it outlines what amounts to a full-blown classical interpretation of quantum mechanics. Have a look at it and let me know what you think ! 🙂

I should probably do a last paper on quantum-mechanical tunneling and potential barriers and their corollary, of course: potential wells. Indeed, the ring current model comes with a dynamic view of the fields surrounding charged particles. Potential barriers should, therefore, not be thought of as static fields: they vary in time. They are the joint result of two or more charges moving around. Hence, a particle breaking through a ‘potential wall’ or coming out of a potential ‘well’ probably just uses an opening which corresponds to a classical trajectory. However, it is not an easy analysis: it should be relativistically correct and we, therefore, need to describe the fields in terms of the vector potential and all that. I’ll need to look at the math again here. :-/

Post scriptum (1 June 2020): I just added an introduction to the paper. It talks about recent attempts to explain what might be going on inside of the atomic nucleus in terms of electromagnetic interactions only. Such analyses are usually referred to as an electromagnetic theory of nuclear interaction or – using more formidable language – nuclear lattice effective field theory (NLEFT) and they will, hopefully, gain much more acceptance in the future.[1] They should—because they make sense! 🙂

[1] Easily accessible references are, for example, Bernard Schaeffer (2016) or Paolo Di Sia (2018).