Dirac’s wave equation and particle models

Introduction

I had not touched physics since April last year, as I was struggling with cancer, and finally went in for surgery. It solved the problem but physical and psychological recovery was slow, and so I was in no mood to work on mathematical and physical questions. Now I am going through my ResearchGate papers again. I start with those that get a fair amount of downloads and – I am very pleased to see that happen – those are the papers that deal with very fundamental questions, and lay out the core of an intuition that is more widely shared now: physicists are lost in contradictions and will not get out of this fuzzy situation until they solve them.

[Skeptical note here: I note that those physicists who bark loudest about the need for a scientific revolution are, unfortunately, often those who obscure things even more. For example, I quickly went through Hossenfelder’s Lost in Math (and I also emailed her to highlight all that zbw theory can bring), but she did not even bother to reply and, more generally, shows no signs of being willing to go back to the roots: the solutions that were presented during the early Solvay conferences but that – because of some weird twist in the history of science, and despite the warnings of intellectual giants such as H.A. Lorentz, Ehrenfest, or Einstein (and also Dirac or Bell in the latter half of their lives) – were discarded. I have come to the conclusion that modern-day scientists cannot be fashionable while admitting that all mysteries were actually solved a long time ago.]

The key observation or contradiction is this: the formalism of modern quantum mechanics treats all particles – stable or unstable – as point objects: they are supposed to have no internal structure. At the same time, a whole range of what used to be thought of as intermediate mental constructs or temporary classifications – think of quarks here, or of the boson-fermion dichotomy – acquired ontological status. We lamented that in one of our very first papers (titled: The Difference between a Theory, a Calculation and an Explanation), which has few formulas and is, therefore, a much easier read than the others.

Some of my posts on this blog here were far more scathing and, therefore, not suitable to write out in papers. See, for example, my Smoking Gun Physics post, in which I talk much more loudly (but also more unscientifically) about the ontologicalization of quarks and all these theoretical force-carrying particles that physicists have invented over the past 50 years or so.

My point of view is clear and unambiguous: photons and neutrinos (both of which can be observed and measured) will do. The rest (the analysis of decay and the chain of reactions after high-energy collisions, mainly) can be analyzed using scattering matrices and other classical techniques (on that, I did write a paper highlighting the proposals of more enlightened people than me, like Bombardelli, 2016, even if I think researchers like Bombardelli should push back to basics even more than they do). By the way, I should probably go much further in my photon and neutrino models, but time prevented me from doing so. In any case, I did update and put an older paper of mine online, with some added thoughts on recent experiments that seem to confirm neutrinos have some rest mass. That is only what is to be expected, I would think. Have a look at it.

[…]

This is a rather lengthy introduction to the topic I want to write about for my public here, which is people like you and me: (amateur) physicists who want to make sense of all that is out there. So I will make a small summary of an equation I was never interested in: Dirac’s wave equation. Why my lack of interest before, and my renewed interest now?

The reason is this: Feynman clearly never believed Dirac’s equation added anything to Schrödinger’s, because he does not even mention it in his Lectures which, I believe, are still truly seminal today, even if they do not go into all of the stuff mainstream quantum physicists now believe to be true (which is, I repeat, all of the metaphysics around quarks and gluons and force-carrying bosons and all that). So I did not bother to dig into it.

However, when revising my paper on de Broglie’s matter-wave, I realized that I should have analyzed Dirac’s equation too, because I do analyze Schrödinger’s wave equation there (which makes sense), and also comment on the Klein-Gordon wave equation (which, just like Dirac’s, does not make much of an impression on me). Hence, I would say my renewed interest is only there because I wanted to tidy up a little corner in this kitchen of mine. 🙂

I will stop rambling now, and get on with it.

Dirac’s wave equation: concepts and issues

We should start by reminding ourselves what a wave equation actually is: it models how waves – sound waves, electromagnetic waves, or – in this particular case – a ‘wavicle’ or wave-particle – propagate in space and in time. As such, wave equations are often said to model the properties of the medium (think of properties such as elasticity, density, permittivity or permeability here) but, because we no longer think of spacetime as an aether, quantum-mechanical wave equations are far more abstract.
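To make that idea of ‘modelling the medium’ a little more tangible, here is a minimal numerical sketch of my own (nothing to do with Dirac yet): a textbook leapfrog scheme for the classical 1D wave equation uₜₜ = v²uₓₓ, in which the only ‘property of the medium’ is the propagation speed v.

```python
import math

# Minimal 1D wave equation u_tt = v^2 * u_xx, solved with a leapfrog scheme.
# The single 'medium property' here is the propagation speed v.
n = 400
dx = 1.0 / n
v = 1.0                       # wave speed, fixed by the medium
dt = 0.5 * dx / v             # satisfies the CFL stability condition
C2 = (v * dt / dx) ** 2

x = [i * dx for i in range(n)]
u_prev = [math.exp(-((xi - 0.3) / 0.05) ** 2) for xi in x]  # pulse at x = 0.3

# First step for a pulse starting at rest (zero initial velocity)
u = u_prev[:]
for i in range(1, n - 1):
    u[i] = u_prev[i] + 0.5 * C2 * (u_prev[i+1] - 2*u_prev[i] + u_prev[i-1])

# Leapfrog time-stepping up to t = 0.2
for _ in range(int(0.2 / dt) - 1):
    u_next = u[:]
    for i in range(1, n - 1):
        u_next[i] = 2*u[i] - u_prev[i] + C2 * (u[i+1] - 2*u[i] + u[i-1])
    u_prev, u = u, u_next

# The pulse splits into two half-pulses traveling at +v and -v, so their
# peaks should now sit near x = 0.3 - 0.2 = 0.1 and x = 0.3 + 0.2 = 0.5
peak = x[u.index(max(u))]
print(round(peak, 1))
```

A Gaussian pulse at rest splits into two half-pulses traveling at ±v, which is all the equation ‘knows’: the medium fixes the speed, the initial condition fixes the shape.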

I should insert a personal note here. I do have a personal opinion on the presumed reality of spacetime. It is not very solid, perhaps, because I oscillate between (1) Kant’s intuition – space and time are mental constructs only, which our mind uses to structure its impressions (we are talking science here, so I should say: our measurements) – and (2) the idea that the 2D or 3D oscillations of pointlike charges within, say, an electron, a proton or a muon-electron must involve some kind of elasticity of the ‘medium’ that we commonly refer to as spacetime (which is, I’d say, more in line with Wittgenstein’s philosophy of reality). I should look it up, but I think I talk about the elasticity of spacetime on one or two occasions in the papers that deal with internal forces in particles, or the papers in which I dig deep into the potentials that may or may not drive these oscillations. I am not sure how far I go there. Probably too far. But if properties such as vacuum permittivity or permeability are generally accepted, then why not think of elasticity? However, I did try to remain very cautious when it comes to postulating properties of the so-called spacetime vacuum, as evidenced by what I write in one of the papers referenced above:

“Besides proving that the argument of the wavefunction is relativistically invariant, this [analysis of the argument of the wavefunction] also demonstrates the relativistic invariance of the Planck-Einstein relation when modelling elementary particles.[1] This is why we feel that the argument of the wavefunction (and the wavefunction itself) is more real – in a physical sense – than the various wave equations (Schrödinger, Dirac, or Klein-Gordon) for which it is some solution. In any case, a wave equation usually models the properties of the medium in which a wave propagates. We do not think the medium in which the matter-wave propagates is any different from the medium in which electromagnetic waves propagate. That medium is generally referred to as the vacuum and, whether or not you think of it as true nothingness or as some medium, we think Maxwell’s equations – which establish the speed of light as an absolute constant – model its properties sufficiently well! We, therefore, think superluminal phase velocities are not possible, which is why we think de Broglie’s conceptualization of a matter particle as a wavepacket – rather than one single wave – is erroneous.[2]”

The basic idea is this: if the vacuum is true nothingness, then it cannot have any properties, right? 🙂 That is why I call the spacetime vacuum, as it is being modelled in modern physics, a so-called vacuum. 🙂

[…] I guess I am rambling again, and so I should get back to the matter at hand, and quite literally so, because we are effectively talking about real-life matter here. To be precise, we are talking about Dirac’s view of an electron moving in free space. Let me add the following clarification, just to make sure we understand exactly what we are talking about: free space is space without any potential in it: no electromagnetic, gravitational or other fields you might think of.

In reality, such free space does not exist: it is just one of those idealizations which we need to model reality. All of real-life space – the Universe we live in, in other words – has potential energy in it: electromagnetic and/or gravitational potential energy (no other potential energy has been convincingly demonstrated so far, so I will not add to the confusion by suggesting there might be more). Hence, there is no such thing as free space.

What am I saying here? Just that it is no bad thing to remind ourselves that Dirac’s construction is theoretical from the outset. To me, it feels like trying to present electromagnetism while abstracting away the magnetic side of the electromagnetic force entirely. That is all I am saying here. Nothing more, nothing less. No offense to the greatness of a mind like Dirac’s.

[…] I may have lost you as a reader just now, so let me try to get you back: Dirac’s wave equation. Right. Dirac develops it in two rather dense sections of his Principles of Quantum Mechanics, which I will not try to summarize here. I want to make it easy for the reader, so I will limit myself to an analysis of the very first principle(s) which Dirac develops in his Nobel Prize Lecture. It is this (relativistically correct) energy equation:

E² = m₀²c⁴ + p²c²

This equation may look unfamiliar to you but, frankly, if you are familiar with the basics of relativity theory, it should not come across as weird or unfathomable. It is one of the many basic ways of expressing relativity theory, as evidenced by the fact that Richard Feynman introduces it in the very first volume of his Lectures on Physics, in one of its more basic chapters: just click on the link and work yourself through it, and you will see it is just another rendering of Einstein’s mass-energy equivalence relation (E = mc²).
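Readers who like to check such claims numerically can do so in a few lines. This is just my own sanity check (Python, CODATA values), not anything from Dirac or Feynman: pick any velocity, compute E and p relativistically, and verify that E² − p²c² always comes out as (m₀c²)².

```python
import math

# Electron constants in SI units (CODATA values, rounded)
m0 = 9.1093837e-31       # electron rest mass [kg]
c = 299792458.0          # speed of light [m/s]

# Pick an arbitrary velocity, say v = 0.6c
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
p = gamma * m0 * v       # relativistic momentum
E = gamma * m0 * c ** 2  # total relativistic energy

# The invariant: E^2 should equal (m0 c^2)^2 + (p c)^2 regardless of v
lhs = E ** 2
rhs = (m0 * c ** 2) ** 2 + (p * c) ** 2
print(abs(lhs - rhs) / lhs)   # relative error: floating-point noise only
```

Change v to anything below c and the relative error stays at floating-point noise: the combination E² − p²c² is the invariant.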

The point is this: it is now very easy to understand Dirac’s basic energy equation – the one he uses to go from variables to quantum-mechanical operators and all of the other mathematically correct hocus-pocus that results in his wave equation. Just write W for the energy E, divide everything by c², and move all terms to one side:

W²/c² − p² − m₀²c² = 0

So here you are. All the rest is the usual hocus-pocus: we replace the classical variables with operators, and then we let them operate on a wavefunction (wave equations may or may not describe the medium, but wavefunctions surely do describe real-life particles), and then we have a complicated differential equation to solve and – as we made abundantly clear in this and other papers (one you may want to read is my brief history of quantum-mechanical ideas, because I had a lot of fun writing that one, and it is not technical at all) – when you do that, you will find nonsensical solutions, except for the one that Schrödinger pointed out: the Zitterbewegung electron, which we believe corresponds to the real-life electron.
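For the algebra-inclined: the equivalence of Dirac’s starting point with the relativistic energy-momentum relation can be checked symbolically. A sketch of mine, assuming the sympy package (the variable names are my own):

```python
import sympy as sp

E, p, m, c, W = sp.symbols('E p m c W', positive=True)

# Start from the relativistic energy-momentum relation
energy_relation = sp.Eq(E**2, m**2 * c**4 + p**2 * c**2)

# Write W for E and divide through by c^2, as in Dirac's Nobel Lecture
dirac_form = sp.Eq(W**2 / c**2 - p**2 - m**2 * c**2, 0)

# Substituting W = E shows the two are one and the same equation
difference = dirac_form.lhs.subs(W, E) - (energy_relation.lhs - energy_relation.rhs) / c**2
print(sp.simplify(difference))  # 0
```

So the ‘basic energy equation’ is not new physics: it is the same invariant, merely re-arranged into the form Dirac found convenient for the operator substitution.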

I will wrap this up (although you will say I have not done my job yet) by quoting a passage, and some comments, from my de Broglie paper:

Prof. H. Pleijel, then Chairman of the Nobel Committee for Physics of the Royal Swedish Academy of Sciences, dutifully notes this rather inconvenient property in the ceremonial speech for the 1933 Nobel Prize, which was awarded to Heisenberg for nothing less than “the creation of quantum mechanics”[1]:

“Matter is formed or represented by a great number of this kind of waves which have somewhat different velocities of propagation and such phase that they combine at the point in question. Such a system of waves forms a crest which propagates itself with quite a different velocity from that of its component waves, this velocity being the so-called group velocity. Such a wave crest represents a material point which is thus either formed by it or connected with it, and is called a wave packet. […] As a result of this theory, one is forced to the conclusion to conceive of matter as not being durable, or that it can have definite extension in space. The waves, which form the matter, travel, in fact, with different velocity and must, therefore, sooner or later separate. Matter changes form and extent in space. The picture which has been created, of matter being composed of unchangeable particles, must be modified.”

This should sound very familiar to you. However, it is, obviously, not true: real-life particles – electrons or atoms traveling in space – do not dissipate. Matter may change form and extent in space a little – such as, for example, when we force particles through one or two slits[2] – but not fundamentally so![3]
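It is worth seeing what the dispersion relation actually says here. The following sketch (my own, in units with ħ = c = m = 1) computes the phase and group velocity of the de Broglie wave from E(p) = √(m²c⁴ + p²c²): the phase velocity of a single component is superluminal (E/p = c²/v), while the group velocity of the crest equals the classical particle velocity.

```python
import math

# Units with hbar = 1; set c = 1 and m = 1 for the electron
c, m = 1.0, 1.0

def energy(p):
    # Relativistic dispersion E(p) = sqrt(m^2 c^4 + p^2 c^2)
    return math.sqrt(m**2 * c**4 + p**2 * c**2)

p = 0.75  # some momentum, in units of m*c

# Phase velocity of a single de Broglie wave: v_phase = E/p = c^2/v (> c!)
v_phase = energy(p) / p

# Group velocity of a packet: v_group = dE/dp, evaluated numerically
h = 1e-6
v_group = (energy(p + h) - energy(p - h)) / (2 * h)

# Classical particle velocity v = p c^2 / E
v_particle = p * c**2 / energy(p)

print(v_phase > c)                           # True: phase velocity is superluminal
print(abs(v_group - v_particle) < 1e-9)      # True: group velocity = particle velocity
print(abs(v_phase * v_group - c**2) < 1e-9)  # True: v_phase * v_group = c^2
```

So Pleijel’s ‘wave crest’ does travel at the particle velocity; the dissipation argument rests entirely on the component waves having different (superluminal) phase velocities.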

We repeat again, in very plain language this time: Dirac’s wave equation is essentially useless, except for the fact that it actually models the electron itself. That is why only one of its solutions makes sense, and that is the very trivial solution which Schrödinger pointed out: the Zitterbewegung electron, which we believe corresponds to the real-life electron. 🙂 It just goes through space and time like any ordinary particle would, but its trajectory is not given by Dirac’s wave equation. In contrast, Schrödinger’s wave equation (with or without a potential being present – in free or non-free space, in other words) does the trick and – against mainstream theory – I dare say, after analysis of its origins, that it is relativistically correct. Its only drawback is that it does not incorporate the most essential property of an elementary particle: its spin. That is why it models electron pairs rather than individual electrons.
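For concreteness, the Zitterbewegung numbers for the electron are easy to write down. This assumes the ring-current picture referred to above (a pointlike charge orbiting at c with angular frequency ω = mc²/ħ); the code itself is just arithmetic on CODATA constants:

```python
# Zitterbewegung (zbw) numbers for the electron, in SI units, assuming the
# ring-current model: a pointlike charge orbiting at c with omega = m*c^2/hbar
hbar = 1.054571817e-34   # reduced Planck constant [J*s]
m = 9.1093837e-31        # electron rest mass [kg]
c = 299792458.0          # speed of light [m/s]

omega = m * c**2 / hbar      # zbw angular frequency [rad/s]
radius = hbar / (m * c)      # orbit radius = reduced Compton wavelength [m]

# Consistency check: a charge on that orbit moves at the speed of light
print(abs(radius * omega - c) / c)   # ~0: tangential velocity equals c
```

The radius comes out as the reduced Compton wavelength (about 0.386 pm) and the frequency at roughly 7.76×10²⁰ rad/s; by construction, radius times frequency is exactly c.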

We can easily generalize to protons or other elementary or non-elementary particles. For a deeper discussion of Dirac’s wave equation (which is what you probably expected), I must refer, once again, to Annex II of my paper on the interpretation of de Broglie’s matter-wave: it is all there, really, and – glancing at it all once again – the math is actually quite basic. In any case, paraphrasing Euclid in his reply to King Ptolemy’s question, I would say that there is no royal road to quantum mechanics. One must go through its formalism and, far more important, its history of thought. 🙂

To conclude, I would like to return to one of the remarks I made in the introduction. What about the properties of the vacuum? I will remain cautious and, hence, not answer that question. I prefer to let you think about this rather primitive classification of what is relative and what is not, and about how the equations in physics mix the two. 🙂

 


[1] To be precise, Heisenberg got a postponed prize from 1932. Erwin Schrödinger and Paul A.M. Dirac jointly got the 1933 prize. Prof. Pleijel acknowledges all three in more or less equal terms in the introduction of his speech: “This year’s Nobel Prizes for Physics are dedicated to the new atomic physics. The prizes, which the Academy of Sciences has at its disposal, have namely been awarded to those men, Heisenberg, Schrödinger, and Dirac, who have created and developed the basic ideas of modern atomic physics.”

[2] The wave-particle duality of the ring current model should easily explain single-electron diffraction and interference (the electromagnetic oscillation which keeps the charge swirling would necessarily interfere with itself when being forced through one or two slits), but we have not had the time to engage in detailed research here.

[3] We will slightly nuance this statement later but we will not fundamentally alter it. We think of matter-particles as an electric charge in motion. Hence, as it acts on a charge, the nature of the centripetal force that keeps the particle together must be electromagnetic. Matter-particles, therefore, embody wave-particle duality. Of course, it makes a difference when this electromagnetic oscillation, and the electric charge, move through a slit or in free space. We will come back to this later. The point to note is: matter-particles do not dissipate. Feynman actually notes that at the very beginning of his Lectures on quantum mechanics, when describing the double-slit experiment for electrons: “Electrons always arrive in identical lumps.”


[1] The relativistic invariance of the Planck-Einstein relation emerges from other problems, of course. However, we see the added value of the model here in providing a geometric interpretation: the Planck-Einstein relation effectively models the integrity of a particle here.

[2] See our paper on matter-waves, amplitudes, and signals.


Deep electron orbitals and the essence of quantum physics

After a long break (more than six months), I have started to engage again in a few conversations. I also looked at the 29 papers on my ResearchGate page, and I realize some of them would need to be re-written or re-packaged so as to ensure a good flow. Also, some of the approaches were more productive than others (some did not lead anywhere at all, actually), and I would need to point those out. I have been thinking about how to approach this, and I think I am going to produce an annotated version of these papers, with comments and corrections as mark-ups. Re-writing or re-structuring all of them would require too much work.

The mark-up of those papers is probably going to be based on some ‘quick-fire’ remarks (a succession of thoughts triggered by one and the same question) which come out of the conversation below, so I thank these thinkers for having kept me in the loop of a discussion I had followed but not reacted to. It is an interesting one – on the question of ‘deep electron orbitals’ (read: do orbitals of negative charge inside a nucleus exist and, if so, how can one model them?). If one could solve that question, one would have a theoretical basis for what is referred to as low-energy nuclear reactions. That was formerly known as cold fusion, which got a bit of a bad name because of a number of crooks spoiling the field, unfortunately.

PS: I leave the family names of my correspondents out of the exchange below so they cannot be bothered. One of them, Jerry, is a former American researcher at SLAC. Andrew – the key researcher on DEPs – is a Canadian astrophysicist, and the third one – Jean-Luc – is a rather prominent French scientist in LENR.

From: Jean Louis Van Belle
Sent: 18 November 2021 22:51
Subject: Staying engaged (5)

Oh – and needless to say, Dirac’s basic equation can, of course, be expanded using the binomial expansion – just like the relativistic energy-momentum relation – and then one can ‘cut off’ the third-, fourth-, etc.-order terms and keep only the first- and second-order terms. Perhaps it is equations like that that kept you puzzled (I should check your original emails). In any case, this way of going about energy equations for elementary particles is a bit like perturbation theory, in which – as Dirac complained – one randomly selects terms that seem to make sense and discards others because they do not seem to make sense. Of course, Dirac criticized perturbation theory much more severely than this – and rightly so. 😊 😊 JL
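[As an aside for the reader of this thread: the cut-off mentioned above is easy to reproduce symbolically. This is just my own sketch, assuming the sympy package:]

```python
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)

# Relativistic energy as a function of momentum
E = sp.sqrt(m**2 * c**4 + p**2 * c**2)

# Binomial (Taylor) expansion in p, valid for p << m*c; keep terms up to p^4
approx = sp.series(E, p, 0, 6).removeO()

# The 'cut-off': E ~ m*c^2 + p^2/(2m) - p^4/(8 m^3 c^2) + ...
expected = m*c**2 + p**2/(2*m) - p**4/(8*m**3*c**2)
print(sp.simplify(approx - expected))  # 0
```

[Keeping only the first two terms gives the rest energy plus the classical kinetic energy p²/2m, which is exactly the sense in which Schrödinger’s equation is a low-momentum cut-off of the relativistic relation.]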

From: Jean Louis Van Belle
Sent: 18 November 2021 22:10
Subject: Staying engaged (4)

Also – I remember you had some questions on an energy equation – not sure which one – but so I found that Dirac’s basic equation (based on which he derives the ‘Dirac’ wave equation) is essentially useless because it incorporates linear momentum only. As such, it repeats de Broglie’s mistake, which is to interpret the ‘de Broglie’ wavelength as something linear. It is not: the frequencies and wavelengths are orbital frequencies and orbital circumferences. So anything you would want to do with energy equations based on that leads nowhere – in my not-so-humble opinion, of course. To illustrate the point, compare the relativistic energy-momentum relation and Dirac’s basic equation in his Nobel Prize lecture (I hope the subscripts/superscripts get through your email system so they display correctly):

m₀²c⁴ = E² − p²c² (see, for example, Feynman-I-16, formula 16-3)

Divide the above by c², re-arrange, and you get Dirac’s equation: W²/c² − pᵣ² − m²c² = 0 (see his 1933 Nobel Prize Lecture)

So that cannot lead anywhere. It’s why I totally discard Dirac’s wave equation (it has never yielded any practical explanation of a real-life phenomenon anyway, if I am not mistaken).

Cheers – JL

From: Jean Louis Van Belle
Sent: 18 November 2021 21:49
Subject: Staying engaged (3)

Just on ‘retarded sources’ and ‘retarded fields’ – I have actually tried to think of the ‘force mechanism’ inside of an electron or a proton (what keeps the pointlike charge in this geometric orbit around a center of mass?). I thought long and hard about some kind of model in which we have the charge radiate out a sub-Planck field, and that its ‘retarded effects’ might arrive ‘just in time’ to the other side of the orbital (or whatever other point on the orbital) so as to produce the desired ‘course correction’ might explain it. I discarded it completely: I am now just happy that we have ‘reduced’ the mystery to this ‘Planck-scale quantum-mechanical oscillation’ (in 2D or 3D orbitals) without the need for an ‘aether’, or quantized spacetime, or ‘virtual particles’ actually ‘holding the thing together’.

Also, a description in terms of four-vectors (scalar and vector potential) does not immediately call for ‘retarded time’ variables and all that, so that is another reason why I think one should somehow make the jump from E-B fields to the scalar and vector potential, even if the math is harder to visualize. If we want to ‘visualize’ things, Feynman’s discussion of the ‘energy’ and ‘momentum’ flow in https://www.feynmanlectures.caltech.edu/II_27.html might make sense, because analyses in terms of Poynting vectors are relativistically correct, aren’t they? It is just an intuitive idea…

Cheers – JL

From: Jean Louis Van Belle
Sent: 18 November 2021 21:28
Subject: Staying engaged (2)

But so – in the shorter run – say, the next three-six months, I want to sort out those papers on ResearchGate. The one on the de Broglie’s matter-wave (interpreting the de Broglie wavelength as the circumference of a loop rather than as a linear wavelength) is the one that gets most downloads, and rightly so. The rest is a bit of a mess – mixing all kinds of things I tried, some of which worked, but other things did not. So I want to ‘clean’ that up… 😊 JL

From: Jean Louis Van Belle
Sent: 18 November 2021 21:21
Subject: Staying engaged…

Please do include me in the exchanges, Andrew – even if I do not react, I do read them, because I do need some temptation and distraction. As mentioned, I wanted to focus on building a credible n = p + e model (for free neutrons, but probably more focused on a Schrödinger-like D = p + e + p Platzwechsel model, because the deuteron nucleus is stable). But I will not do that the way I studied the zbw model of the electron and proton (I believe that is sound now) – that is, by not putting in enough sleep. I want to do it slowly now. I find a lot of satisfaction in the fact that I think there is no need for complicated quantum field theories (fields are quantized, but in a rather obvious way: field oscillations – just like matter-particles – pack Planck’s quantum of (physical) action which – depending on whether you freeze time or position as the variable – expresses itself as a discrete amount of energy or, alternatively, as a discrete amount of momentum), nor is there any need for this ‘ontologization’ of virtual field interactions (sub-Planck scale) – the quark-gluon nonsense.

Also, it makes sense to distinguish between an electromagnetic and a ‘strong’ or ‘nuclear’ force: the electron and proton have different form factors (2D versus 3D oscillations, but that is a bit of a non-relativistic shorthand for what might be the case) but, in addition, there is clearly a much stronger force at play within the proton – whose strength is the same kind of ‘scale’ as the force that gives the muon-electron its rather enormous mass. So that is my ‘belief’ and the ‘heuristic’ models I build (a bit of ‘numerology’ according to Dr Pohl’s rather off-hand remarks) support it sufficiently for me to make me feel at peace about all these ‘Big Questions’.

I am also happy I figured out these inconsistencies around 720-degree symmetries (just the result of a non-rigorous application of Occam’s Razor: if you use all possible ‘signs’ in the wavefunction, then the wavefunction may represent matter as well as anti-matter particles, and this 720-degree weirdness dissolves). Finally, the kind of ‘renewed’ S-matrix programme for analyzing unstable particles (adding a transient factor to wavefunctions) makes sense to me, but even the easiest set of equations looks impossible to solve – so I may want to dig into the math of that if I feel like having endless amounts of time and energy (which I do not – but, after this cancer surgery, I know I will only die on some ‘moral’ or ‘mental’ battlefield twenty or thirty years from now – so I am optimistic).

So, in short, the DEP question does intrigue me – and you should keep me posted, but I will only look at it to see if it can help me on that deuteron model. 😊 That is the only ‘deep electron orbital’ I actually believe in. Sorry for the latter note.

Cheers – JL   

From: Andrew
Sent: 16 November 2021 19:05
To: Jean-Luc; Jerry; Jean Louis
Subject: Re: retarded potential?

Dear Jean-Louis,

Congratulations on your new position. I understand your present limitations, despite your incredible ability to be productive. They must be even worse than those imposed by my young kids and my age. Do you wish for us to not include you in our exchanges on our topic? Even with no expectation of your contributing at this point, such emails might be an unwanted temptation and distraction.

Dear Jean-Luc,

Thank you for the Wiki-Links. They are useful. I agree that the 4-vector potential should be considered. Since I am now considering the nuclear potentials as well as the deep orbits, it makes sense to consider the nuclear vector potentials to have an origin in the relativistic Coulomb potentials. I am facing this in my attempts to calculate the deep orbits from contributions to the potential energies that have a vector component, which non-rel Coulomb potentials do not have.

For example: do we include the losses in Vcb (e.g., from the binding energy BE) when we make the relativistic correction to the potential; or, how do we relativistically treat pseudo-potentials such as that of the centrifugal force? We know that, for equilibrium, the average forces must cancel. However, I’m not sure that it is possible to write out a proper expression for “A” to fit such cases.

Best regards to all,

Andrew

_ _ _

On Fri, Nov 12, 2021 at 1:42 PM Jean-Luc wrote:

Dear all,

I totally agree with the sentence of Jean-Louis, which I put in bold in his message, about the vector potential and the scalar potential, combined into a 4-vector potential A, for representing the EM field in covariant formulation. So the EM representation by the 4-vector A has been very much developed, as wished by JL, in the framework of QED.

We can note the simplicity of the Lorenz gauge written by using A:
   https://en.wikipedia.org/wiki/Lorenz_gauge_condition

We can see the reality of the vector potential in the Aharonov-Bohm effect:
    https://en.wikipedia.org/wiki/Aharonov-Bohm_effect
In fact, we can see that the vector potential contains more information than the E and B fields.
Best regards

   Jean-Luc
On 12/11/2021 at 05:43, Jean Louis Van Belle wrote:

Hi All – I’ve been absent in the discussion, and will remain absent for a while. I’ve been juggling a lot of work – my regular job at the Ministry of Interior (I got an internal promotion/transfer, and am now working on police and security sector reform) plus consultancies on upcoming projects in Nepal. In addition, I am still recovering from my surgery – I got a bad flu (not C19, fortunately) and it set back my auto-immune system, I feel. I have a bit of a holiday break now (combining the public holidays of 11 and 15 November in Belgium with some days off to bridge, so I have a rather nice super-long weekend – three in one, so to speak).

As for this thread, I feel like it is not ‘phrasing’ the discussion in the right ‘language’. Thinking of E-fields and retarded potential is thinking in terms of 3D potential, separating out space and time variables without using the ‘power’ of four-vectors (four-vector potential, and four-vector space-time). It is important to remind ourselves that we are measuring fields in continuous space and time (but, again, this is relativistic space-time – so us visualizing a 3D potential at some point in space is what it is: we visualize something because our mind needs that – wants that). The fields are discrete, however: a field oscillation packs one unit of Planck – always – and Planck’s quantum of action combines energy and momentum: we should not think of energy and momentum as truly ‘separate’ (discrete) variables, just like we should not think of space and time as truly ‘separate’ (continuous) variables.

I do not quite know what I want to say here – or how I should further work it out. I am going to re-read my papers. I think I should further develop the last one (https://www.researchgate.net/publication/351097421_The_concepts_of_charge_elementary_ring_currents_potential_potential_energy_and_field_oscillations), in which I write that the vector potential is more real than the electric field. That idea of the scalar and vector potential should be developed further: probably it is the combined scalar and vector potential that are the ‘real’ things – not the electric and magnetic field. Hence, illustrations like the one below – in terms of discs and cones in space – probably do not go all that far in terms of ‘understanding’ what is going on… It’s just an intuition…

Cheers – JL

From: Andrew
Sent: 23 September 2021 17:17
To: Jean-Luc; Jerry; Jean Louis
Subject: retarded potential?

Dear Jean-Luc,

Because of the claim that gluons are tubal, I have been looking at the disk-shaped E-field lines of the highly-relativistic electron and comparing them to the retarded potential, which, based on timing, would seem to give a cone rather than a disk (see figure). This makes a difference when we consider a deep-orbiting electron. It even impacts the selection of the model for the impact of an electron when considering diffraction and interference.

Even if the field appears to be spreading out as a cone, the direction of the field lines is that of a disk from the retarded source. However, how does it interact with the radial field of a stationary charge?

Do you have any thoughts on the matter?

Best regards,

Andrew

_ _ _

On Thu, Sep 23, 2021 at 5:05 AM Jean-Luc wrote:

Dear Andrew, Thank you for the references. Best regards, Jean-Luc

On 18/09/2021 at 17:32, Andrew wrote:
> This might have useful thoughts concerning the question of radiation
> decay to/from EDOs.
>
> Quantum Optics: Electrons see the quantum nature of light
> Ian S. Osborne
> We know that light is both a wave and a particle, and this duality
> arises from the classical and quantum nature of electromagnetic
> excitations. Dahan et al. observed that all experiments to date in
> which light interacts with free electrons have been described with
> light considered as a wave (see the Perspective by Carbone). The
> authors present experimental evidence revealing the quantum nature of
> the interaction between photons and free electrons. They combine an
> ultrafast transmission electron microscope with a silicon-photonic
> nanostructure that confines and strengthens the interaction between
> the light and the electrons. The “quantum” statistics of the photons
> are imprinted onto the propagating electrons and are seen directly in
> their energy spectrum.
> Science, abj7128, this issue p. 1324; see also abl6366, p. 1309

Feynman’s Lectures: A Survivor’s Guide

A few days ago, I mentioned I felt like writing a new book: a sort of guidebook for amateur physicists like me. I realized that it is actually fairly easy to do. I have three very basic papers – one on particles (both light and matter), one on fields, and one on the quantum-mechanical toolbox (amplitude math and all of that). But then there is a lot of nitty-gritty to be written about the technical stuff, of course: self-interference, superconductors, the behavior of semiconductors (as used in transistors), lasers, and so many other things – and all of the math that comes with it. However, for that, I can refer you to Feynman’s three volumes of lectures, of course. In fact, I should: it’s all there. So… Well… That’s it, then. I am done with the QED sector. Here is my summary of it all (links to the papers on Phil Gibbs’ site):

Paper I: Quantum behavior (the abstract should enrage the dark forces)

Paper II: Probability amplitudes (quantum math)

Paper III: The concept of a field (why you should not bother about QFT)

Paper IV: Survivor’s guide to all of the rest (keep smiling)

Paper V: Uncertainty and the geometry of the wavefunction (the final!)

The last paper is interesting because it shows statistical indeterminism is the only real indeterminism. We can, therefore, use Bell’s Theorem to argue our theory is complete: there is no need for hidden variables, so why should we bother trying to prove or disprove that they can or cannot exist?

Jean Louis Van Belle, 21 October 2020

Note: As for the QCD sector, that is a mess. We might have to wait another hundred years or so to see the smoke clear up there. Or, who knows, perhaps some visiting alien(s) will come and give us a decent alternative for the quark hypothesis and quantum field theories. One of my friends thinks so. Perhaps I should trust him more. 🙂

As for Phil Gibbs, I should really thank him for being one of the smartest people on Earth – and for his site, of course. Brilliant forum. Does what Feynman wanted everyone to do: look at the facts, and think for yourself. 🙂

Re-writing Feynman’s Lectures?

I have a crazy new idea: a complete re-write of Feynman’s Lectures. It would be fun, wouldn’t it? I would follow the same structure—but start with Volume III, of course: the lectures on quantum mechanics. We could even re-use some language—although we’d need to be careful so as to keep Mr. Michael Gottlieb happy, of course. 🙂 What would you think of the following draft Preface, for example?

The special problem we try to get at with these lectures is to maintain the interest of the very enthusiastic and rather smart people trying to understand physics. They have heard a lot about how interesting and exciting physics is—the theory of relativity, quantum mechanics, and other modern ideas—and spend many years studying textbooks or following online courses. Many are discouraged because there are really very few grand, new, modern ideas presented to them. The problem is whether or not we can make a course which would save them by maintaining their enthusiasm.

The lectures here are not in any way meant to be a survey course, but are very serious. I thought it would be best to re-write Feynman’s Lectures to make sure that most of the above-mentioned enthusiastic and smart people would be able to encompass (almost) everything that is in the lectures. 🙂

This is the link to Feynman’s original Preface, so you can see how my preface compares to his: same-same but very different, they’d say in Asia. 🙂

[…]

Doesn’t that sound like a nice project? 🙂

Jean Louis Van Belle, 22 May 2020

Post scriptum: It looks like we made Mr. Gottlieb and/or MIT very unhappy already: the link above does not work for us anymore (see what we get below). That’s very good: it is always nice to start a new publishing project with a little controversy. 🙂 We will have to use the good old paper print edition. We recommend you buy one too, by the way. 🙂 I think they are just a bit over US$100 now. Well worth it!

To set the historical record straight, the reader should note that we started this blog before Mr. Gottlieb brought Feynman’s Lectures online. We actually wonder why he would be bothered by us referring to it. That’s what classical textbooks are for, aren’t they? They create common references to agree or disagree with, and why put a book online if you apparently don’t want it to be read or discussed? Noise like this probably means I am doing something right here. 🙂

Post scriptum 2: Done! Or, at least, the first chapter is done! Have a look: here is the link on ResearchGate and this is the link on Phil Gibbs’ site. Please do let me know what you think of it—whether you like it or not or, more importantly, what logic makes sense and what doesn’t. 🙂

[Screenshot: the message from Mr. Gottlieb we now get when clicking the link]

The last words of H.A. Lorentz

I talked about the Solvay Conferences in my previous post(s). The Solvay Conference proceedings are a real treasure trove. Not only are they very pleasant to read, but they also debunk more than one myth or mystery in quantum physics!

It is part of scientific lore, for example, that the 1927 Solvay Conference was a sort of battlefield on new physics between Heisenberg and Einstein. Surprisingly, the papers and write-up of discussions reveal that Einstein hardly intervened. They also reveal that ‘battlefield stories’ such as Heisenberg telling Einstein to “stop telling God what to do” or – vice versa – Einstein declaring “God doesn’t play dice” are what they are: plain gossip or popular hearsay. Neither Heisenberg nor Einstein ever said that—or not on the occasion of the 1927 Solvay Conference, at least! Instead, we see very nuanced and very deep philosophical statements—on both sides of the so-called ‘divide’ or ‘schism’.

From all the interventions, that of the Dutch scientist Hendrik Antoon Lorentz stands out. I know (most of) my readers don’t get French, so I have translated it into English below.

It is all very weird, emotional and historical. H.A. Lorentz – clearly the driving force behind those pre-WW II Solvay Conferences – died a few months after the 1927 Conference. In fact, the 1927 conference proceedings contain both the sad announcement of his demise as well as his interventions—such was the practice of actually physically printing stuff at the time.

Here is his intervention (my translation from the French):

GENERAL DISCUSSION OF THE NEW IDEAS PRESENTED.

Causality, Determinism, Probability.

Intervention by Mr. Lorentz:

“I would like to draw attention to the difficulties one encounters in the old theories. We want to form a representation of the phenomena, to form an image of them in our mind. Until now, we have always wanted to form these images by means of the ordinary notions of time and space. These notions are perhaps innate; in any case, they have developed through our personal experience, through our daily observations. For me, these notions are clear, and I confess that I cannot form any idea of physics without them. The image I want to form of the phenomena must be absolutely sharp and definite, and it seems to me that we can only form such an image within this system of space and time.

For me, an electron is a corpuscle which, at a given instant, is located at a determinate point in space, and if I have the idea that at a following moment this corpuscle is located elsewhere, I must think of its trajectory, which is a line in space. And if this electron meets an atom and penetrates it, and after several adventures leaves the atom, I forge myself a theory in which this electron conserves its individuality; that is to say, I imagine a line along which the electron passes through the atom. It may well be, obviously, that such a theory is very difficult to develop, but a priori it does not seem impossible to me.

I imagine that, in the new theory, we still have these electrons. It is possible, of course, that in the new theory, once it is well developed, it will be necessary to suppose that these electrons undergo transformations. I am quite willing to admit that the electron dissolves into a cloud. But then I would ask on what occasion this transformation occurs. If one wanted to forbid me such an inquiry by invoking a principle, that would trouble me very much. It seems to me that one may always hope to do later what we cannot yet do at this moment. Even if we abandon the old ideas, we can always keep the old names. I would like to preserve that old ideal of describing everything that happens in the world by means of sharp images. I am ready to accept other theories, on condition that one can translate them into clear and sharp images.

For my part, although I am not yet familiar with the new ideas that I now hear expressed, I could picture these ideas as follows. Take the case of an electron that meets an atom; suppose the electron leaves the atom and that, at the same time, a light quantum is emitted. One must consider, in the first place, the systems of waves that correspond to the electron and to the atom before the collision. After the collision, we will have new systems of waves. These systems of waves can be described by a function ψ, defined in a space of a great number of dimensions, which satisfies a differential equation. The new wave mechanics will operate with this equation and will establish the function ψ before and after the collision.

Now, there are phenomena which teach us that there is something else besides these waves, namely corpuscles; one can, for example, do an experiment with a Faraday cylinder; we must therefore take into account the individuality of the electrons, and also of the photons. I think I would find that, to explain the phenomena, it suffices to admit that the expression ψψ* gives the probability that these electrons and photons exist in a determinate volume; that would suffice for me to explain the experiments.

But the examples given by Mr. Heisenberg teach me that I would thereby have attained everything that experiment allows us to attain. Now, I think that this notion of probability should come at the end, and as a conclusion, of the theoretical considerations—not as an a priori axiom, though I am quite willing to admit that this indeterminacy corresponds to the experimental possibilities. I could always keep my deterministic faith for the fundamental phenomena, of which I have not spoken. Could not a deeper mind be aware of the motions of these electrons? Could one not keep determinism by making it the object of a belief? Must one necessarily elevate indeterminism to a principle?”

The emphasis on that last question is mine. A free rendering of it is this:

Why should we elevate determinism or – as Born and Heisenberg do – its opposite (indeterminism) to a philosophical principle?

What a beautiful statement! Lorentz died of a very trivial cause: erysipelas, commonly known as St Anthony’s fire. :-/

Where things went wrong, exactly!

As mentioned in my previous post, Oliver Consa traces all of the nonsense in modern physics back to the Shelter Island (1947), Pocono (1948) and Oldstone (1949) Conferences. However, the first Solvay Conference that was organized after WW II was quite significant too. Niels Bohr and Robert Oppenheimer pretty much dominated it: Bohr provided the introductory lecture ‘On the Notions of Causality and Complementarity’, while Oppenheimer’s ‘Electron Theory’ set the tone for subsequent Solvay Conferences—most notably the one that would consecrate quantum field theory (QFT), which was held 13 years later (1961).

Indeed, the discussion between Oppenheimer and Dirac on the ‘Electron Theory’ paper in 1948 seems to be where things might have gone wrong—in terms of the ‘genealogy’ or ‘archaeology’ of modern ideas, so to speak. In fact, both Oppenheimer and Dirac made rather historic blunders there:

  1. Oppenheimer used perturbation theory to arrive at some kind of ‘new’ model of the electron, based on Schwinger’s new QFT models—which, as we now know, do not really lead anywhere.
  2. Dirac, however, was just too stubborn: he simply kept defending his indefensible electron equation—which, of course, also doesn’t lead anywhere. [It is rather significant that he was no longer invited to the next Solvay Conference.]

It is, indeed, very weird that Dirac does not follow through on his own conclusion: “Only a small part of the wave function has a physical meaning. We now have the problem of picking out that very small physical part of the exact solution of the wave equation.”

It’s the ring current or Zitterbewegung electron, of course—the one trivial solution he thought was so significant in his 1933 Nobel Prize lecture… The other part of the solution consists, effectively, of bizarre oscillations, which he refers to as ‘run-away electrons’.

It’s nice to sort of ‘get’ this. 🙂

Explaining the Lamb shift in classical terms

The coronavirus is bad, but it does have one advantage: more time to work on my hobby! I finally managed to have a look at what the (in)famous Lamb shift may or may not be. Here is the link to the paper.

I think it’s good. Why? Well… It’s that other so-called ‘high-precision test’ of mainstream quantum mechanics (read: quantum field theory), but I found it’s just like the rest: ‘Cargo Cult Science.’ [I must acknowledge a fellow amateur physicist and blogger for that reference: it is, apparently, a term coined by Richard Feynman!]

To All: Enjoy and please keep up the good work in these very challenging times!

🙂

Mainstream QM: A Bright Shining Lie

Yesterday night, I got this email from a very bright young physicist: Dr. Oliver Consa. He is someone who – unlike me – does have the required Dr and PhD credentials in physics (I have a drs. title in economics) – and the patience that goes with it – to make some more authoritative statements in the weird world of quantum mechanics. I recommend you click the link in the email (copied below) and read the paper. Please do it! 

It is just 12 pages, and it is all extremely revealing. Very discomforting, actually, in light of all the other revelations on fake news in other spheres of life.

Many of us – and, here, I just refer to those who are reading my post – sort of suspected that some ‘inner circle’ in the academic circuit had cooked things up: the Mystery Wallahs, as I refer to them now. Dr. Consa’s paper shows our suspicion is well-founded.

QUOTE

Dear fellow scientist,

I send you this mail because you have been skeptical about Foundations of Physics. I think that this new paper will be of your interest. Feel free to share it with your colleagues or publish it on the web. I consider it important that this paper serves to open a public debate on this subject.

Something is Rotten in the State of QED
https://vixra.org/pdf/2002.0011v1.pdf

Abstract
“Quantum electrodynamics (QED) is considered the most accurate theory in the history of science. However, this precision is based on a single experimental value: the anomalous magnetic moment of the electron (g-factor). An examination of QED history reveals that this value was obtained using illegitimate mathematical traps, manipulations and tricks. These traps included the fraud of Kroll & Karplus, who acknowledged that they lied in their presentation of the most relevant calculation in QED history. As we will demonstrate in this paper, the Kroll & Karplus scandal was not a unique event. Instead, the scandal represented the fraudulent manner in which physics has been conducted from the creation of QED through today.”  (12 pag.)

Best Regards,
Oliver Consa
oliver.consa@gmail.com

UNQUOTE

The Mystery Wallahs

I’ve been working across Asia – mainly South Asia – for over 25 years now. You can google the exact meaning, but my definition of a wallah is someone who deals in something: it may be a street vendor, or a handyman, or anyone who brings something new. I remember I was one of the first to bring modern mountain bikes to India, and they called me a gear wallah—because they were absolutely fascinated by the number of gears I had. [Mountain bikes are now back to a 2-by-10 or even a 1-by-11 set-up, but I still like those three plateaux in front on my older bikes—and, yes, my collection is becoming way too large but I just can’t do away with it.]

In any case, let me explain the title of this post. I stumbled on the work of the research group around Herman Batelaan in Nebraska. Absolutely fascinating! Not only did they actually do the electron double-slit experiment, but their ideas on an actual Stern-Gerlach experiment with electrons are quite interesting too: https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1031&context=physicsgay

I also want to look at their calculations on momentum exchange between electrons in a beam: https://iopscience.iop.org/article/10.1088/1742-6596/701/1/012007.

Outright fascinating. Brilliant! […]

It just makes me wonder: why is the outcome of this 100-year-old battle between mainstream hocus-pocus and real physics so undecided?

I’ve come to think of mainstream physicists as peddlers in mysteries—whence the title of my post. It’s a tough conclusion. Physics is supposed to be the King of Science, right? Hence, we shouldn’t doubt it. At the same time, it is kinda comforting to know the battle between truth and lies rages everywhere—including inside of the King of Science.

JL

A common-sense interpretation of (quantum) physics

This is my summary of what I refer to as a common-sense interpretation of quantum physics. It’s a rather abstruse summary of the 40 papers I wrote over the last two years.

1. A force acts on a charge. The electromagnetic force acts on an electric charge (there is no separate magnetic charge) and the strong force acts on a strong charge. A charge is a charge: a pointlike ‘thing’ with zero rest mass. The idea of an electron combines the idea of a charge and its motion (Schrödinger’s Zitterbewegung). The electron’s rest mass is the equivalent mass of the energy in its motion (mass without mass). The elementary wavefunction represents this motion.

2. There is no weak force: a force theory explaining why charges stay together must also explain when and how they separate. A force works through a force field: the idea that forces are mediated by virtual messenger particles resembles 19th century aether theory. The fermion-boson dichotomy does not reflect anything real: we have charged and non-charged wavicles (electrons versus photons, for example).

3. The Planck-Einstein law embodies a (stable) wavicle. A stable wavicle respects the Planck-Einstein relation (E = hf) and Einstein’s mass-energy equivalence relation (E = m·c²). A wavicle will, therefore, carry energy but it will also pack one or more units of Planck’s quantum of action. Planck’s quantum of action represents an elementary cycle in Nature. An elementary particle embodies the idea of an elementary cycle.

4. The ‘particle zoo’ is a collection of unstable wavicles: they disintegrate because their cycle is slightly off (the integral of the force over the distance of the loop and over the cycle time is not exactly equal to h).

5. An electron is a wavicle that carries charge. A photon does not carry charge: it carries energy between wavicle systems (atoms, basically). It can do so because it is an oscillating field.

6. An atom is a wavicle system. A wavicle system has an equilibrium energy state. This equilibrium state packs one unit of h. Higher energy states pack two, three,…, n units of h. When an atom transitions from one energy state to another, it will emit or absorb a photon that (i) carries the energy difference between the two energy states and (ii) packs one unit of h.

7. Nucleons (protons and neutrons) are held together because of a strong force. The strong force acts on a strong charge, for which we need to define a new unit: we choose the dirac but – out of respect for Yukawa – we write one dirac as 1 Y. If Yukawa’s function models the strong force correctly, then the strong force – which we denote as FN – can be calculated from the Yukawa potential:

[Formula F1: the force FN derived from the Yukawa potential]

This function includes a scale parameter a and a nuclear proportionality constant υ0. Besides its function as an (inverse) mathematical proportionality constant, υ0 also ensures the physical dimensions on the left- and right-hand sides of the force equation are the same. We can choose to equate its numerical value to one.

8. The nuclear force attracts two positive electric charges. The electrostatic force repels them. These two forces are equal at a distance r = a. The strong charge unit (gN) can, therefore, be calculated. It is equal to:

[Formula F2: the strong charge unit gN]

9. Nucleons (protons or neutrons) carry both electric and strong charge (qe and gN). A kinematic model disentangling the two has not yet been found. Such a model should explain the magnetic moment of protons and neutrons.

10. We think of a nucleus as a wavicle system too. When going from one energy state to another, the nucleus emits or absorbs neutrinos. Hence, we think of the neutrino as the photon of the strong force. Such changes in energy states may also involve the emission and/or absorption of an electric charge (an electron or a positron).
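As a quick numerical check of points 1–3 above (standard CODATA constants; this only verifies the internal consistency of the Planck-Einstein and mass-energy relations in the ring-current picture – it does not, by itself, prove the interpretation):

```python
import math

h = 6.62607015e-34       # Planck's quantum of action (J·s)
hbar = h / (2 * math.pi)
c = 299792458.0          # speed of light (m/s)
m_e = 9.1093837015e-31   # electron rest mass (kg)

E = m_e * c**2           # rest energy: E = m·c²
f = E / h                # Planck-Einstein frequency: E = h·f
a = hbar / (m_e * c)     # Compton radius of the electron

# In the ring-current (Zitterbewegung) picture, the pointlike charge moves
# at c on a loop of radius a, so c should equal a times omega = E/hbar:
omega = E / hbar
print(f"f = {f:.4e} Hz")
print(f"a = {a:.4e} m")
print(f"c / (a*omega) = {c / (a * omega):.6f}")
```

The frequency comes out at about 1.24 × 10²⁰ Hz and the Compton radius at about 3.86 × 10⁻¹³ m, and c = a·ω holds identically – which is just the statement that E = hf and E = mc² are consistent with a charge orbiting at lightspeed.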

Does this make sense? I look forward to your thoughts. 🙂

[…]

Because the above is all very serious, I thought it would be good to add something that will make you smile. 🙂

[Cartoon: Saint Schrödinger – as long as the tomb is closed, Jesus is both dead and alive.]

God’s Number explained

My posts on the fine-structure constant – God’s Number as it is often referred to – have always attracted a fair amount of views. I think that’s because I have always tried to clarify this or that relation by showing how and why exactly it pops up in this or that formula (e.g. Rydberg’s energy formula, the ratio of the various radii of an electron (Thomson, Compton and Bohr radius), the coupling constant, the anomalous magnetic moment, etcetera), as opposed to what most seem to try to do, and that is to further mystify it. You will probably not want to search through all of my writing so I will just refer you to my summary of these efforts on the viXra.org site: “Layered Motions: the Meaning of the Fine-Structure Constant”.

However, I must admit that – till now – I wasn’t quite able to answer this very simple question: what is that fine-structure constant? Why exactly does it appear as a scaling constant or a coupling constant in almost any equation you can think of but not in, say, Einstein’s mass-energy equivalence relation, or the de Broglie relations?

I finally have a final answer (pun intended) to the question, and it’s surprisingly easy: it is the radius of the naked charge in the electron expressed in terms of the natural distance unit that comes out of our realist interpretation of what an electron actually is. [For those who haven’t read me before, this realist interpretation is based on Schrödinger’s discovery of the Zitterbewegung of an electron.] That natural distance unit is the Compton radius of the electron: it is the effective radius of an electron as measured in inelastic collisions between high-energy photons and the electron. I like to think of it as a quantum of space in which interference happens but you will want to think that through for yourself. 

The point is: that’s it. That’s all. All the other calculations follow from it. Why? It would take me a while to explain that but, if you carefully look at the logic in my classical calculations of the anomalous magnetic moment, then you should be able to understand why these calculations are somewhat more fundamental than the others and why we can, therefore, get everything else out of them. 🙂
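The claim that α is the ratio of the naked-charge (Thomson) radius to the Compton radius can be checked numerically: it is the standard identity that the classical electron radius equals α times the reduced Compton wavelength (CODATA values below; nothing here is specific to the zbw interpretation).

```python
import math

e = 1.602176634e-19       # elementary charge (C)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
hbar = 1.054571817e-34    # reduced Planck constant (J·s)
c = 299792458.0           # speed of light (m/s)
m_e = 9.1093837015e-31    # electron rest mass (kg)

r_thomson = e**2 / (4 * math.pi * eps0 * m_e * c**2)  # classical (Thomson) radius
r_compton = hbar / (m_e * c)                          # (reduced) Compton radius

alpha = r_thomson / r_compton
print(f"r_thomson = {r_thomson:.4e} m")
print(f"r_compton = {r_compton:.4e} m")
print(f"alpha = {alpha:.6e}, 1/alpha = {1 / alpha:.3f}")
```

This recovers 1/α ≈ 137.036. The Bohr radius is, in turn, r_compton/α – the ‘layered motions’ scaling the paper above discusses.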

Post scriptum: I quickly checked the downloads of my papers on Phil Gibbs’ site, and I am extremely surprised my very first paper – on the quantum-mechanical wavefunction as a gravitational wave – still gets downloads. To whomever is interested in this paper, I would say: the realist interpretation we have been pursuing – based on the Zitterbewegung model of an electron – is based on the idea of a naked charge (with zero rest mass) orbiting around some center. The energy in its motion – a perpetual ring current, really – gives the electron its (equivalent) mass. That’s just Wheeler’s idea of ‘mass without mass’. But the force is definitely not gravitational. It cannot be. The force has to grab onto something, and all it can grab onto here is that naked charge. The force is, therefore, electromagnetic. It must be. I now look at my very first paper as a first immature essay. It did help me to develop some basic intuitive ideas on what any realist interpretation of QM should look like, but the quantum-mechanical wavefunction has nothing to do with gravity. Quantum mechanics is electromagnetics: we just add the quantum – the idea of an elementary cycle. Gravity is dealt with by general relativity theory: energy – or its equivalent mass – bends spacetime. That’s very significant, but it doesn’t help you when analyzing the QED sector of physics. I should probably pull this paper off the site – but I won’t, because I think it shows where I come from: very humble origins. 🙂

The metaphysics of physics

I realized that my last posts were just some crude and rude soundbites, so I thought it would be good to briefly summarize them into something more coherent. Please let me know what you think of it.

The Uncertainty Principle: epistemology versus physics

Anyone who has read anything about quantum physics will know that its concepts and principles are very non-intuitive. Several interpretations have therefore emerged. The mainstream interpretation of quantum mechanics is referred to as the Copenhagen interpretation. It mainly distinguishes itself from more frivolous interpretations (such as the many-worlds and the pilot-wave interpretations) because it is… Well… Less frivolous. Unfortunately, the Copenhagen interpretation itself seems to be subject to interpretation.

One such interpretation may be referred to as radical skepticism – or radical empiricism[1]: we can only say something meaningful about Schrödinger’s cat if we open the box and observe its state. According to this rather particular viewpoint, we cannot be sure of its reality if we don’t make the observation. All we can do is describe its reality by a superposition of the two possible states: dead or alive. That’s Hilbert’s logic[2]: the two states (dead or alive) are mutually exclusive but we add them anyway. If a tree falls in the wood and no one hears it, then it is both standing and not standing. Richard Feynman – who may well be the most eminent representative of mainstream physics – thinks this epistemological position is nonsensical, and I fully agree with him:

“A real tree falling in a real forest makes a sound, of course, even if nobody is there. Even if no one is present to hear it, there are other traces left. The sound will shake some leaves, and if we were careful enough we might find somewhere that some thorn had rubbed against a leaf and made a tiny scratch that could not be explained unless we assumed the leaf were vibrating.” (Feynman’s Lectures, III-2-6)

So what is the mainstream physicist’s interpretation of the Copenhagen interpretation of quantum mechanics then? To fully answer that question, I should encourage the reader to read all of Feynman’s Lectures on quantum mechanics. But then you are reading this because you don’t want to do that, so let me quote from his introductory Lecture on the Uncertainty Principle: “Making an observation affects the phenomenon. The point is that the effect cannot be disregarded or minimized or decreased arbitrarily by rearranging the apparatus. When we look for a certain phenomenon we cannot help but disturb it in a certain minimum way.” (ibidem)

It has nothing to do with consciousness. Reality and consciousness are two very different things. After having concluded the tree did make a noise, even if no one was there to hear it, he wraps up the philosophical discussion as follows: “We might ask: was there a sensation of sound? No, sensations have to do, presumably, with consciousness. And whether ants are conscious and whether there were ants in the forest, or whether the tree was conscious, we do not know. Let us leave the problem in that form.” In short, I think we can all agree that the cat is dead or alive, or that the tree is standing or not standing—regardless of the observer. It’s a binary situation. Not something in-between. The box obscures our view. That’s all. There is nothing more to it.

Of course, in quantum physics, we don’t study cats but look at the behavior of photons and electrons (we limit our analysis to quantum electrodynamics – so we won’t discuss quarks or other sectors of the so-called Standard Model of particle physics). The question then becomes: what can we reasonably say about the electron – or the photon – before we observe it, or before we make any measurement? Think of the Stern-Gerlach experiment, which tells us that we’ll always measure the angular momentum of an electron – along any axis we choose – as either +ħ/2 or, else, as -ħ/2. So what’s its state before it enters the apparatus? Do we have to assume it has some definite angular momentum, and that its value is as binary as the state of our cat (dead or alive, up or down)?

We should probably explain what we mean by a definite angular momentum. It’s a concept from classical physics, and it assumes a precise value (or magnitude) along some precise direction. We may challenge these assumptions. The direction of the angular momentum may be changing all the time, for example. If we think of the electron as a pointlike charge – whizzing around in its own space – then the concept of a precise direction of its angular momentum becomes quite fuzzy, because it changes all the time. And if its direction is fuzzy, then its value will be fuzzy as well. In classical physics, such fuzziness is not allowed, because angular momentum is conserved: it takes an outside force – or torque – to change it. But in quantum physics, we have the Uncertainty Principle: some energy (force over a distance, remember) can be borrowed – so to speak – as long as it is swiftly returned, within the quantitative limits set by the Uncertainty Principle: ΔE·Δt ≥ ħ/2.
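To get a feel for the scale of that ‘borrowing’: if the borrowed energy were a full electron rest energy (0.511 MeV), the relation ΔE·Δt ≥ ħ/2 would allow the loan for less than a zeptosecond. A back-of-the-envelope illustration (my numbers, not part of the argument above):

```python
hbar = 1.054571817e-34   # reduced Planck constant (J·s)
eV = 1.602176634e-19     # one electronvolt in joules

delta_E = 0.511e6 * eV           # 'borrow' one electron rest energy
delta_t = hbar / (2 * delta_E)   # maximum loan period: dE*dt >= hbar/2
print(f"delta_t = {delta_t:.3e} s")
```

That works out to about 6.4 × 10⁻²² s – which is why such ‘loans’ never show up at the scales of everyday observation.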

Mainstream physicists – including Feynman – do not try to think about this. For them, the Stern-Gerlach apparatus is just like Schrödinger’s box: it obscures the view. The cat is dead or alive, and each of the two states has some probability – but they must add up to one – and so they will write the state of the electron before it enters the apparatus as the superposition of the up and down states. I must assume you’ve seen this before:

|ψ〉 = Cup|up〉 + Cdown|down〉

It’s the so-called Dirac or bra-ket notation. Cup is the amplitude for the electron spin to be equal to +ħ/2 along the chosen direction – which we refer to as the z-direction because we will choose our reference frame such that the z-axis coincides with this chosen direction – and, likewise, Cdown is the amplitude for the electron spin to be equal to -ħ/2 (along the same direction, obviously). Cup and Cdown will be functions, and the associated probabilities will vary sinusoidally – with a phase difference so as to make sure both add up to one.
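The normalization constraint can be made concrete with the usual spin-1/2 parametrization Cup = cos(θ/2) and Cdown = e^(iφ)·sin(θ/2) – a standard textbook choice, not something specific to this post: whatever θ and φ are, the two probabilities add up to one.

```python
import cmath, math

def spin_state(theta: float, phi: float):
    """Amplitudes of a spin-1/2 state in the |up>/|down> basis."""
    c_up = math.cos(theta / 2)
    c_down = cmath.exp(1j * phi) * math.sin(theta / 2)
    return c_up, c_down

for theta in (0.0, math.pi / 3, math.pi / 2, math.pi):
    c_up, c_down = spin_state(theta, phi=0.7)
    p_up, p_down = abs(c_up)**2, abs(c_down)**2
    print(f"theta = {theta:.2f}: P(up) = {p_up:.3f}, "
          f"P(down) = {p_down:.3f}, sum = {p_up + p_down:.3f}")
```

The sum is cos²(θ/2) + sin²(θ/2) = 1 for every θ, which is all the superposition formalism guarantees by itself.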

The model is consistent, but it feels like a mathematical trick. This description of reality – if that’s what it is – does not feel like a model of a real electron. It’s like reducing the cat in our box to the mentioned fuzzy state of being alive and dead at the same time. Let’s try to come up with something more exciting. 😊

[1] Academics will immediately note that radical empiricism and radical skepticism are very different epistemological positions but we are discussing some basic principles in physics here rather than epistemological theories.

[2] The reference to Hilbert’s logic refers to Hilbert spaces: a Hilbert space is an abstract vector space. Its properties allow us to work with quantum-mechanical states, which become state vectors. You should not confuse them with the real or complex vectors you’re used to. The only things state vectors have in common with real or complex vectors are that (1) we also need a base (aka a representation in quantum mechanics) to define them and (2) we can make linear combinations.

The ‘flywheel’ electron model

Physicists describe the reality of electrons by a wavefunction. If you are reading this article, you know what a wavefunction looks like: it is a superposition of elementary wavefunctions. These elementary wavefunctions are written as Ai·exp(−iθi), so they have an amplitude Ai and an argument θi = (Ei/ħ)·t – (pi/ħ)·x. Let’s forget about uncertainty, so we can drop the index (i) and think of a geometric interpretation of A·exp(−iθ) = A·e−iθ.

Here we have a weird thing: physicists think the minus sign in the exponent (−iθ) should always be there: the convention is that we get the imaginary unit (i) by a 90° rotation of the real unit (1) – but that rotation is counterclockwise. I like to think a rotation in the clockwise direction must also describe something real. Hence, if we are seeking a geometric interpretation, then we should explore the two mathematical possibilities: A·e−iθ and A·e+iθ. I like to think these two wavefunctions describe the same electron but with opposite spin. How should we visualize this? I like to think of A·e−iθ and A·e+iθ as two-dimensional harmonic oscillators:

e−iθ = cos(−θ) + i·sin(−θ) = cosθ – i·sinθ

e+iθ = cosθ + i·sinθ

So we may want to imagine our electron as a pointlike electric charge (see the green dot in the illustration below) spinning around some center in either of the two possible directions. The cosine keeps track of the oscillation in one dimension, while the sine (plus or minus) keeps track of the oscillation in a direction that is perpendicular to the first one.

Figure 1: A pointlike charge in orbit

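The two rotation directions are easy to check numerically. The little script below – a sanity check, nothing more – verifies Euler’s formula for both signs and confirms that the clockwise and counterclockwise wavefunctions are each other’s complex conjugate:

```python
import cmath

# exp(-i*theta) and exp(+i*theta) trace the same unit circle in
# opposite directions, and one is the complex conjugate of the other.
theta = 0.7  # any angle (radians)
cw = cmath.exp(-1j * theta)   # clockwise rotation
ccw = cmath.exp(1j * theta)   # counterclockwise rotation

assert abs(cw - (cmath.cos(theta) - 1j * cmath.sin(theta))) < 1e-12
assert abs(ccw - (cmath.cos(theta) + 1j * cmath.sin(theta))) < 1e-12
assert abs(cw - ccw.conjugate()) < 1e-12  # opposite spin = conjugate
```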

So we have a weird oscillator in two dimensions here, and we may calculate the energy in this oscillation. To calculate such energy, we need a mass concept. We only have a charge here, but a (moving) charge has an electromagnetic mass. Now, the electromagnetic mass of the electron’s charge may or may not explain all the mass of the electron (most physicists think it doesn’t) but let’s assume it does for the sake of the model that we’re trying to build up here. The point is: the theory of electromagnetic mass gives us a very simple explanation for the concept of mass here, and so we’ll use it for the time being. So we have some mass oscillating in two directions simultaneously: we basically assume space is, somehow, elastic. We have worked out the V-2 engine metaphor before, so we won’t repeat ourselves here.

Figure 2: A perpetuum mobile?


A number of formulas that are structurally similar – but usually treated as unrelated – may be related here:

  1. The energy of an oscillator: E = (1/2)·m·a²·ω²
  2. Kinetic energy: E = (1/2)·m·v²
  3. The rotational (kinetic) energy that’s stored in a flywheel: E = (1/2)·I·ω² = (1/2)·m·r²·ω²
  4. Einstein’s energy-mass equivalence relation: E = m·c²

Of course, we are mixing relativistic and non-relativistic formulas here, and there’s the 1/2 factor – but these are minor issues. For example, we were talking about not one but two oscillators, so we should add their energies: (1/2)·m·a²·ω² + (1/2)·m·a²·ω² = m·a²·ω². Also, one can show that the classical formula for kinetic energy (i.e. E = (1/2)·m·v²) morphs into E = m·c² when we use the relativistically correct force equation for an oscillator. So, yes, our metaphor – or our suggested physical interpretation of the wavefunction, I should say – makes sense.

If you know something about physics, then you know the concept of the electromagnetic mass – its mathematical derivation, that is – gives us the classical electron radius, aka the Thomson radius. It’s the smallest of a trio of radii that are relevant when discussing electrons: the other two are the Bohr radius and the Compton scattering radius. The Thomson radius is used in the context of elastic scattering: the frequency of the incident particle (usually a photon), and the energy of the electron itself, do not change. In contrast, Compton scattering does change the frequency of the photon that is being scattered, and also impacts the energy of our electron. [As for the Bohr radius, you know that’s the radius of an electron orbital, roughly speaking – or the size of a hydrogen atom, I should say.]

Now, if we combine the E = m·a²·ω² and E = m·c² equations, then a·ω must be equal to c, right? Can we show this? Maybe. It is easy to see that we get the desired equality by substituting the (reduced) Compton radius r = ħ/(m·c) for the amplitude of the oscillation (a), and by using the Planck relation (ω = E/ħ) for ω, the (angular) frequency of the oscillation:

a·ω = [ħ/(m·c)]·[E/ħ] = E/(m·c) = m·c²/(m·c) = c
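Skeptical readers may want to check the arithmetic with actual CODATA values. The snippet below does just that – it is a pure sanity check on the algebra above, not a new physical claim:

```python
# Check that a*omega = c when a is the reduced Compton radius and
# omega follows from the Planck relation omega = E/hbar.
hbar = 1.054571817e-34  # reduced Planck constant (J*s)
m = 9.1093837015e-31    # electron mass (kg)
c = 299792458.0         # speed of light (m/s)

a = hbar / (m * c)         # reduced Compton radius (m), ~3.86e-13 m
omega = (m * c**2) / hbar  # angular frequency omega = E/hbar (rad/s)
assert abs(a * omega - c) / c < 1e-12  # tangential velocity equals c
```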

We get a wonderfully simple geometric model of an electron here: an electric charge that spins around in a plane. Its radius is the Compton radius – which makes sense – and the tangential velocity of our spinning charge is the speed of light – which may or may not make sense. Of course, we need an explanation of why this spinning charge doesn’t radiate its energy away – but then we don’t have such an explanation anyway. All we can say is that the electron charge seems to be spinning in its own space – that it’s racing along a geodesic. It’s as if mass creates its own space here: according to Einstein’s general relativity theory, gravity becomes a pseudo-force – literally: no real force. How? I am not sure: the model here assumes the medium – empty space – is, somehow, perfectly elastic: the electron constantly borrows energy from one direction and then returns it to the other – so to speak. A crazy model, yes – but is there anything better? We only want to present a metaphor here: a possible visualization of quantum-mechanical models.

However, if this model is to represent anything real, then many more questions need to be answered. For starters, let’s think about an interpretation of the results of the Stern-Gerlach experiment.

Precession

A spinning charge is a tiny magnet – and so it’s got a magnetic moment, which we need to explain the Stern-Gerlach experiment. But it doesn’t explain the discrete nature of the electron’s angular momentum: it’s either +ħ/2 or -ħ/2, nothing in-between, and that’s the case along any direction we choose. How can we explain this? Also, space is three-dimensional. Why would electrons spin in a perfect plane? The answer is: they don’t.

Indeed, the corollary of the above-mentioned binary value of the angular momentum is that the angular momentum – or the electron’s spin – is never completely along any direction. This may or may not be explained by the precession of a spinning charge in a field, which is illustrated below (illustration taken from Feynman’s Lectures, II-35-3).

Figure 3: Precession of an electron in a magnetic field

So we do have an oscillation in three dimensions here, really – even if our wavefunction is a two-dimensional mathematical object. Note that the measurement (or the Stern-Gerlach apparatus in this case) establishes a line of sight and, therefore, a reference frame, so ‘up’ and ‘down’, ‘left’ and ‘right’, and ‘in front’ and ‘behind’ get meaning. In other words, we establish a real space. The question then becomes: how and why does an electron sort of snap into place?

The geometry of the situation suggests the logical angle of the angular momentum vector should be 45°. Now, if the value of its z-component (i.e. its projection on the z-axis) is to be equal to ħ/2, then the magnitude of J itself should be larger. To be precise, it should be equal to ħ/√2 ≈ 0.7·ħ (just apply Pythagoras’ Theorem). Is that value compatible with our flywheel model?
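The Pythagorean argument is trivial, but here it is as a two-line check, working in units of ħ (the 45° angle is the geometric assumption of the argument, not a measured quantity):

```python
import math

# If the measured z-component is hbar/2 and the precession angle is
# 45 degrees, the magnitude of J follows from simple trigonometry.
hbar = 1.0                 # work in units of hbar
Jz = hbar / 2              # measured z-component
angle = math.radians(45)   # assumed precession angle
J = Jz / math.cos(angle)   # magnitude from the projection
assert abs(J - hbar / math.sqrt(2)) < 1e-12  # hbar/sqrt(2) ~ 0.707*hbar
```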

Maybe. Let’s see. The classical formula for the magnetic moment is μ = I·A, with I the (effective) current and A the (surface) area. The notation is confusing because I is also used for the moment of inertia, or rotational mass, but… Well… Let’s do the calculation. The effective current is the electron charge (qe) divided by the period (T) of the orbital revolution: I = qe/T. The period of the orbit is the time that is needed for the charge to complete one loop. That time (T) is equal to the circumference of the loop (2π·a) divided by the tangential velocity (vt). Now, we suggest vt = r·ω = a·ω = c, and the circumference of the loop is 2π·a. For a, we still use the Compton radius a = ħ/(m·c). Now, the formula for the area is A = π·a², so we get:

μ = I·A = [qe/T]·π·a² = [qe·c/(2π·a)]·[π·a²] = [(qe·c)/2]·a = [(qe·c)/2]·[ħ/(m·c)] = [qe/(2m)]·ħ
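The result is, of course, the Bohr magneton. Here is a quick numerical check of the derivation above – the choice of the Compton radius as orbit radius, and of c as the tangential velocity, are the model’s assumptions; the rest is arithmetic:

```python
import math

q_e = 1.602176634e-19   # elementary charge (C)
m = 9.1093837015e-31    # electron mass (kg)
c = 299792458.0         # speed of light (m/s)
hbar = 1.054571817e-34  # reduced Planck constant (J*s)

a = hbar / (m * c)       # Compton radius as orbit radius (model assumption)
T = 2 * math.pi * a / c  # period, with the charge moving at c (assumption)
I = q_e / T              # effective current
A = math.pi * a**2       # loop area
mu = I * A               # magnetic moment of the current loop

mu_B = q_e * hbar / (2 * m)  # Bohr magneton, ~9.274e-24 J/T
assert abs(mu - mu_B) / mu_B < 1e-12
```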

In a classical analysis, we have the following relation between angular momentum and magnetic moment:

μ = (qe/2m)·J

Hence, we find that the angular momentum J is equal to ħ, so that’s twice the measured value. We’ve got a problem: we would have hoped to find ħ/2 or ħ/√2. Perhaps it’s because a = ħ/(m·c) is the so-called reduced Compton radius…

Well… No.

Maybe we’ll find the solution one day. I think it’s already quite nice we have a model that’s accurate up to a factor of 1/2 or 1/√2. 😊

Post scriptum: I’ve turned this into a small article which may or may not be more readable. You can link to it here. Comments are more than welcome.

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://en.support.wordpress.com/copyright-and-the-dmca/

A Survivor’s Guide to Quantum Mechanics?

When modeling electromagnetic waves, the notion of left versus right circular polarization is quite clear and fully integrated in the mathematical treatment. In contrast, quantum math sticks to the very conventional idea that the imaginary unit (i) is – always! – a counterclockwise rotation by 90 degrees. We all know that –i would do just as well as an imaginary unit, because the definition of the imaginary unit says the only requirement is that its square has to be equal to –1, and (–i)² is also equal to –1.

So we actually have two imaginary units: i and –i. However, physicists stubbornly think there is only one direction for measuring angles, and that is counter-clockwise. That’s a mathematical convention, Professor: it’s something in your head only. It is not real. Nature doesn’t care about our conventions and, therefore, I feel the spin ‘up’ versus spin ‘down’ should correspond to the two mathematical possibilities: if the ‘up’ state is represented by some complex function, then the ‘down’ state should be represented by its complex conjugate.

This ‘additional’ rule wouldn’t change the basic quantum-mechanical rules – which are written in terms of state vectors in a Hilbert space (and, yes, a Hilbert space is as unreal as it sounds: its rules just say you should separate cats and dogs while adding them – which is very sensible advice, of course). However, it would, most probably (just my intuition – I need to prove it), avoid these crazy 720-degree symmetries which inspire the likes of Penrose to say there is no physical interpretation of the wavefunction.

Oh… As for the title of my post… I think it would be a great title for a book – because I’ll need some space to work it all out. 🙂

Quantum math: garbage in, garbage out?

This post is basically a continuation of my previous one but – as you can see from its title – it is much more aggressive in its language, as I was inspired by a very thoughtful comment on my previous post. Another advantage is that it avoids all of the math. 🙂 It’s… Well… I admit it: it’s just a rant. 🙂 [Those who wouldn’t appreciate the casual style of what follows, can download my paper on it – but that’s much longer and also has a lot more math in it – so it’s a much harder read than this ‘rant’.]

My previous post was actually triggered by an attempt to re-read Feynman’s Lectures on Quantum Mechanics, but in reverse order this time: from the last chapter to the first. [In case you doubt, I did follow the correct logical order when working my way through them for the first time because… Well… There is no other way to get through them otherwise. 🙂 ] But then I was looking at Chapter 20. It’s a Lecture on quantum-mechanical operators – so that’s a topic which, in other textbooks, is usually tackled earlier on. When re-reading it, I realize why people quickly turn away from the topic of physics: it’s a lot of mathematical formulas which are supposed to reflect reality but, in practice, few – if any – of the mathematical concepts are actually being explained. Not in the first chapters of a textbook, not in its middle ones, and… Well… Nowhere, really. Why? Well… To be blunt: I think most physicists themselves don’t really understand what they’re talking about. In fact, as I have pointed out a couple of times already, Feynman himself admits so much:

“Atomic behavior appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to.”

So… Well… If you’d be in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, here you have it: if you don’t understand what physicists are trying to tell you, don’t worry about it, because they don’t really understand it themselves. 🙂

Take the example of a physical state, which is represented by a state vector, which we can combine and re-combine using the properties of an abstract Hilbert space. Frankly, I think the term is very misleading, because such a state vector doesn’t actually describe a physical state. Why? Well… If we look at this so-called physical state from another angle, then we need to transform it using a complicated set of transformation matrices. You’ll say: that’s what we need to do when going from one reference frame to another in classical mechanics as well, isn’t it?

Well… No. In classical mechanics, we’ll describe the physics using geometric vectors in three dimensions and, therefore, the base of our reference frame doesn’t matter: because we’re using real vectors (such as the electric or magnetic field vectors E and B), our orientation vis-à-vis the object – the line of sight, so to speak – doesn’t matter.

In contrast, in quantum mechanics, it does: Schrödinger’s equation – and the wavefunction – has only two degrees of freedom, so to speak: its so-called real and its imaginary dimension. Worse, physicists refuse to give those two dimensions any geometric interpretation. Why? I don’t know. As I show in my previous posts, it would be easy enough, right? We know both dimensions must be perpendicular to each other, so we just need to decide if both of them are going to be perpendicular to our line of sight. That’s it. We’ve only got two possibilities here which – in my humble view – explain why the matter-wave is different from an electromagnetic wave.

I actually can’t quite believe the craziness when it comes to interpreting the wavefunction: we get everything we’d want to know about our particle through these operators (momentum, energy, position, and whatever else you’d need to know), but mainstream physicists still tell us that the wavefunction is, somehow, not representing anything real. It might be because of that weird 720° symmetry – which, as far as I am concerned, confirms that those state vectors are not the right approach: you can’t represent a complex, asymmetrical shape by a ‘flat’ mathematical object!

Huh? Yes. The wavefunction is a ‘flat’ concept: it has two dimensions only, unlike the real vectors physicists use to describe electromagnetic waves (which we may interpret as the wavefunction of the photon). Those have three dimensions, just like the mathematical space we project on events. Because the wavefunction is flat (think of a rotating disk), we have those cumbersome transformation matrices: each time we shift position vis-à-vis the object we’re looking at (das Ding an sich, as Kant would call it), we need to change our description of it. And our description of it – the wavefunction – is all we have, so that’s our reality. However, because that reality changes as per our line of sight, physicists keep saying the wavefunction (or das Ding an sich itself) is, somehow, not real.

Frankly, I do think physicists should take a basic philosophy course: you can’t describe what goes on in three-dimensional space if you’re going to use flat (two-dimensional) concepts, because the objects we’re trying to describe (e.g. non-symmetrical electron orbitals) aren’t flat. Let me quote one of Feynman’s famous lines on philosophers: “These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depth of the problem.” (Feynman’s Lectures, Vol. I, Chapter 16)

Now, I love Feynman’s Lectures but… Well… I’ve gone through them a couple of times now, so I do think I have an appreciation of the subtleties and depth of the problem now. And I tend to agree with some of the smarter philosophers: if you’re going to use ‘flat’ mathematical objects to describe three- or four-dimensional reality, then such approach will only get you where we are right now, and that’s a lot of mathematical mumbo-jumbo for the poor uninitiated. Consistent mumbo-jumbo, for sure, but mumbo-jumbo nevertheless. 🙂 So, yes, I do think we need to re-invent quantum math. 🙂 The description may look more complicated, but it would make more sense.

I mean… If physicists themselves have had continued discussions on the reality of the wavefunction for almost a hundred years now (Schrödinger published his equation in 1926), then… Well… Then the physicists have a problem. Not the philosophers. 🙂 As to how that new description might look, see my papers on viXra.org. I firmly believe it can be done. This is just a hobby of mine, but… Well… That’s where my attention will go over the coming years. 🙂 Perhaps quaternions are the answer but… Well… I don’t think so either – for reasons I’ll explain later. 🙂

Post scriptum: There are many nice videos on Dirac’s belt trick or, more generally, on 720° symmetries, but this links to one I particularly like. It clearly shows that the 720° symmetry requires, in effect, a special relation between the observer and the object that is being observed. It is, effectively, like there is a leather belt between them or, in this case, we have an arm between the glass and the person who is holding the glass. So it’s not like we are walking around the object (think of the glass of water) and making a full turn around it, so as to get back to where we were. No. We are turning it around by 360°! That’s a very different thing than just looking at it, walking around it, and then looking at it again. That explains the 720° symmetry: we need to turn it around twice to get it back to its original state. So… Well… The description is more about us and what we do with the object than about the object itself. That’s why I think the quantum-mechanical description is defective.

Should we reinvent wavefunction math?

Preliminary note: This post may cause brain damage. 🙂 If you haven’t worked yourself through a good introduction to physics – including the math – you will probably not understand what this is about. So… Well… Sorry. 😦 But if you have… Then this should be very interesting. Let’s go. 🙂

If you know one or two things about quantum math – Schrödinger’s equation and all that – then you’ll agree the math is anything but straightforward. Personally, I find the most annoying thing about wavefunction math to be those transformation matrices: every time we look at the same thing from a different direction, we need to transform the wavefunction using one or more rotation matrices – and that gets quite complicated!

Now, if you have read any of my posts on this or my other blog, then you know I firmly believe the wavefunction represents something real or… Well… Perhaps it’s just the next best thing to reality: we cannot know das Ding an sich, but the wavefunction gives us everything we would want to know about it (linear or angular momentum, energy, and whatever else we have an operator for). So what am I thinking of? Let me first quote Feynman’s summary interpretation of Schrödinger’s equation (Lectures, III-16-1):

“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”

Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. His analysis there is centered on the local conservation of energy, which makes me think Schrödinger’s equation might be an energy diffusion equation. I’ve written about this ad nauseam in the past, and so I’ll just refer you to one of my papers here for the details, and limit this post to the basics, which are as follows.

The wave equation (so that’s Schrödinger’s equation in its non-relativistic form, which is an approximation that is good enough) is written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t) – (i/ħ)·V·ψ(x, t)

The resemblance with the standard diffusion equation (shown below) is, effectively, very obvious:

∂φ(x, t)/∂t = D·∇²φ(x, t)

As Feynman notes, it’s just that imaginary coefficient that makes the behavior quite different. How exactly? Well… You know we get all of those complicated electron orbitals (i.e. the various wavefunctions that satisfy the equation) out of Schrödinger’s differential equation. We can think of these solutions as (complex) standing waves. They basically represent some equilibrium situation, and the main characteristic of each is its energy level. I won’t dwell on this because – as mentioned above – I assume you master the math. Now, you know that – if we want to interpret these wavefunctions as something real (which is surely what we want to do!) – the real and imaginary component of a wavefunction will be perpendicular to each other. Let me copy the animation for the elementary wavefunction ψ(θ) = a·e−i∙θ = a·e−i∙(E/ħ)·t = a·cos[(E/ħ)∙t] − i·a·sin[(E/ħ)∙t] once more:

[Animation: the elementary wavefunction as a rotation in the complex plane, with its cosine and sine components]

So… Well… That 90° angle makes me think of the similarity with the mathematical description of an electromagnetic wave. Let me quickly show you why. For a particle moving in free space – with no external force fields acting on it – there is no potential (V = 0) and, therefore, the Vψ term – which is just the equivalent of the sink or source term S in the diffusion equation – disappears. Therefore, Schrödinger’s equation reduces to:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)
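To see Feynman’s point about that imaginary coefficient concretely: for a single spatial mode e^(ikx), the diffusion equation damps the amplitude, while Schrödinger’s equation merely rotates its phase. The script below uses the exact mode solutions; the values of k, D and t are arbitrary illustrative choices:

```python
import cmath

# Exact time evolution of one spatial mode exp(i*k*x):
#   diffusion:   phi(t) = exp(-D*k^2*t)      -> real exponential decay
#   Schrodinger: psi(t) = exp(-i*D*k^2*t)    -> pure phase rotation
# (with D playing the role of hbar/(2*m_eff) in the quantum case)
k, D, t = 2.0, 0.5, 1.0

phi = cmath.exp(-D * k**2 * t)       # diffusion mode
psi = cmath.exp(-1j * D * k**2 * t)  # Schrodinger mode

assert abs(phi) < 1.0                # diffusion mode loses amplitude
assert abs(abs(psi) - 1.0) < 1e-12   # quantum mode keeps its amplitude
```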

Now, the key difference with the diffusion equation – let me write it for you once again: ∂φ(x, t)/∂t = D·∇²φ(x, t) – is that Schrödinger’s equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations:

  1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)

Huh? Yes. These equations are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙(1/2)∙(ħ/meff)∙∇²ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i² = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i²∙d = − d + i∙c. [Now that we’re getting a bit technical, let me note that meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m.] 🙂 OK. Onwards ! 🙂
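The complex-number bookkeeping is easy to verify for yourself:

```python
# Equating real and imaginary parts of a + i*b = i*(c + i*d):
# since i*(c + i*d) = -d + i*c, we must have a = -d and b = c.
c_, d_ = 3.0, 5.0        # arbitrary test values
z = 1j * (c_ + 1j * d_)  # the right-hand side
a_, b_ = z.real, z.imag
assert a_ == -d_ and b_ == c_  # real part = -d, imaginary part = c
```

That sign flip and swap between the real and imaginary parts is exactly what produces the two coupled equations above.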

The equations above make me think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = c²∇×B

Now, these equations – and, I must therefore assume, the other equations above as well – effectively describe a propagation mechanism in spacetime, as illustrated below:

[Illustration: the propagation mechanism of an electromagnetic wave]

You know how it works for the electromagnetic field: it’s the interplay between circulation and flux. Indeed, circulation around some axis of rotation creates a flux in a direction perpendicular to it, and that flux causes this, and then that, and it all goes round and round and round. 🙂 Something like that. 🙂 I will let you look up how it goes, exactly. The principle is clear enough. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle.

Now, we know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? I firmly believe they do. The obvious question then is the following: why wouldn’t we represent them as vectors, just like E and B? I mean… Representing them as vectors (I mean real vectors here – something with a magnitude and a direction in a real space – as opposed to state vectors from the Hilbert space) would show they are real, and there would be no need for cumbersome transformations when going from one representational base to another. In fact, that’s why vector notation was invented (sort of): we don’t need to worry about the coordinate frame. It’s much easier to write physical laws in vector notation because… Well… They’re the real thing, aren’t they? 🙂

What about dimensions? Well… I am not sure. However, because we are – arguably – talking about some pointlike charge moving around in those oscillating fields, I would suspect the dimension of the real and imaginary component of the wavefunction will be the same as that of the electric and magnetic field vectors E and B. We may want to recall these:

  1. E is measured in newton per coulomb (N/C).
  2. B is measured in newton per coulomb divided by m/s, so that’s (N/C)/(m/s).

The weird dimension of B is because of the weird force law for the magnetic force. It involves a vector cross product, as shown by Lorentz’ formula:

F = qE + q(v×B)

Of course, it is only one force (one and the same physical reality), as evidenced by the fact that we can write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). [Check it, because you may not have seen this expression before. Just take a piece of paper and think about the geometry of the situation.] Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, if we can agree on a suitable convention for the direction of rotation here, we may boldly write:

B = (1/c)∙ex×E = (1/c)∙iE
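That multiplication by i amounts to the same 90° rotation as the ex× operator is easy to verify if we represent the transverse (y, z) plane as the complex plane y + i·z:

```python
# The cross product e_x x E sends transverse components (Ey, Ez) to
# (-Ez, Ey). Multiplication by i does the same thing in the complex
# plane y + i*z, i.e. it is a 90-degree counterclockwise rotation.
E = 3.0 + 4.0j                 # some transverse field, (y, z) = (3, 4)
rotated = 1j * E               # multiply by i
assert rotated == -4.0 + 3.0j  # (3, 4) -> (-4, 3), as e_x x E gives
```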

This is, in fact, what triggered my geometric interpretation of Schrödinger’s equation about a year ago now. I have had little time to work on it, but think I am on the right track. Of course, you should note that, for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously (as shown below). So their phase is the same.

[Illustration: E and B oscillating in phase]

In contrast, the phase of the real and imaginary component of the wavefunction is not the same, as shown below.

[Illustration: the real and imaginary components of the wavefunction, 90° out of phase]

In fact, because of the Stern-Gerlach experiment, I am actually more thinking of a motion like this:

[Illustration: a circularly polarized motion]

But that shouldn’t distract you. 🙂 The question here is the following: could we possibly think of a new formulation of Schrödinger’s equation – using vectors (again, real vectors – not these weird state vectors) rather than complex algebra?

I think we can, but then I wonder why the inventors of the wavefunction – Heisenberg, Born, Dirac, and Schrödinger himself, of course – never thought of that. 🙂

Hmm… I need to do some research here. 🙂

Post scriptum: You will, of course, wonder how and why the matter-wave would be different from the electromagnetic wave if my suggestion that the dimension of the wavefunction component is the same is correct. The answer is: the difference lies in the phase difference and then, most probably, the different orientation of the angular momentum. Do we have any other possibilities? 🙂

P.S. 2: I also published this post on my new blog: https://readingeinstein.blog/. However, I thought the followers of this blog should get it first. 🙂