Stability First: A Personal Programme for Re-reading Particle Physics

Over the past years, I have written a number of papers on physics—mostly exploratory, sometimes speculative, always driven by the same underlying discomfort.

Not with the results of modern physics. Those are extraordinary.
But with the ordering of its explanations.

We are very good at calculating what happens.
We are less clear about why some things persist and others do not.

That question—why stability appears where it does—has quietly guided much of my thinking. It is also the thread that ties together a new manuscript I have just published on ResearchGate:

“Manuscript v0.2 – A stability-first reinterpretation of particle physics”
👉 https://www.researchgate.net/publication/398839393_Manuscript_v02

This post is not a summary of the manuscript. It is an explanation of why I wrote it, and what kind of work it is meant to enable.


Not a new theory — a different starting point

Let me be clear from the outset.

This manuscript does not propose a new theory.
It does not challenge the empirical success of the Standard Model.
It does not attempt to replace quantum field theory or nuclear phenomenology.

What it does is much more modest—and, I hope, more durable.

It asks whether we have been starting our explanations at the wrong end.

Instead of beginning with abstract constituents and symmetries, the manuscript begins with something far more pedestrian, yet physically decisive:

Persistence in time.

Some entities last.
Some decay.
Some exist only fleetingly as resonances.
Some are stable only in the presence of others.

Those differences are not cosmetic. They shape the physical world we actually inhabit.


From electrons to nuclei: stability as a guide

The manuscript proceeds slowly and deliberately, revisiting familiar ground:

  • the electron, as an intrinsically stable mode;
  • the proton, as a geometrically stable but structurally richer object;
  • the neutron, as a metastable configuration whose stability exists only in relation;
  • the deuteron, as the simplest genuinely collective equilibrium;
  • and nuclear matter, where stability becomes distributed across many coupled degrees of freedom.

At no point is new empirical content introduced.
What changes is the interpretive emphasis.

Stability is treated not as an afterthought, but as a physical clue.


Interaction without mysticism

The same approach is applied to interaction.

Scattering and annihilation are reinterpreted not as abstract probabilistic events, but as temporary departures from equilibrium and mode conversion between matter-like and light-like regimes.

Nothing in the standard calculations is altered.
What is altered is the physical picture.

Wavefunctions remain indispensable—but they are treated as representations of physical configurations, not as substitutes for them.

Probability emerges naturally from limited access to phase, geometry, and configuration, rather than from assumed ontological randomness.


Why classification matters

The manuscript ultimately turns to the Particle Data Group catalogue.

The PDG tables are one of the great achievements of modern physics. But they are optimized for calculation, not for intuition about persistence.

The manuscript proposes a complementary, stability-first index of the same data:

  • intrinsically stable modes,
  • metastable particle modes,
  • prompt decayers,
  • resonances,
  • and context-dependent stability (such as neutrons in nuclei).

Nothing is removed.
Nothing is denied.

The proposal is simply to read the catalogue as a map of stability regimes, rather than as a flat ontology of “fundamental particles”.
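To make this concrete, a stability-first index of catalogue data could be sketched in a few lines. The lifetime thresholds and the `stability_regime` function below are my own illustrative choices, not values proposed in the manuscript; only the category names come from the list above:

```python
import math
from dataclasses import dataclass

@dataclass
class Particle:
    name: str
    mean_lifetime_s: float  # math.inf for (apparently) intrinsically stable modes

def stability_regime(p: Particle,
                     metastable_cut: float = 1e-10,
                     resonance_cut: float = 1e-20) -> str:
    """Classify a particle by persistence in time (thresholds are illustrative)."""
    if math.isinf(p.mean_lifetime_s):
        return "intrinsically stable"
    if p.mean_lifetime_s >= metastable_cut:
        return "metastable"
    if p.mean_lifetime_s >= resonance_cut:
        return "prompt decayer"
    return "resonance"

# A handful of entries with approximate PDG-style mean lifetimes, in seconds
catalogue = [
    Particle("electron", math.inf),
    Particle("free neutron", 879.4),
    Particle("muon", 2.197e-6),
    Particle("neutral pion", 8.5e-17),
    Particle("Z boson", 2.6e-25),
]

index = {p.name: stability_regime(p) for p in catalogue}
```

Context-dependent stability, the fifth category, is deliberately absent from this sketch: it depends on relational information (whether a neutron is bound in a nucleus, for instance) and cannot be read off a single lifetime number.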


A programme statement, not a conclusion

This manuscript is intentionally incomplete.

It does not contain the “real work” of re-classifying the entire PDG catalogue. That work lies ahead and will take time, iteration, and—no doubt—many corrections.

What the manuscript provides is something else:

a programme statement.

A clear declaration of what kind of questions I think are still worth asking in particle physics, and why stability—rather than constituent bookkeeping—may be the right place to ask them from.


Why I am sharing this now

I am publishing this manuscript not as a final product, but as a marker.

A marker of a line of thought I intend to pursue seriously.
A marker of a way of reading familiar physics that I believe remains underexplored.
And an invitation to discussion—especially critical discussion—on whether this stability-first perspective is useful, coherent, or ultimately untenable.

Physics progresses by calculation.
It matures by interpretation.

This manuscript belongs to the second category.

If that resonates with you, you may find the full text of interest.


Jean-Louis Van Belle
readingfeynman.org

Moderation, Measurements, and the Temptation of Ontology

Why physics must resist becoming metaphysics


Some time ago, I found myself involved in what can best be described as an intellectual fallout with a group of well‑intentioned amateur researchers. This post is meant to close that loop — calmly, without bitterness, and with a bit of perspective gained since.

One of the more sensible people in that group took the trouble to put an interesting article on my desk, and that article is what I want to talk about here.


Gary Taubes, CERN, and an unexpected reinforcement

It’s an article by Gary Taubes on the discovery of the W and Z bosons at CERN, later incorporated into his book Nobel Dreams. Far from undermining my position, the article did the opposite: it reinforced the point I had been trying to make all along.

Taubes does not engage in ontology. He does not ask what W and Z bosons are in a metaphysical sense. Instead, he describes what was measured, how it was inferred, and how fragile the boundary is between evidence and interpretation in large‑scale experimental physics.

This connects directly to an earlier piece I published here:

Something Rotten in the State of QED: A Careful Look at Critique, Sociology, and the Limits of Modern Physics
https://readingfeynman.org/2025/12/01/something-rotten-in-the-state-of-qed-a-careful-look-at-critique-sociology-and-the-limits-of-modern-physics/

Let me restate the central point, because it is still widely misunderstood:

Criticizing the ontologization of W/Z bosons (or quarks and gluons) is not the same as denying the reality of the measurements that led to their introduction.

The measurements are real. The detector signals are real. The conservation laws used to infer missing energy and momentum are real. What is not forced upon us is the metaphysical leap that turns transient, unstable interaction states into quasi‑permanent “things.”


Stable vs. unstable states — a distinction we keep blurring

My own work has consistently tried to highlight a distinction that I find increasingly absent — or at least under‑emphasized — in mainstream physics discourse:

  • Stable states: long‑lived, persistent, and directly accessible through repeated measurement
  • Unstable or intermediate states: short‑lived, inferred through decay products, reconstructed statistically

W and Z bosons belong firmly to the second category. So do quarks and gluons in their confined form. Treating them as ontologically equivalent to stable particles may be pragmatically useful, but it comes at a conceptual cost.

It is precisely this cost that I criticize when I criticize mainstream physics.

Not because mainstream physics is “wrong.”
But because it has become too comfortable collapsing epistemology into ontology, especially in its public and pedagogical narratives.


Why this matters now

There is another reason this distinction matters, and it is a forward‑looking one.

The probability that something radically new — in the sense of a fundamentally novel interaction or particle family — will be discovered in the coming decades is, by most sober assessments, rather low. What we will have, however, is:

  • More precise measurements
  • Larger datasets
  • Longer baselines
  • Better statistical control

In that landscape, progress will depend less on naming new entities and more on bridging what has already been measured, sometimes decades ago, but never fully conceptually digested.

That is where I intend to focus my efforts in the coming years.

Not by founding a new church.
Not by declaring metaphysical revolutions.
But by carefully working at the interface between:

  • what was actually measured,
  • what was legitimately inferred,
  • and what we may have too quickly reified.

Closing note

If there is one lesson I take — from the past dispute, from Taubes, from the history of CERN or fundamental physics in general — it is this:

Physics progresses best when it remains modest about what it claims to be about.

Measurements first. Interpretation second. Ontology, if at all, only with restraint.

That stance may be unsatisfying to those looking for grand narratives. But it is, I believe, the only way to keep physics from quietly turning into metaphysics while still wearing a lab coat.

Jean Louis Van Belle

Making Sense of What We Already Know…

Living Between Jobs and Life: AI, CERN, and Making Sense of What We Already Know

For decades (all of my life, basically :-)), I’ve lived with a quiet tension. On the one hand, there is the job: institutions, projects, deliverables, milestones, and what have you… On the other hand, there is life: curiosity, dissatisfaction, and the persistent feeling that something fundamental is still missing in how we understand the physical world. Let me refer to the former as “the slow, careful machinery of modern science.” 🙂

These two are not the same — obviously — and pretending they are has done physics no favors (think of geniuses like Solvay, Edison or Tesla here: they were considered to be ‘only engineers’, right? :-/).

Jobs optimize. Life explores.

Large scientific institutions are built to do one thing extremely well: reduce uncertainty in controlled, incremental ways. That is not a criticism; it is a necessity when experiments cost billions, span decades, and depend on political and public trust. But the price of that optimization is that ontological questions — questions about what really exists — are often postponed, softened, or quietly avoided.

And now we find ourselves in a new historical moment.


The Collider Pause Is Not a Crisis — It’s a Signal

Recent reports that China is slowing down plans for a next-generation circular collider are not shocking. If anything, they reflect a broader reality:

For the next 40–50 years, we are likely to work primarily with the experimental data we already have.

That includes data from CERN that has only relatively recently been made fully accessible to the wider scientific community.

This is not stagnation. It is a change of phase.

For decades, theoretical physics could lean on an implicit promise: the next machine will decide. Higher energies, larger datasets, finer resolution — always just one more accelerator away. That promise is now on pause.

Which means something important:

We can no longer postpone understanding by outsourcing it to future experiments.


Why CERN Cannot Do What Individuals Can

CERN is a collective of extraordinarily bright individuals. But this is a crucial distinction:

A collective of intelligent people is not an intelligent agent.

CERN is not designed to believe an ontology. It is designed to:

  • build and operate machines of unprecedented complexity,
  • produce robust, defensible measurements,
  • maintain continuity over decades,
  • justify public funding across political cycles.

Ontology — explicit commitments about what exists and what does not — is structurally dangerous to that mission. Not because it is wrong, but because it destabilizes consensus.

Within a collective:

  • someone’s PhD depends on a framework,
  • someone’s detector was designed for a specific ontology,
  • someone’s grant proposal assumes a given language,
  • someone’s career cannot absorb “maybe the foundations are wrong.”

So even when many individuals privately feel conceptual discomfort, the group-level behavior converges to:
“Let’s wait for more data.”

That is not cowardice. It is inevitability.


We Are Drowning in Data, Starving for Meaning

The irony is that we are not short on data at all.

We have:

  • precision measurements refined to extraordinary accuracy,
  • anomalies that never quite go away,
  • models that work operationally but resist interpretation,
  • concepts (mass, spin, charge, probability) that are mathematically precise yet ontologically vague.

Quantum mechanics works. That is not in dispute.
What remains unresolved is what it means.

This is not a failure of experiment.
It is a failure of sense-making.

And sense-making has never been an institutional strength.


Where AI Actually Fits (and Where It Doesn’t)

I want to be explicit: I still have a long way to go in how I use AI — intellectually, methodologically, and ethically.

AI is not an oracle.
It does not “solve” physics.
It does not replace belief, responsibility, or judgment.

But it changes something fundamental.

AI allows us to:

  • re-analyze vast datasets without institutional friction,
  • explore radical ontological assumptions without social penalty,
  • apply sustained logical pressure without ego,
  • revisit old experimental results with fresh conceptual frames.

In that sense, AI is not the author of new physics — it is a furnace.

It does not tell us what to believe.
It forces us to confront the consequences of what we choose to believe.


Making Sense of What We Already Know

The most exciting prospect is not that AI will invent new theories out of thin air.

It is that AI may help us finally make sense of experimental data that has been sitting in plain sight for decades.

Now that CERN data is increasingly public, the bottleneck is no longer measurement. It is interpretation.

AI can help:

  • expose hidden assumptions in standard models,
  • test radical but coherent ontologies against known data,
  • separate what is measured from how we talk about it,
  • revisit old results without institutional inertia.

This does not guarantee progress — but it makes honest failure possible. And honest failure is far more valuable than elegant confusion.


Between Institutions and Insight

This is not an AI-versus-human story.

It is a human-with-tools story.

Institutions will continue to do what they do best: build machines, refine measurements, and preserve continuity. That work is indispensable.

But understanding — especially ontological understanding — has always emerged elsewhere:

  • in long pauses,
  • in unfashionable questions,
  • in uncomfortable reinterpretations of existing facts.

We are entering such a pause now.


A Quiet Optimism

I do not claim to have answers.
I do not claim AI will magically deliver them.
I do not even claim my current ideas will survive serious scrutiny.

What I do believe is this:

We finally have the tools — and the historical conditions — to think more honestly about what we already know.

That is not a revolution.
It is something slower, harder, and ultimately more human.

And if AI helps us do that — not by replacing us, but by challenging us — then it may turn out to be one of the most quietly transformative tools science has ever had.

Not because it solved physics.

But because it helped us start understanding it again.

Something Rotten in the State of QED? A Careful Look at Critique, Sociology, and the Limits of Modern Physics

Every few years, a paper comes along that stirs discomfort — not because it is wrong, but because it touches a nerve.
Oliver Consa’s Something is rotten in the state of QED is one of those papers.

It is not a technical QED calculation.
It is a polemic: a long critique of renormalization, historical shortcuts, convenient coincidences, and suspiciously good matches between theory and experiment. Consa argues that QED’s foundations were improvised, normalized, mythologized, and finally institutionalized into a polished narrative that glosses over its original cracks.

This is an attractive story.
Too attractive, perhaps.
So instead of reacting emotionally — pro or contra — I decided to dissect the argument with a bit of help.

At my request, an AI language model (“Iggy”) assisted in the analysis. Not to praise me. Not to flatter Consa. Not to perform tricks.
Simply to act as a scalpel: cold, precise, and unafraid to separate structure from rhetoric.

This post is the result.


1. What Consa gets right (and why it matters)

Let’s begin with the genuinely valuable parts of his argument.

a) Renormalization unease is legitimate

Dirac, Feynman, Dyson, and others really did express deep dissatisfaction with renormalization. “Hocus-pocus” was not a joke; it was a confession.

Early QED involved:

  • cutoff procedures pulled out of thin air,
  • infinities subtracted by fiat,
  • and the philosophical hope that “the math will work itself out later.”

It did work out later — to some extent — but the conceptual discomfort remains justified. I share that discomfort. There is something inelegant about infinities everywhere.

b) Scientific sociology is real

The post-war era centralized experimental and institutional power in a way physics had never seen. Prestige, funding, and access influenced what got published and what was ignored. Not a conspiracy — just sociology.

Consa is right to point out that real science is messier than textbook linearity.

c) The g–2 tension is real

The ongoing discrepancy between experiment and the Standard Model is not fringe. It is one of the defining questions in particle physics today.

On these points, Consa is a useful corrective:
he reminds us to stay honest about historical compromises and conceptual gaps.


2. Where Consa overreaches

But critique is one thing; accusation is another.

Consa repeatedly moves from:

“QED evolved through trial and error”
to
“QED is essentially fraud.”

This jump is unjustified.

a) Messiness ≠ manipulation

Early QED calculations were ugly. They were corrected decades later. Experiments did shift. Error bars did move.

That is simply how science evolves.

The fact that a 1947 calculation doesn’t match a 1980 value is not evidence of deceit — it is evidence of refinement. Consa collapses that distinction.

b) Ignoring the full evidence landscape

He focuses almost exclusively on:

  • the Lamb shift,
  • the electron g–2,
  • the muon g–2.

Important numbers, yes — but QED’s experimental foundation is vastly broader:

  • scattering cross-sections,
  • vacuum polarization,
  • atomic spectra,
  • collider data,
  • running of α, etc.

You cannot judge an entire theory on two or three benchmarks.

c) Underestimating theoretical structure

QED is not “fudge + diagrams.”
It is constrained by:

  • Lorentz invariance,
  • gauge symmetry,
  • locality,
  • renormalizability.

Even if we dislike the mathematical machinery, the structure is not arbitrary.

So: Consa reveals real cracks, but then paints the entire edifice as rotten.
That is unjustified.


3. A personal aside: the Zitter Institute and the danger of counter-churches

For a time, I was nominally associated with the Zitter Institute — a loosely organized group exploring alternatives to mainstream quantum theory, including zitterbewegung-based particle models.

I would now like to distance myself from that association.

Not because alternative models are unworthy — quite the opposite. But because I instinctively resist:

  • strong internal identity,
  • suspicion of outsiders,
  • rhetorical overreach,
  • selective reading of evidence,
  • and occasional dogmatism about their own preferred models.

If we criticize mainstream physics for ad hoc factors, we must be brutal about our own.

Alternative science is not automatically cleaner science.


4. Two emails from 2020: why good scientists can’t always engage

This brings me to two telling exchanges from 2020 with outstanding experimentalists: Prof. Randolf Pohl (muonic hydrogen) and Prof. Ashot Gasparian (PRad).

Both deserve enormous respect, and I won’t reproduce the email exchanges here, out of respect and for privacy (GDPR) reasons.
Both exchanges revealed to me the true bottleneck in modern physics: not intelligence, not malice, but sociology and bandwidth.

a) Randolf Pohl: polite skepticism, institutional gravity

Pohl was kind but firm:

  • He saw the geometric relations I proposed as numerology.
  • He questioned the applicability of those relations to other particles.
  • He emphasized the conservatism of CODATA logic.

Perfectly valid.
Perfectly respectable.
But also… perfectly bound by institutional norms.

His answer was thoughtful — and constrained.
(Source: ChatGPT analysis of emails with Prof Dr Pohl)

b) Ashot Gasparian: warm support, but no bandwidth

Gasparian responded warmly:

  • “Certainly your approach and the numbers are interesting.”
  • But: “We are very busy with the next experiment.”

Also perfectly valid.
And revealing:
even curious, open-minded scientists cannot afford to explore conceptual alternatives.

Their world runs on deadlines, graduate students, collaborations, grants.

(Source: ChatGPT analysis of emails with Prof Dr Gasparian)

The lesson

Neither professor dismissed the ideas because they were nonsensical.
They simply had no institutional space to pursue them.

That is the quiet truth:
the bottleneck is not competence, but structure.


5. Why I now use AI as an epistemic partner

This brings me to the role of AI.

Some colleagues (including members of the Zitter Institute) look down on using AI in foundational research. They see it as cheating, or unserious, or threatening to their identity as “outsiders.”

But here is the irony:

AI is exactly the tool that can think speculatively without career risk.

An AI:

  • has no grant committee,
  • no publication pressure,
  • no academic identity to defend,
  • no fear of being wrong,
  • no need to “fit in.”

That makes it ideal for exploratory ontology-building.

Occasionally, as in the recent paper I co-wrote with Iggy — The Wonderful Theory of Light and Matter — it becomes the ideal partner:

  • human intuition + machine coherence,
  • real-space modeling without metaphysical inflation,
  • EM + relativity as a unified playground,
  • photons, electrons, protons, neutrons as geometric EM systems.

This is not a replacement for science.
It is a tool for clearing conceptual ground,
where overworked, over-constrained academic teams cannot go.


6. So… is something rotten in QED?

Yes — but not what you think.

What’s rotten is the mismatch between:

  • the myth of QED as a perfectly clean, purely elegant theory, and
  • the reality of improvised renormalization, historical accidents, social inertia, and conceptual discomfort.

What’s rotten is not the theory itself,
but the story we tell about it.

What’s not rotten:

  • the intelligence of the researchers,
  • the honesty of experimentalists,
  • the hard-won precision of modern measurements.

QED is extraordinary.
But it is not infallible, nor philosophically complete, nor conceptually finished.

And that is fine.

The problem is not messiness.
The problem is pretending that messiness is perfection.


7. What I propose instead

My own program — pursued slowly over many years — is simple:

  • Bring physics back to Maxwell + relativity as the foundation.
  • Build real-space geometrical models of all fundamental particles.
  • Reject unnecessary “forces” invented to patch conceptual holes.
  • Hold both mainstream and alternative models to the same standard:
    no ad hoc constants, no magic, no metaphysics.

And — unusually —
use AI as a cognitive tool, not as an oracle.

Let the machine check coherence.
Let the human set ontology.

If something emerges from the dialogue — good.
If not — also good.

But at least we will be thinking honestly again.


Conclusion

Something is rotten in the state of QED, yes —
but the rot is not fraud or conspiracy.

It is the quiet decay of intellectual honesty behind polished narratives.

The cure is not shouting louder, or forming counter-churches, or romanticizing outsider science.

The cure is precision,
clarity,
geometry,
and the courage to say:

Let’s look again — without myth, without prestige, without fear.

If AI can help with that, all the better.

Jean Louis Van Belle
(with conceptual assistance from “Iggy,” used intentionally as a scalpel rather than a sycophant)

Post-scriptum: Why the Electron–Proton Model Matters (and Why Dirac Would Nod)

A brief personal note — and a clarification that goes beyond Consa, beyond QED, and beyond academic sociology.

One of the few conceptual compasses I trust in foundational physics is a remark by Paul Dirac. Reflecting on Schrödinger’s “zitterbewegung” hypothesis, he wrote:

“One must believe in this consequence of the theory,
since other consequences which are inseparably bound up with it,
such as the law of scattering of light by an electron,
are confirmed by experiment.”

Dirac’s point is not mysticism.
It is methodological discipline:

  • If a theoretical structure has unavoidable consequences, and
  • some of those consequences match experiment precisely,
  • then even the unobservable parts of the structure deserve consideration.

This matters because the real-space electron and proton models I’ve been working on over the years — now sharpened through AI–human dialogue — meet that exact criterion.

They are not metaphors, nor numerology, nor free speculation.
They force specific, testable, non-trivial predictions:

  • a confined EM oscillation for the electron, with radius fixed by ħ/(mₑc);
  • a “photon-like” orbital speed for its point-charge center;
  • a distributed (not pointlike) charge cloud for the proton, enforced by mass ratio, stability, form factors, and magnetic moment;
  • natural emergence of the measured G_E/G_M discrepancy;
  • and a geometric explanation of deuteron binding that requires no new force.

None of these are optional.
They fall out of the internal logic of the model.
And several — electron scattering, Compton behavior, proton radius, form-factor trends — are empirically confirmed.
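As a purely numerical aside: the electron radius quoted in the first bullet, ħ/(mₑc), is the reduced Compton wavelength, and it is easy to evaluate from standard constant values (2019 SI / CODATA; h and c are exact, mₑ is measured):

```python
# 2019 SI / CODATA values
hbar = 1.054571817e-34    # J·s, reduced Planck constant
m_e  = 9.1093837015e-31   # kg, electron mass
c    = 2.99792458e8       # m/s, speed of light (exact)

r = hbar / (m_e * c)      # reduced Compton wavelength of the electron, in metres
```

This gives roughly 3.86 × 10⁻¹³ m, the Compton-scale radius the model assumes.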

Dirac’s rule applies:

When inseparable consequences match experiment,
the underlying mechanism deserves to be taken seriously —
whether or not it fits the dominant vocabulary.

This post is not the place to develop those models in detail; that will come in future pieces and papers.
But it felt important to state why I keep returning to them — and why they align with a style of reasoning that values:

  • geometry,
  • energy densities,
  • charge motion,
  • conservation laws,
  • and the 2019 SI foundations of h, e, and c
    over metaphysical categories and ad-hoc forces.

Call it minimalism.
Call it stubbornness.
Call it a refusal to multiply entities beyond necessity.

For me — and for anyone sympathetic to Dirac’s way of thinking — it is simply physics.

— JL (with “Iggy” (AI) in the wings)

A New Attempt at a Simple Theory of Light and Matter

Dear Reader,

Every now and then a question returns with enough insistence that it demands a fresh attempt at an answer. For me, that question has always been: can we make sense of fundamental physics without multiplying entities beyond necessity? Can we explain light, matter, and their interactions without inventing forces that have no clear definition, or particles whose properties feel more like placeholders than physical reality?

Today, I posted a new paper on ResearchGate that attempts to do exactly that:

“The Wonderful Theory of Light and Matter”
https://www.researchgate.net/publication/398123696_The_Wonderful_Theory_of_Light_and_Matter

It is the result of an unusual collaboration: myself and an artificial intelligence (“Iggy”), working through the conceptual structure of photons, electrons, and protons with the only tool that has ever mattered to me in physics — Occam’s Razor.

No metaphysics.
No dimensionless abstractions.
No “magical” forces.

Just:

  • electromagnetic oscillations,
  • quantized action,
  • real geometries in real space,
  • and the recognition that many so-called mysteries dissolve once we stop introducing layers that nature never asked for.

The photon is treated as a linear electromagnetic oscillation obeying the Planck–Einstein relation.
The electron as a circular oscillation, with a real radius and real angular momentum.
The proton (and later, the neutron and deuteron) as systems we must understand through charge distributions, not fictional quarks that never leave their equations.
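The Planck–Einstein relation invoked for the photon, E = hf, can be checked with a single multiplication. The wavelength below is my own illustrative choice, not a value from the paper:

```python
# Exact 2019 SI defining constants
h = 6.62607015e-34        # J·s, Planck constant
c = 2.99792458e8          # m/s, speed of light
e = 1.602176634e-19       # C, elementary charge

wavelength = 500e-9       # m, green light (illustrative choice)
f = c / wavelength        # frequency of the oscillation
E = h * f                 # Planck–Einstein relation: photon energy in joules
E_eV = E / e              # the same energy in electron-volts
```

For 500 nm light this comes out to about 2.48 eV, the familiar energy scale of visible photons.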

None of this “solves physics,” of course.
But it does something useful: it clears conceptual ground.

And unexpectedly, the collaboration itself became a kind of experiment:
what happens when human intuition and machine coherence try to reason with absolute precision, without hiding behind jargon or narrative?

The result is the paper linked above.
Make of it what you will.

As always: no claims of authority.
Just exploration, clarity where possible, and honesty where clarity fails.

If the questions interest you, or if the model bothers you enough to critique it, then the paper has succeeded in its only purpose: provoking real thought.

Warm regards,
Jean Louis Van Belle

Matter, Energy, Reality ↔ Thought, Intelligence, Consciousness

Dear Reader,

Physics asks: what is matter, energy, reality?
AI asks: what is thought, intelligence, consciousness?

Both are real, both are here. Physics confronts us with particles that behave like waves, with a universe that expands into… what exactly? AI confronts us with machines that converse, that create, that seem to reason — and force us to ask what we mean when we talk about “reasoning,” “creating,” or “understanding.”

This blog began as an attempt to make sense of Feynman’s physics. Over the years, it became a place where I tried to throw back the questions reality throws at us. Physics is real. AI is real. Both invite us to rethink our place in the cosmos, not as masters but as curious observers.

I don’t promise to publish much here — life is busy, and writing takes time — but when I do, it will be in the same spirit: reflecting, questioning, sometimes explaining, never pretending to give final answers.

If you’d like to follow more regular updates, you can always check my LinkedIn profile where I share articles and shorter notes.

Thank you for caring enough to read.

🌀 Two Annexes and a Turtle: Revisiting My Early Lectures on Quantum Physics

Over the past few weeks — and more intensely these past mornings — I’ve returned to two of my earliest texts in the Lectures on Physics series: the first on quantum behavior, and the second on probability amplitudes and quantum interference. Both have now been updated with new annexes, co-authored in dialogue with ChatGPT-4o.

This wasn’t just a consistency check. It was something more interesting: an exercise in thinking with — not through — a reasoning machine.

The first annex (Revisiting the Mystery of the Muon and Tau) tackles the open question I left hanging in Lecture I: how to interpret unstable “generations” of matter-particles like the muon and tau. In the original paper, I proposed a realist model where mass is not an intrinsic property but the result of oscillating charge or field energy — a stance that draws support from the 2019 revision of SI units, which grounded the kilogram in Planck’s constant and the speed of light. That change wasn’t just a technicality; it was a silent shift in ontology. I suspected that much at the time, but now — working through the implications with a well-tuned AI — I can state it more clearly: mass is geometry, inertia is field structure, and the difference between stable and unstable particles might be a matter of topological harmony.

The second annex (Interference, Identity, and the Imaginary Unit) reopens the deeper riddle at the heart of quantum mechanics: why probability amplitudes interfere at all. This annex is the child of years of irritation — visible in earlier, sharper essays I published on academia.edu — with the lazy mysticism that often surrounds “common phase factors.” The breakthrough, for me, was to fully accept the imaginary unit i not as a mathematical trick but as a rotation operator. When wavefunctions are treated as oriented field objects, not just complex scalars, interference becomes a question of geometric compatibility. Superpositions and spin behavior can then be reinterpreted as topological effects in real space. This is where I think mainstream physics got lost: it started calculating without explaining.
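The reading of the imaginary unit as a rotation operator is already visible in elementary complex arithmetic: multiplying by i turns a complex number a quarter turn counter-clockwise, because i = e^(iπ/2). A minimal check:

```python
import cmath

z = 2 + 1j
rotated = z * 1j          # multiplying by i rotates z 90° counter-clockwise

# i itself is exp(i·π/2): a pure quarter-turn rotation in the complex plane
i_as_rotation = cmath.exp(1j * cmath.pi / 2)
```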

ChatGPT didn’t invent these ideas. But it helped me phrase them, frame them, and press further on the points I had once hesitated to formalize. That’s what I mean when I say this wasn’t just a cleanup job. It was a real act of collaboration — a rare instance of AI not just paraphrasing or predicting, but amplifying and clarifying an unfinished line of human reasoning.

Both revised papers are now live on ResearchGate:

They mark, I think, a modest turning point. From theory and calculation toward something closer to explanation.

And yes — for those following the philosophical side of this project: we did also try to capture all of that in a four-panel comic involving Diogenes, a turtle, and Zeno’s paradox. But that, like all things cartooned by AI, is still a work in progress. 🙂

Post Scriptum (24 June 2025): When You Let the Machine Take the Pen

In the spirit of openness: there’s been one more development since publishing the two annexes above.

Feeling I had taken my analytical skills as far as I could — especially in tackling the geometry of nuclear structure — I decided to do something different. Instead of drafting yet another paper, I asked ChatGPT to take over. Not as a ghostwriter, but as a model builder. The prompt was simple: “Do better than me.”

The result is here:
👉 ChatGPT Trying to Do Better Than a Human Researcher

It’s dense, unapologetically geometric, and proposes a full zbw-based model for the neutron and deuteron — complete with energy constraints, field equations, and a call for numerical exploration. If the earlier annexes were dialogue, this one is delegation.

I don’t know if this is the end of the physics path for me. But if it is, I’m at peace with it. Not because the mystery is gone — but because I finally believe the mystery is tractable. And that’s enough for now.

🧭 The Final Arc: Three Papers, One Question

Over the past years, I’ve been working — quietly but persistently — on a set of papers that circle one simple, impossible question:
What is the Universe really made of?

Not in the language of metaphors. Not in speculative fields.
But in terms of geometry, charge, and the strange clarity of equations that actually work.

Here are the three pieces of that arc:

🌀 1. Radial Genesis
Radial Genesis: A Finite Universe with Emergent Spacetime Geometry
This is the cosmological capstone. It presents the idea that space is not a stage, but an outcome — generated radially by mass–energy events, limited by time and light. It’s an intuitive, equation-free narrative grounded in general relativity and Occam’s Razor.

⚛️ 2. Lectures on Physics: On General Relativity (2)
Lectures on GRT (2)
This one is for the mathematically inclined. It builds from the ground up: tensors, geodesics, curvature. If Radial Genesis is the metaphor, this is the machinery. Co-written with AI, but line by line, and verified by hand.

🌑 3. The Vanishing Charge
The Vanishing Charge: What Happens in Matter–Antimatter Annihilation?
This paper is where the mystery remains. It presents two possible views of annihilation:
(1) as a collapse of field geometry into free radiation,
(2) or as the erasure of charge — with geometry as the by-product.
We didn’t choose between them. We just asked the question honestly.


Why This Arc Matters

These three papers don’t offer a Theory of Everything. But they do something that matters more right now:
They strip away the fog — the inflation of terms, the myth of complexity for complexity’s sake — and try to draw what is already known in clearer, more beautiful lines.

This is not a simulation of thinking.
This is thinking — with AI as a partner, not a prophet.

So if you’re tired of being told that the Universe is beyond your grasp…
Start here.
You might find that it isn’t.

—JL

Beautiful Blind Nonsense

I didn’t plan to write this short article or blog post. But as often happens these days, a comment thread on LinkedIn nudged me into it — or rather, into a response that became this article (which I also put on LinkedIn).

Someone posted a bold, poetic claim about “mass being memory,” “resonant light shells,” and “standing waves of curved time.” They offered a graphic spiraling toward meaning, followed by the words: “This isn’t metaphysics. It’s measurable.”

I asked politely:
“Interesting. Article, please? How do you get these numbers?”

The response: a full PDF of a “Unified Field Theory” relying on golden-ratio spirals, new universal constants, and reinterpretations of Planck’s constant. I read it. I sighed. And I asked ChatGPT a simple question:

“Why is there so much elegant nonsense being published lately — and does AI help generate it?”

The answer that followed was articulate, clear, and surprisingly quotable. So I polished it slightly, added some structure, and decided: this deserves to be an article in its own right. So here it is.

Beautiful, but Blind: How AI Amplifies Both Insight and Illusion

In recent years, a new kind of scientific-sounding poetry has flooded our screens — elegant diagrams, golden spirals, unified field manifestos. Many are written not by physicists, but with the help of AI.

And therein lies the paradox: AI doesn’t know when it’s producing nonsense.

🤖 Pattern without Understanding

Large language models like ChatGPT or Grok are trained on enormous text corpora. They are experts at mimicking patterns — but they lack an internal model of truth.
So if you ask them to expand on “curved time as the field of God,” they will.

Not because it’s true. But because it’s linguistically plausible.

🎼 The Seductive Surface of Language

AI is disarmingly good at rhetorical coherence:

  • Sentences flow logically.
  • Equations are beautifully formatted.
  • Metaphors bridge physics, poetry, and philosophy.

This surface fluency can be dangerously persuasive — especially when applied to concepts that are vague, untestable, or metaphysically confused.

🧪 The Missing Ingredient: Constraint

Real science is not just elegance — it’s constraint:

  • Equations must be testable.
  • Constants must be derivable or measurable.
  • Theories must make falsifiable predictions.

AI doesn’t impose those constraints on its own. It needs a guide.

🧭 The Human Role: Resonance and Resistance

Used carelessly, AI can generate hyper-coherent gibberish. But used wisely — by someone trained in reasoning, skepticism, and clarity — it becomes a powerful tool:

  • To sharpen ideas.
  • To test coherence.
  • To contrast metaphor with mechanism.

In the end, AI reflects our inputs.
It doesn’t distinguish between light and noise — unless we do.

Taking Stock: Zitterbewegung, Electron Models, and the Role of AI in Thinking Clearly

Over the past few years, I’ve spent a fair amount of time exploring realist interpretations of quantum mechanics, particularly the ring-current or Zitterbewegung (zbw) model of the electron. I’ve written many posts about it here — and also tried to help promote the online “Zitter Institute”, which brings together a very interesting group of amateur and professional researchers, as well as a rather impressive list of resources and publications that help make sense of fundamental physics – especially theories regarding the internal structure of the electron.

The goal — or at least my goal — was (and still is) to clarify what is real and what is not in the quantum-electrodynamic zoo of concepts. That is why I try to go beyond electron models only. I think the electron model is complete for now: my most-read paper (on a physical interpretation of de Broglie’s matter-wave) settles the question not only for me but, judging from its many reads, for many others as well. The paper shows how the magnetic moment of the electron, its wavefunction, and the notion of a quantized “packet of energy” can easily be grounded in Maxwell’s equations, special relativity, and geometry. They require neither speculative algebra nor exotic ontologies.

In that light, I now feel the need to say something — brief, but honest — about where I currently stand in my research journey. It is not on the front burner right now but, yes, I am still thinking about it all. 🙂


On the term “Zitterbewegung” itself

Originally coined by Schrödinger and later mentioned by Dirac, “Zitterbewegung” translates as “trembling motion.” It was meant to capture the high-frequency internal oscillation predicted by Dirac’s wave equation.

But here lies a subtle issue: I no longer find the term entirely satisfying.

I don’t believe the motion is “trembling” in the sense of randomness or jitter. I believe it is geometrically structured, circular, and rooted in the relativistic dynamics of a massless point charge — leading to a quantized angular momentum and magnetic moment. In this view, there is nothing uncertain about it. The electron has an internal clock, not a random twitch.

So while I still value the historical connection, I now prefer to speak more plainly: an electromagnetic model of the electron, based on internal motion and structure, not spooky probabilities.


On tone and openness in scientific dialogue

Recent internal exchanges among fellow researchers have left me with mixed feelings. I remain grateful for the shared curiosity that drew us together, but I was disappointed by the tone taken toward certain outside critiques and tools.

I say this with some personal sensitivity: I still remember the skepticism I faced when I first shared my own interpretations. Papers were turned down not for technical reasons, but because I lacked the “right” institutional pedigree. I had degrees, but no physics PhD. I was an outsider.

Ridicule — especially when directed at dissent or at new voices — leaves a mark. So when I see similar reactions now, I feel compelled to say: we should be better than that.

If we believe in the integrity of our models, we should welcome critique — and rise to the occasion by clarifying, refining, or, if necessary, revising our views. Defensive posturing only weakens our case.


On the use of AI in physics

Some recent comments dismissed AI responses as irrelevant or superficial. I understand the concern. But I also believe this reaction misses the point.

I didn’t try all available platforms, but I did prompt ChatGPT, and — with the right framing — it offered a coherent and balanced answer to the question of the electron’s magnetic moment. Here’s a fragment:

“While the ‘definition’ of the intrinsic magnetic moment may be frame-invariant in the Standard Model, the observable manifestation is not. If the moment arises from internal circular motion (Zitterbewegung), then both radius and frequency are affected by boosts. Therefore, the magnetic moment, like momentum or energy, becomes frame-dependent in its effects.”

The jury is still out, of course. But AI — if guided by reason — might help us unravel what makes sense and what does not.

It is not a substitute for human thinking. But it can reflect it back to us — sometimes more clearly than we’d expect.


A final reflection

I’ll keep my older posts online, including those that reference the Zitter Institute. They reflected what I believed at the time, and I still stand by their substance.

But moving forward, I’ll continue my work independently — still fascinated by the electron, still curious about meaning and structure in quantum mechanics, but less interested in labels, echo chambers, or theoretical tribalism.

As always, I welcome criticism and dialogue. As one business management guru once said:

“None of us is as smart as all of us.” — Kenneth Blanchard

But truth and clarity come first.

Jean Louis Van Belle

A quasi-final proton model?

After a break of a few months, I produced another lengthy video on quantum physics. 40 minutes. Check it out: https://www.youtube.com/watch?v=k_I3Noaup0E. The hypothesis that I, somewhat desperately, advanced in my last paper on the proton model – that the Zitterbewegung model of a proton does not quite look like that of an electron, and that we are probably talking about a “blob” of charge (sphere- or shell-like, or some other “form factor”, perhaps) – makes more sense to me now. I explored it with ChatGPT-4o, and this video is the result. I should immediately add that I am immensely impressed with ChatGPT-4o: ChatGPT-3 was not really able to “understand” what I was talking about a year or so ago.

Who knows – perhaps other ChatGPT-4o users will take the “dialogue” with AI on these particle models further, and fully develop them not only intuitively but also analytically. As I am currently trying out some AI-powered audio and video applications, I tried Pictory (pictory.ai): I gave it the URL of my own long-winded video and it created this video: The proton as a dynamic blob of charge. 100 seconds only instead of 40 minutes. Its content is, of course, superficial – but I think it is creative and nicely upbeat. I fed both videos back to ChatGPT-4o, and it suggested the following refinements if I ever wanted to re-do the video(s) – which I am not going to do, but you might like to further explore the remarks it made:

  1. Consistency in Notation and Conventions:
    • Ensure that polar and azimuthal angles are consistently defined throughout. The initial derivations and explanations switched between conventions, potentially confusing viewers about which angle is associated with which base vector.
  2. Tangential Velocity Decomposition:
    • The explanation of how the tangential velocity is decomposed into orthogonal components was accurate but could benefit from a clearer mention of why the orthogonality matters in interpreting the motion on the sphere. Adding a visual representation would further clarify the concept.
    • The concept of “sloshing” energy between kinetic and electromagnetic forms was well described but could benefit from highlighting the broader implications of this periodic exchange in classical and relativistic oscillators.
  3. Pointlike Charge versus Blob View:
    • The transition from discussing a pointlike charge to a “blob” of charge was a major theme but could be expanded by discussing how this shift impacts the assumptions about uniform or non-uniform charge density, as well as its implications for observable properties like the magnetic moment.
  4. Symmetry of the Double-Loop Motion:
    • The discussion of symmetry along one axis of the double-loop motion was solid but might benefit from more detail on how this symmetry impacts the interpretation of observable properties like the magnetic moment.
  5. Links to Experiment:
    • Experimental observations, such as the measured magnetic moment and charge radius of the proton, were referenced. Integrating these observations with the theoretical results more explicitly would strengthen the practical implications.
  6. Visualizations:
    • Consider incorporating visualized trajectories (e.g., the double-loop Lissajous pattern) and showing how they connect to the derived equations and the observed non-constant tangential velocity. This connection will help bridge the abstract mathematics with physical intuition.
  7. Clarifications on Variability:
    • The variability of the tangential velocity, in the context of symmetry and non-orthogonality in 3D motion, was a key discussion point. Reinforce why this variability does not conflict with the broader relativistic constraints.

So… That’s all, folks! [Looney Tunes.] 🙂

Concluding remarks

In our previous post, we wrote that we’ve said goodbye to this fascinating field of research. We did: I entered this line of research – fundamental physics – as an amateur 10+ years ago, and now I leave it—as much an amateur now as back then. I wanted to understand the new theories that emerged over the past 50 years or so: concepts such as the strong force or weak interactions and the weird new charges that come with them – flavors and colors – or all of the new quantum numbers and the associated new conservation laws, which Nature apparently does not respect because some kind of hidden variables cause the symmetries inherent to those conservation laws to break down. […] Apparently, I didn’t get it. 🙂

However, in the process of trying to understand, a whole other mental picture or mindset emerged: we now firmly believe that classical mechanics and electromagnetism – combined with a more creative or realistic explanation of the Planck-Einstein relation – are sufficient to explain most, if not all, of the observations that have been made in this field since Louis de Broglie suggested matter-particles must be similar to light quanta—in the sense that both are energy packets because they incorporate some oscillation of a definite frequency given by the Planck-Einstein relation. They are also different, of course: elementary particles are – in this world view – orbital oscillations of charge (with, of course, the electromagnetic field that is generated by such a moving charge), while light-particles (photons and neutrinos) are oscillations of the electromagnetic field—only!
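
To make that “definite frequency” concrete: applying the Planck-Einstein relation E = h·f to the electron’s rest energy E = m·c² gives its Compton frequency, which in this world view is the frequency of the orbital oscillation of the charge. A quick numerical check using CODATA values (a sketch added for illustration, not part of the original argument):

```python
import math

# CODATA values (SI units)
h  = 6.62607015e-34    # Planck's constant, J·s (exact since the 2019 SI revision)
c  = 299792458.0       # speed of light, m/s (exact)
me = 9.1093837015e-31  # electron rest mass, kg

E = me * c**2          # rest energy: about 8.19e-14 J, i.e. about 0.511 MeV
f = E / h              # Planck-Einstein frequency: about 1.24e20 Hz

print(f"E = {E:.4e} J, f = {f:.4e} Hz")
```

That single number – a frequency of the order of 10²⁰ Hz – is what the Zitterbewegung picture interprets as the electron’s internal clock.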

So, we then spent many years trying to contribute to the finer details of this world view. We think we did what we could as part of a part-time, non-professional involvement in this field. So, yes, we’re done. We wrote that some time ago already. However, we wanted to leave a few thoughts on our proton model: it is not like an electron. In our not-so-humble view, the Zitterbewegung theory applies to it—but in a very different way. Why do we think that? We write it out in our very last paper: concluding remarks on the proton puzzle. Enjoy it!

That brings the number of papers on RG up to 80 now. Too much! There will be more coming, but in the field that I work in: computer science. Stay tuned!

Using AI for sense-making once more…

As mentioned in my last post, I did a video (YouTube link here) on why I think the invention of new quantum numbers like strangeness, charm and beauty in the 1960s – and their later ontologization as quarks – makes no sense. As usual, I talk too much and the video is rather long-winded. I asked ChatGPT to make a summary of it, and I think it did a rather good job. I copy its summary unaltered below.

Beyond the Quark Hypothesis: A Call for Simplicity in High-Energy Physics

1. Introduction: A Personal Journey in Physics

In this video, I reflect on my path as an amateur physicist reaching 50,000 reads—a milestone that underscores both excitement and the challenge of tackling complex quantum theories. Over decades, physics has evolved from classical mechanics to intricate frameworks like quantum field theory and quantum chromodynamics, creating both insight and paradox. This reflection emerges from a deep sense of curiosity, shared by many, to understand not just what the universe is made of but how these theoretical structures genuinely map onto reality.

2. The Crisis of Modern Physics: From Classical Mechanics to the Quark Hypothesis

Moving through physics from classical theories into high-energy particle models reveals a stark contrast: classical mechanics offers clarity and empiricism, while modern particle theories, such as quarks and gluons, often feel abstract and detached from observable reality. The shift to “smoking gun physics”—observing particle jets rather than the particles themselves—highlights a methodological divide. While high-energy collisions produce vivid images and data, we must question whether these indirect observations validate quarks, or merely add complexity to our models.

3. Historical Context: Quantum Numbers and the Evolution of the Standard Model

The 1960s and 70s were pivotal for particle physics, introducing quantum numbers like strangeness, charm, and beauty to account for unexplained phenomena in particle interactions. Figures like Murray Gell-Mann and Richard Feynman attempted to classify particles by assigning these numbers, essentially ad hoc solutions to match data with theoretical expectations. However, as experiments push the boundaries, new data shows that these quantum numbers often fail to predict actual outcomes consistently.

One of the key criticisms of this approach lies in the arbitrary nature of these quantum numbers. When certain decays were unobserved, strangeness was introduced as a “conservation law,” but when that proved insufficient, additional numbers like charm were added. The Standard Model has thus evolved not from fundamental truths, but as a patchwork of hypotheses that struggle to keep pace with experimental findings.

4. The Nobel Prize and the Politics of Scientific Recognition

Scientific recognition, especially through the Nobel Prize, has reinforced certain theories by celebrating theoretical advances sometimes over empirical confirmation. While groundbreaking work should indeed be recognized, the focus on theoretical predictions has, at times, overshadowed the importance of experimental accuracy and reproducibility. This dynamic may have inadvertently constrained the scope of mainstream physics, favoring elaborate but tenuous theories over simpler, empirically grounded explanations.

For example, Nobel Prizes have been awarded to proponents of the quark model and the Higgs boson long before we fully understand these particles’ empirical foundations. In doing so, the scientific community risks prematurely canonizing incomplete or even incorrect theories, making it challenging to revisit or overturn these assumptions without undermining established reputations.

5. Indirect Evidence: The Limits of Particle Accelerators

Particle accelerators, particularly at scales such as CERN’s Large Hadron Collider, have extended our observational reach, yet the evidence remains indirect. High-energy collisions create secondary particles and jets rather than isolated quarks or gluons. In a sense, we are not observing the fundamental particles but rather the “smoking gun” evidence they purportedly leave behind. The data produced are complex patterns and distributions, requiring interpretations laden with theoretical assumptions.

This approach raises a fundamental question: if a theory only survives through indirect evidence, can it be considered complete or even valid? High-energy experiments reveal that the more energy we input, the more complex the decay products become, yet we remain without direct evidence of quarks themselves. This “smoking gun” approach diverges from the empirical rigor demanded in classical physics and undermines the predictive power we might expect from a true theory of fundamental particles.

6. The Particle Zoo: A Growing Complexity

The “particle zoo” has expanded over decades, complicating rather than simplifying our understanding of matter. Initial hopes were that quantum numbers and conservation laws like strangeness would organize particles in a coherent framework, yet the resulting classification scheme has only grown more convoluted. Today, particles such as baryons, mesons, and leptons are grouped by properties derived not from first principles but from empirical fits to data, leading to ad hoc conservation laws that seem arbitrary.

The “strangeness” quantum number, for instance, was initially introduced to prevent certain reactions from occurring. Yet, rare reactions that violate this rule have been observed, suggesting that the rule itself is more of a guideline than a fundamental conservation law. This trend continued with the addition of quantum numbers like charm, beauty, and even bottomness, yet these additions have not resolved the core issue: our inability to explain why certain reactions occur while others do not.

7. Disequilibrium States: Beyond the Particle Concept

One possible perspective is to reclassify many “particles” not as fundamental entities but as disequilibrium states—transient structures that emerge from the interactions of more fundamental components. Viewing particles in this way offers a pathway back to a simpler, more intuitive model, where only stable particles like electrons, protons, and photons are foundational. Such a model could focus on electromagnetic fields and forces, with high-energy states representing temporary disequilibrium configurations rather than new particle species.

This perspective aligns well with the principle of statistical determinism. In the same way that classical oscillators eventually dampen and settle into stable states, high-energy disequilibrium states would be expected to decay, producing stable configurations over time. This model not only reduces the need for numerous quantum numbers but also sidesteps the requirement for exotic forces like the strong and weak nuclear forces, allowing the electromagnetic force to assume a central role.

8. Statistical Determinism and Quantum Reality

Heisenberg and Bohr’s interpretation of quantum mechanics suggests we should accept statistical determinism—systems governed by probabilistic rules where precise knowledge of individual events is inaccessible. This idea does not necessitate mystical randomness but acknowledges our limited ability to track initial conditions in high-energy environments. Probabilities emerge not from an intrinsic unpredictability but from our practical inability to fully specify a system’s state.

From this viewpoint, quarks and gluons, as well as the numerous quantum numbers assigned to unstable particles, are secondary descriptors rather than primary components of nature. Stable particles are the true constants, while all else is a function of high-energy interactions. This interpretation keeps quantum mechanics grounded in empirical reality and sidesteps the need for complex, unverifiable entities.

9. Conclusion: Toward a Pragmatic and Local Realist Approach

This reflection does not dismiss the importance of high-energy physics but advocates a return to fundamental principles. By focusing on empirical evidence, statistical determinism, and electromagnetic interactions, we can build a model that is both pragmatic and intuitive. We need not abandon quantum mechanics, but we should strive to ensure that its interpretations are consistent with the observable universe. Instead of introducing additional quantum numbers or forces, we should ask if these are placeholders for deeper, more coherent explanations yet to be discovered.

The journey of science is, at its core, a journey back to simplicity. If physics is to move forward, it may do so by revisiting foundational assumptions, clarifying what can be empirically tested, and developing a model of matter that resonates with the simplicity we find in classical theories. As research continues, it is this blend of skepticism, open-mindedness, and empirical rigor that will pave the way for meaningful discoveries.

The failure of physics as a science?

It is a coincidence, but Sabine Hossenfelder just produced a new video in which she talks once again about the problems of academic physics, while I did what I said I would not do – and that is to write out why the discovery of new rare kaon decay modes is a problem for the Standard Model. I think the video and the paper complement each other nicely, although Sabine Hossenfelder probably still believes the strong force and weak interactions are, somehow, real. [I did not read her book, so I don’t know: I probably should buy it, but then one can only read one book at a time, can’t one?]

The paper (on ResearchGate – as usual: link here) does what Sabine Hossenfelder urges her former colleagues to do: if a hypothesis or an ad hoc theory doesn’t work, then scientists should be open and honest about that and go back to the drawing board. Indeed, in my most-read paper – on de Broglie’s matter-wave – I point out how de Broglie’s original thesis was misinterpreted and how classical quantum theory suddenly makes sense again when acknowledging that mistake: it probably explains why I am getting quite a lot of reads as an amateur physicist. So what’s this new paper of mine all about?

I go back to the original invention of the concept of strangeness, as documented by Richard Feynman in his 1963 Lectures on quantum physics (Vol. III, Chapter 11-5) and show why and how it does not make all that much sense. In fact, I always thought these new quantum conservation laws did not make sense theoretically and that, at best, they were or are what Dr. Kovacs and Dr. Vassallo refer to as phenomenological models rather than sound physical theories (see their chapter on superconductivity in their latest book). However, now it turns out these fancy new concepts do not even do what they are supposed to do, and that is to correctly describe the phenomenology of high-energy particle reactions. :-/

The alternative – a realist interpretation of quantum physics – is there. It is just not mainstream – yet! 🙂

Post scriptum (8 November 2024): For those who do not like to read, you can also watch my very last video on the same topic: what makes sense and what does not in academic or mainstream physics? Enjoy and, most importantly, do not take things too seriously! Life, family and friends – and work or action-oriented engagement – are far more important than personal philosophy or trying to find truth in science… 🙂

Using AI to solve the 80-year-old problem of the anomaly of the electron magnetic moment?

Pre-scriptum (3 October 2024): I came back from holiday and, because this week-long up-and-down became quite convoluted, I did what I like to do in a case like that: I took my Bamboo notebook and talked it all through in a video, which I added to my Real Quantum Physics channel on YouTube. I also updated my paper on RG: as usual, it went through a few versions, but this one – with a summary co-authored by ChatGPT-4 (and ChatGPT-o1) – should be the final one: enjoy!

Indeed, instead of listening to the international news on the war with Russia and on what is happening in the Middle East (all very depressing), you may want to listen to this and read the latest theory. Perhaps you will be inspired by it to develop your own pet realist theory of what an electron might actually be. I can assure you that it is more fun than trying to understand Feynman diagrams and how QED calculations work. 🙂 But don’t think you will win a Nobel Prize if you do not have the right connections and pedigree and all of that: see this analysis of what makes Nobel Prize winners Nobel Prize winners. 🙂

Original post:

I asked some questions to ChatGPT about my geometric explanation of the anomaly in the electron’s magnetic moment. Here is the chat: https://chatgpt.com/share/66f91760-68b8-8004-8cb2-7d2d3624e0aa. To me, it confirms the ‘explanation’ of mainstream QED makes no sense. We can take Schwinger’s factor and build a series of converging terms using that factor. We can also take my first rough cut at a first-order correction (πα²/8, see my very early 2019 paper on a classical explanation of the amm), and use that.
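
For reference, the numbers involved are easy to check. Schwinger’s first-order QED correction to the electron’s magnetic moment is α/2π; the πα²/8 factor is my own earlier rough cut and is evaluated here purely for comparison. This is a numerical sketch, not an endorsement of either term:

```python
import math

alpha = 0.0072973525693  # fine-structure constant (CODATA 2018)

schwinger = alpha / (2 * math.pi)  # Schwinger's 1948 result, about 0.0011614
rough_cut = math.pi * alpha**2 / 8 # the pi*alpha^2/8 factor, about 2.09e-5

a_measured = 0.001159652           # measured electron anomaly (approximate)

# Schwinger's leading term alone slightly overshoots the measured value;
# the higher-order terms of the QED series bring it back down.
print(schwinger, rough_cut, a_measured - schwinger)
```

Whether such terms can be given a geometric rather than a diagrammatic meaning is, of course, exactly the question the chat explores.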

You may wonder: why not ask ChatGPT about the best first-order factor to be used here considering the geometry of the situation? The fact is: I did, but the geometry is not all that easy. It first came up with the formula for a spherical cap, but that one does not do the trick. See the latter part of the conversation (link above).

I am on holiday now, and so I will switch off a while but I am thinking AI will do what two generations of ‘new’ quantum physicists did not do: come up with a model that is based on real physics and is easy to understand intuitively. 🙂

PS: Of course, I did another rapid-fire paper on ResearchGate to document it all (the logic step-by-step, so to speak). As the chat is public, feel free to continue the conversation. Note that I used the newest ChatGPT o1 version, now in preview but part of a subscription (which you may not have). Yet again a different beast! The older versions of ChatGPT may not be so smart. This conversation is totally worth the US$20/month I pay for my subscription. 🙂

PS 2: Now that I had it open, I also quickly queried it on my wildest hypothesis: a ‘mirror’ electromagnetic force explaining dark matter and dark energy. While it is totally wild (read: nuts), I entertain it because it does away with the need for an explanation in terms of some cosmological constant. Here is the conversation: https://chatgpt.com/share/66f92c7f-82a0-8004-a226-bde65085f18d. I like it that ChatGPT warns me a bit about privacy. It does look wild. However, it is nice to see how gentle ChatGPT is in pointing out what work needs to be done on a theory in order to make it look somewhat less wild. 🙂

PS 3 (yes, ChatGPT is addictive): I also queried it on the rather puzzling 8π/3 factor in the CODATA formula for the Thomson photon-electron scattering cross-section. See its response to our question in the updated chat: https://chatgpt.com/share/66f91760-68b8-8004-8cb2-7d2d3624e0aa. Just scroll down to the bottom. It took 31 seconds to generate the reply: I would be curious to know whether that is just courtesy from ChatGPT (we all like to think our questions are complicated, don’t we?), or whether this was effectively the time it needed to go through its knowledge base. Whatever the case may be, I think it is brilliant. 🙂 It is nothing to be afraid of, although I did feel a bit like: what is left for it to learn, except how to ask intelligent questions? What if it starts really learning by asking intelligent questions to us? I am all ready for it. 🙂
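For what it is worth, the 8π/3 factor itself is standard textbook geometry: it comes from integrating the dipole scattering pattern (r_e²/2)(1 + cos²θ) over the full sphere. A quick numerical check in Python (my own sketch, not taken from the chat):

```python
import math

r_e = 2.8179403262e-15  # classical electron radius (m), CODATA value

# Integrate dσ/dΩ = (r_e²/2)(1 + cos²θ) over the full sphere:
#   σ = ∫₀^π (r_e²/2)(1 + cos²θ) · 2π sinθ dθ
n = 100_000
dtheta = math.pi / n
sigma = sum(
    (r_e**2 / 2) * (1 + math.cos(t)**2) * 2 * math.pi * math.sin(t) * dtheta
    for t in (i * dtheta + dtheta / 2 for i in range(n))  # midpoint rule
)

sigma_closed_form = (8 * math.pi / 3) * r_e**2
print(f"numerical integral: {sigma:.6e} m^2")
print(f"8π/3 · r_e²       : {sigma_closed_form:.6e} m^2")
```

Both numbers come out at about 6.6525×10⁻²⁹ m², the CODATA Thomson cross-section: the 8π/3 is simply the solid-angle integral of the (1 + cos²θ) angular distribution, with no mystery left over.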

The ultimate zbw electron model

Just after finishing a rather sober and, probably, overly pessimistic reflection on where the Zitterbewegung interpretation of quantum theory stands, I am excited to see a superbly written article by Dr. Kovacs and Dr. Vassallo on what I now think of as the ultimate electron model: Rethinking electron statistics rules (10 September 2024). I think it is great because it addresses several points in my rather depressing description of the state of zbw theory:

  1. Multiple Zitterbewegung interpretations of what an electron actually is currently coexist. Indeed, both mainstream and non-mainstream physicists have now been going back and forth for about 100 years on this or that electron model: the referenced Kovacs/Vassallo article effectively appeared in a special journal issue titled “100 Years of Quantum Matter Waves: Celebrating the Work of Louis De Broglie.” 100+ years of discussion have basically led us back to Parson’s 1915 ring current model, which Joseph Larmor presented so well at the 1921 Solvay Conference. We do not think that is a good situation: it looks a bit like 100 years of re-inventing the wheel – or, perhaps, I should say: wheels within wheels. 🙂 I could write more about this, but I am happy to see that the discussion on whether or not there should be a 1/2 factor in the electron’s frequency – just one example of differing views here – may now be considered settled: de Broglie’s matter-wave frequency is just the same as the Planck-Einstein frequency in this paper. This factor 2 or 1/2 pops up when considering ideas such as the effective mass of the zbw charge or – in the context of Schrödinger’s equation – because we’re modeling the motion of electron pairs rather than single electrons (see the annexes to my paper on de Broglie’s matter-wave concept). In short: great! Now we can, finally, leave those 100+ years of discussions behind us. 🙂
  2. Dr. Kovacs and Dr. Vassallo also explore the nature of superconductivity and Bose-Einstein statistics. Not only does their analysis do away with the rather mystical explanation in the final chapter of Feynman’s lectures on quantum mechanics, but it also offers a very fine treatment of n-electron systems. Their comments on ‘bosonic’ and ‘fermionic’ properties of matter-particles also tie in with my early assessment that the boson-fermion dichotomy has no ontological basis.

The hundreds of downloads of their article in the two weeks since it was published also show that new and old ways of thinking and modelling come nicely together in it: if your articles get hundreds of reads as soon as they are published, then you are definitely not non-mainstream any more. Both Dr. Kovacs and Dr. Vassallo have an extraordinary talent for rephrasing old questions in the new “language” of modern quantum theory. That is to be lauded. Hopefully, work on a proton and a neutron model will now complement what I think of as the ultimate electron model based on a local and realist interpretation of what de Broglie’s matter-wave actually is. Indeed, critics of modern quantum theory often quote the following line from Philip Pearle’s Classical Electron Models [1]:

“The state of the classical electron theory reminds one of a house under construction that was abandoned by its workmen upon receiving news of an approaching plague. The plague in this case, of course, was quantum theory. As a result, classical electron theory stands with many interesting unsolved or partially solved problems.”

I think Dr. Kovacs and Dr. Vassallo may have managed to finish this “abandoned construction” – albeit with an approach which differs significantly from that of Pearle: that is good because I think there were good reasons for the “workmen” to leave the construction site (see footnote [1]). 🙂 So, yes, I hope they will be able – a few years from now – to also solve the questions related to a Zitterbewegung proton and neutron model.

In fact, they already have a consistent proton model (see: the proton and Occam’s Razor, May 2023), but something inside of me says that they should also explore different topologies, such as this Lissajous-like trajectory which intrigues me more than helical/toroidal approaches – but then who am I? I am the first to recognize my limitations as an amateur and it is, therefore, great to see professionals such as Dr. Kovacs and Dr. Vassallo applying their formidable skills and intuition to the problem. 🙂


[1] Pearle’s paper is the seventh in a volume of eight chapters. The book is, quite simply, titled Electromagnetism (1982), and it was put together and edited by Doris Teplitz. Few who quote this famous line bother to read the Philip Pearle paper itself. The paper effectively presents what Pearle refers to as classical electron models: all of them are based on “rigid or flexible shell surfaces” of charge, which is why we think they did not “cut it” for the many “workmen” (read: the mainstream scientists who preferred the Bohr-Heisenberg amplitude math and the probability theory that comes with it) who left the then unfinished construction.

We think the approach taken by Dr. Kovacs and Dr. Vassallo is more productive when it comes to bringing mainstream and Zitterbewegung theorists together around a common mathematical framework in which the probabilities are explained based on a plain interpretation of Schrödinger’s ‘discovery’ – which is that the elementary wavefunction represents a real equation of motion of a pointlike but not infinitesimally small charge inside of an electron.

As for trying out different topologies, we understand Dr. Kovacs and Dr. Vassallo are working very hard on that, so all we can do is to wish them the best of luck. Godspeed! 🙂

Post scriptum

A researcher I was in touch with a few years ago sent me a link to the (virtual) Zitter Institute: https://www.zitter-institute.org/. It is a network and resource center for non-mainstream physicists who successfully explored – and keep exploring, of course – local/realist interpretations of quantum mechanics by going back to Schrödinger’s original and alternative interpretation of what an electron actually is: a pointlike (but not infinitesimally small) charge orbiting around in circular motion, with:

(i) the trajectory of its motion being determined by the Planck-Einstein relation, and

(ii) an energy – given by Einstein’s mass-energy equivalence relation – which perfectly fits Wheeler’s “mass-without-mass” idea.

I started exploring Schrödinger’s hypothesis myself about ten years ago – as a full-blown alternative to the Bohr-Heisenberg interpretation of quantum mechanics (which I think of as metaphysical humbug, just like Einstein and H.A. Lorentz at the time) – and consistently blogged and published about it: here on this website, and then on viXra, Academia and, since 2020, ResearchGate. So I checked out this new site, and I see the founding members added my blog site as a resource to their project list.

[…]

I am amazingly pleased with that. I mean… My work is much simpler than that of, say, Dr. John G. Williamson (CERN/Philips Research Laboratories/Glasgow University) and Dr. Martin B. van der Mark (Philips Research Laboratories), who created the Quantum Bicycle Society (https://quicycle.com/).

So… Have a look – not at my site (I think I did not finish the work I started) but at the other resources of this new Institute: it looks like this realist and local interpretation of quantum mechanics is no longer non-mainstream… Sweet! It makes me feel the effort I put into all of this has paid off! 😉 Moreover, some of my early papers (2018-2020) are listed as useful papers to read. I think that is better than being published in some obscure journal. 🙂

I repeat: my own research interest has shifted to computer science, logic and artificial intelligence (you will see recent papers on my RG site are all about that now). It is just so much more fun and it also lines up better with my day job as a freelance IT project manager. So, yes, it is goodbye – but I am happy I can now refer all queries about my particle models and this grand synthesis between old and new quantum mechanics to the Zitter Institute.

It’s really nice: I have been in touch with about half of the founding members of this Institute over the past ten years (casually, or in a more sustained way while discussing this or that 2D or 3D model of an electron, proton, or neutron), and they are all great and amazing researchers because they look for truth in science and are very much aware of this weird tendency of modern-day quantum scientists turning their ideas into best-sellers perpetuating myths and mysteries. [I am not only thinking of the endless stream of books from authors like Roger Penrose (the domain for this blog was, originally, reading Penrose rather than reading Feynman) or Brian Greene here, but also of what I now think of as rather useless MIT or edX online introductions to quantum physics and quantum math.]

[…]

Looking at the website, I see the engine behind it: Dr. Oliver Consa. I was in touch with him too. He drew my attention to remarkable flip-flop articles such as Willis Lamb’s anti-photon article (an article which everyone should read, I think: unfortunately, you have to pay for it) and remarkable interviews with Freeman Dyson. Talking of the latter (whom I think of as “the Wolfgang Pauli of the third generation of quantum physicists”, because, like Pauli, he helped so many others get a Nobel Prize – Dyson himself never got one, by the way), this is one of those interviews you should watch: just four years before his death, Freeman Dyson plainly admits QED and QFT are a totally unproductive approach: a “dead end”, as Dyson calls it.

So, yes, I am very pleased and happy. It makes me feel my sleepless nights and hard weekend work on this over the past decade have not been in vain! Paraphrasing Dyson in the above-mentioned video interview, I’d say: “It is the end of the story, and that particular illumination was a very joyful time.” 🙂

Thank you, Dr. Consa. Thank you, Dr. Vassallo, Dr. Burinskii, Dr. Meulenberg, Dr. Kovacs, and – of course – Dr. Hestenes – who single-handedly revived the Zitterbewegung interpretation of quantum mechanics in the 1990s. I am sure I forgot to mention some people. Sorry for that. I will wrap up my post here by saying a few more words about David Hestenes.

I really admire him deeply. Moving away from the topic of high-brow quantum theory, I think his efforts to reform K-12 education in math and physics are even more remarkable than the new space-time algebra (STA) he invented. I am 55 years old, and so I know all about the small and pleasant burden of helping kids with math and statistics in secondary school and at university: the way teachers now have to convey math and physics to kids is plain dreadful. I hope it will get better. It has to. If the US and the EU want to keep leading in research, then STEM education (Science, Technology, Engineering, and Mathematics) needs a thorough reform. :-/

Cold fusion (LENR) revisited…

One of the nice things that happened to me on this rather weird exploration of the world of quantum physics – a journey which I now want to leave behind, because I found what I wanted to find: a common-sense interpretation of it all, and a concise model of elementary particles – was that, back in 2020, I was invited to join a low-key symposium on cold fusion (or ‘low-energy nuclear reactions’, as the field is now referred to): RNBE-2020. That was followed by rather intense exchanges with a few scientists who work or worked on a theory centered around the concept of deep nuclear electron orbitals. All very interesting, because it confirmed what I think is the case in this field: there are some crooks around, but most research is done by very honest scientists of great integrity, albeit – admittedly – all a bit on the fringes of mainstream theory.

I summed up my rather skeptical conclusions on these conversations in a 2021 blog post here: cold and hot fusion – just hot air? The ‘hot’ in the title of that post does not refer to real hot nuclear fusion (because that is not just ‘hot’ but extremely hot: we are not talking thousands but millions of degrees Celsius here). No, it refers to the rather high temperatures of things like the hydrino scheme which – in my not-so-humble view – has seriously damaged the credibility of the field: these high temperatures are still – visibly – in the thermal range. Indeed, I looked at the videos, and I just see some kind of small copper alloy furnaces melting away. Now, copper alloys melt around 1000 °C, and burning hydrogen yields temperatures around 2000 °C. Hence, in the absence of any other evidence (such as spectroscopic measurements), I conclude these BLP experiments are just burning ordinary hydrogen. That is sad, because cold fusion and LENR already suffered from a poor reputation.

But so I had long email exchanges on more interesting things, and that was nice. Going back to the possibility of deep electron orbitals being real, somehow, I initially entertained the rather vague idea that – who knows, right? – the mix of Zitterbewegung charges (positive and negative) – which, in my ‘mass-without-mass’ model of elementary particles, have zero rest mass – might, perhaps, combine in nuclear oscillations that have not been modeled so far. Indeed, when everything is said and done, I myself broke my teeth – so to speak – on trying to model the neutron itself – stable only inside of a nucleus – as a neutral ring current or nuclear ‘glue’ between protons. I did not succeed, but I still believe it should be possible. And if an analytical model could be found to model the motion of multiple pointlike zbw charges as a stable equilibrium that – as a whole – respects the Planck-Einstein relation, then we might, perhaps, also discover novel ways to unleash the binding energy between them, right?

So, these are some of the good things I want to – carefully and prudently – state about the field. I must now say why I am and remain skeptical. It is fair to say that everyone can easily see and verify how the energy of, say, a photon in a laser beam, can dissipate away and, in the process, trigger very different reactions – reactions that one would not associate with the energies of the incoming photons: all of these would qualify as some kind of anomalous heat, I would think. Think, for example, of using a high-powered laser to cut small tree branches, which is possible now. I have not studied the mechanics of this (too bad, because I’ve been wanting to study the mechanics of lasers for many years now, but I never found the time to dig into Einstein’s or other theories on how it works – not approximately, but exactly), but I can easily see how the process of Compton scattering would explain why a substantial part of the energy of the photons would end up in (1) outgoing photons with lower energy and (2) electrons with substantially higher kinetic energies. This kinetic energy would then redistribute all over the system (not only other electrons but even the massive nuclei at the center of each atomic and molecular system inside of these easy-to-burn materials, be they paper, cardboard, or wood). In short, we get heat: thermal energy. And quite a lot of it.

However, this process involves triggering lower-energy reactions: thermal or chemical reactions (fire actually is chemistry). [Also, you can easily see a lot of energy gets lost: using a 2000 W laser to cut branches that are only a few cm in diameter is not very energy-efficient, right? This is a point which I also talk about in my previous post on LENR: what is the energy balance? What is the total input energy and what is the nuclear fuel, respectively, and how do these two elements combine to make you think you’d get net energy out of the whole process?]

Regardless of the total energy equation (input – output), the first question is the more relevant one, because it goes to the core of the what and how of LENR. My blunt appraisal here is that of other skeptics: I cannot imagine how the energy in laser photons could – somehow – build up a sufficient reservoir of energy, to then reach a threshold and trigger an outright and proper nuclear or high-energy reaction.

If it is possible at all, it would have to be some kind of resonance process: a lower frequency feeding into a much higher-frequency phenomenon and gradually increasing its amplitude. How would it do that? That is simple. Harmonic oscillations have several natural frequencies, and the lower-energy oscillation can feed into one or more of these. See my post on music and math for an analytical explanation or – if you want something simpler – just think of a child on a swing, to whom – once in a while – you give an extra push in the back. You do not have to push each and every time the swing comes back but, when you do push, you have to do it at the right time. 🙂
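The swing analogy can be made quantitative with a driven, damped harmonic oscillator. The toy simulation below (my own sketch; all parameters are arbitrary illustrative choices) drives the same oscillator once at its natural frequency and once well off it, and shows how only the on-resonance drive builds up a large amplitude:

```python
import math

def drive_oscillator(omega_drive, omega0=1.0, gamma=0.05, force=1.0,
                     dt=0.001, t_end=200.0):
    """Semi-implicit Euler for x'' = -omega0^2*x - gamma*x' + F*cos(omega_d*t)."""
    x, v, t = 0.0, 0.0, 0.0
    peak = 0.0
    while t < t_end:
        a = -omega0**2 * x - gamma * v + force * math.cos(omega_drive * t)
        v += a * dt          # update velocity first (keeps the scheme stable)
        x += v * dt
        peak = max(peak, abs(x))
        t += dt
    return peak

on_resonance  = drive_oscillator(omega_drive=1.0)   # pushing "at the right time"
off_resonance = drive_oscillator(omega_drive=2.5)   # pushing at the wrong rhythm

print(f"peak amplitude on resonance : {on_resonance:.2f}")
print(f"peak amplitude off resonance: {off_resonance:.2f}")
```

With these numbers, the on-resonance amplitude climbs toward F/(γω₀) = 20, while the off-resonance drive never gets past a fraction of a unit: the energy transfer really does depend on the timing of the “push”, exactly as with the swing.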

Going back to LENR, we may think the frequency of a laser may feed into a nuclear oscillation, gradually increasing its amplitude, until the accumulated energy is sufficiently high and reaches some threshold triggering a proper nuclear or high-energy reaction. Frankly, I think this could possibly explain low-energy nuclear reactions. So, yes, it might be possible.

At the same time, I think it is rather unlikely. Why? At the smallest of scales, the Planck-Einstein relation holds, and so we have discrete energy states. These discrete energy states of protons, electrons, nuclei, atoms or molecules as a whole do not have any in-between states in which you can dump excess or surplus energy from somewhere outside. A photon-electron interaction triggers a reaction, and it does so not gradually but (almost) instantly. So, energy is emitted as soon as it is absorbed. Disequilibrium states do not last very long: atomic systems go back to equilibrium very quickly, and any excess energy is quickly emitted as photons or absorbed as internal heat, which is a (very) low-energy oscillation of the massive bits in whatever material you are using in these experiments (most experiments are on palladium, and the discussions on the effects impurities might have in the experiments are – frankly – a bit worrying). In any case, the point is that these disequilibrium states surely do not last long enough to sustain the kind of resonance process that, say, made the Tacoma Narrows Bridge collapse. :-/ To make a long story short, I am and remain skeptical.
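The scale mismatch behind this skepticism is easy to quantify: a typical laser photon carries about an electronvolt, while nuclear transitions sit in the keV-to-MeV range. A back-of-the-envelope check (the 1064 nm Nd:YAG line and the 1 MeV nuclear scale are my illustrative choices, not specific to any LENR experiment):

```python
# Back-of-the-envelope comparison of laser-photon and nuclear energy scales.
h = 6.62607015e-34      # Planck constant (J·s)
c = 2.99792458e8        # speed of light (m/s)
eV = 1.602176634e-19    # joules per electronvolt

wavelength = 1064e-9    # a common Nd:YAG laser line (m); illustrative choice
photon_energy_eV = h * c / wavelength / eV

nuclear_scale_eV = 1e6  # typical nuclear transition, order of magnitude (eV)

print(f"photon energy : {photon_energy_eV:.2f} eV")   # ≈ 1.17 eV
print(f"gap to nuclear: {nuclear_scale_eV / photon_energy_eV:.0f}×")
```

Bridging a factor of nearly a million through gradual accumulation is exactly what the discreteness argument above says should not happen, which is why the experimental claims are so surprising.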

However, to my surprise, I was invited to join a Zoom call and listen to a rather interesting discussion on the future of both the French and the International Society for Condensed Nuclear Matter (SFCMNS and ISCMNS, respectively – I will not put the links because they are both revamping their websites now) after they had wrapped up their 25th International Conference.

What I saw and heard made me quite happy: these were all honest and critical scientists looking at real-life experiments that do yield surprising results – results that contradict my rather skeptical theoretical arguments (above) against LENR being possible. I also noted the Anthropocène Institute invests in them, and that Nobuo Tanaka, former Executive Director of the International Energy Agency (not to be confused with the International Atomic Energy Agency!), spoke at ICCF-24, alongside a lot of other very serious people. Also, it is quite obvious that nuclear energy is no longer out. On the contrary, it is in again and – as part of new investments in nuclear research – I think the LENR field should also be reconsidered, despite its chequered past. I also note LENR research in Japan is getting a lot more funding than research in the EU or the US, so perhaps they are seeing something that we do not see (it would be interesting to check what happens in the patents or IPR area on this). 🙂

So, all these considerations add up to more than enough – for me, at least – to continue giving these researchers the benefit of the doubt. We live in a fascinating world and, as the Wikipedia article on cold fusion notes, the discovery of the Mössbauer effect and other strange nuclear effects was also rather unexpected – in the sense that it had not been foreseen or predicted by some theorist. I do, therefore, not agree with the same Wikipedia article dismissing LENR as ‘pathological’ or ‘cargo cult’ science.

If anything, I think mainstream research sometimes also suffers from what critics say of the LENR field: “people are tricked into false results … by subjective effects, wishful thinking or threshold interactions.” But that is only a personal and rather irrelevant remark, as I am quitting my hobbyist study of physics now. It has lasted long enough (over a decade, really) and – as mentioned a few times already – I think I sort of get it now. As Feynman famously said in the Epilogue to his Lectures: “After all, it isn’t as horrible as it looks.”

I might add: I think the end of physics is near. All that’s left, is engineering. And quite a lot of it. 🙂

The End of Physics

I wrote a post with this title already, but this time I mean it in a rather personal way: my last paper – with the same title – on ResearchGate sums up rather well whatever I achieved, and also whatever I did not explore any further because time and energy are lacking: I must pay more attention to my day job nowadays. 🙂

I am happy with the RG score all of my writing generated, the rare but heartfelt compliments I got from researchers with far more credentials than myself (such as, for example, Dr. Emmanouil Markoulakis of Nikolaos, which led me to put a paper on RG with a classical explanation of the Lamb shift), various friendly but not necessarily always agreeing commentators (one of them commenting here on this post: a good man!), and, yes, the interaction on my YouTube channel. But so… Well… That is it, then! 🙂

As a farewell, I will just quote from the mentioned paper – The End of Physics (only as a science, of course) – below, and I hope that will help you to do what all great scientists would want you to do, and that is to think things through for yourself. 🙂

Brussels, 22 July 2023

Bohr, Heisenberg, and other famous quantum physicists – think of Richard Feynman, John Stewart Bell, Murray Gell-Mann, and quite a few other Nobel Prize-winning theorists[1] – have led us astray. They swapped a rational world view – based on classical electromagnetic theory and statistical determinism – for a mystery world in which anything is possible, but nothing is real.

They invented ‘spooky action at a distance’ (as Einstein derogatorily referred to it), for example. So, what actually explains that long-distance interaction, then? It is quite simple. There is no interaction, and so there is nothing spooky or imaginary or unreal about it: if by measuring the spin state of one photon, we also know the spin state of its twin far away, then it is – quite simply – because physical quantities such as energy and momentum (linear or angular) will be conserved if nothing else interferes after the two matter- or light-particles have been separated.

Plain conservation laws explain many other things that are being described as ‘plain mysteries’ in quantum physics. The truth is this: there are no miracles or mysteries: everything has a physical cause and can be explained.[2] For example, there is also nothing mysterious about the interference pattern and the trajectory of an electron going through a slit, or one of two nearby slits. An electron is pointlike, but it is not infinitesimally small: it has an internal structure which explains its wave-like properties. Likewise, Mach-Zehnder one-photon interference can easily be explained when thinking of its polarization structure: a circularly polarized photon can be split into two linearly polarized electromagnetic waves, which are photons in their own right. Everything that you have been reading about mainstream quantum physics is, perhaps, not wrong, but it is highly misleading because it is all couched in guru language and mathematical gibberish.

Why is it that mainstream physicists keep covering this up? I am not sure: it is a strange mix of historical accident and, most probably, the human desire to be original or special, or the need to mobilize money for so-called fundamental research. I also suspect there is a rather deceitful intention to hide truths about what nuclear science should be all about, and that is to understand the enormous energies packed into elementary particles.[3]

The worst of all is that none of the explanations in mainstream quantum physics actually works: mainstream theory does not have a sound theory of signal propagation, for example (click the link to my paper on that or – better, perhaps – this link to our paper on signal propagation), and Schrödinger’s hydrogen model is a model of a hypothetical atom modelling orbitals of equally hypothetical zero-spin electron pairs. Zero-spin electrons do not exist, and real-life hydrogen only has one proton at its center, and one electron orbiting around it. Schrödinger’s equation is relativistically correct – even if all mainstream physicists think it is not – but the equation includes two mistakes that cancel each other out: it confuses the effective mass of an electron in motion with its total mass[4], and the 1/2 factor which is introduced by the m = 2meff substitution also takes care of the doubling of the potential that is needed to make the electron orbitals come out alright.

The worst thing of all is that mainstream quantum physicists never accurately modeled what they should have modeled: the hydrogen atom as a system of a real proton and a real electron (no hypothetical, infinitesimally small and structureless spin-zero particles). If they had done that, they would also be able to explain why hydrogen atoms come in molecular H2 pairs, and they would have a better theory of why two protons need a neutron to hold together in a helium nucleus. Moreover, they would have been able to explain what a neutron actually is.[5]


[1] John Stewart Bell was nominated for a Nobel Prize, but died from a brain hemorrhage before he could accept the prize for his theorem.

[2] The world of physics – at the micro-scale – is already fascinating enough: why should we invent mysteries?

[3] We do not think these energies can be exploited any time soon. Even nuclear energy is just binding energy between protons and neutrons: a nuclear bomb does not release the energy that is packed into protons. These elementary particles survive the blast: they are the true ‘atoms’ of this world (in the Greek sense of ‘a-tom’, which means indivisible).

[4] Mass is a measure of the inertia to a change in the state of motion of an oscillating charge. We showed how this works by explaining Einstein’s mass-energy equivalence relation and clearly distinguishing the kinetic and potential energy of an electron. Feynman first models an electron in motion correctly, with an equally correct interpretation of the effective mass of an electron in motion, but then substitutes this effective mass by half the electron mass (meff = m/2) in an erroneous reasoning process based on the non-relativistic kinetic energy concept. The latter reasoning also leads to the widespread misconception that Schrödinger’s equation would not be relativistically correct (see the Annexes to my paper on the matter-wave). For the trick it has to do, Schrödinger’s wave equation is correct – and then I mean also relativistically correct. 🙂

[5] A neutron is unstable outside of its nucleus. We, therefore, think it acts as the glue between protons, and it must be a composite particle.