Stability First: A Personal Programme for Re-reading Particle Physics

Over the past few years, I have written a number of papers on physics—mostly exploratory, sometimes speculative, always driven by the same underlying discomfort.

Not with the results of modern physics. Those are extraordinary.
But with the ordering of its explanations.

We are very good at calculating what happens.
We are less clear about why some things persist and others do not.

That question—why stability appears where it does—has quietly guided much of my thinking. It is also the thread that ties together a new manuscript I have just published on ResearchGate:

“Manuscript v0.2 – A stability-first reinterpretation of particle physics”
👉 https://www.researchgate.net/publication/398839393_Manuscript_v02

This post is not a summary of the manuscript. It is an explanation of why I wrote it, and what kind of work it is meant to enable.


Not a new theory — a different starting point

Let me be clear from the outset.

This manuscript does not propose a new theory.
It does not challenge the empirical success of the Standard Model.
It does not attempt to replace quantum field theory or nuclear phenomenology.

What it does is much more modest—and, I hope, more durable.

It asks whether we have been starting our explanations at the wrong end.

Instead of beginning with abstract constituents and symmetries, the manuscript begins with something far more pedestrian, yet physically decisive:

Persistence in time.

Some entities last.
Some decay.
Some exist only fleetingly as resonances.
Some are stable only in the presence of others.

Those differences are not cosmetic. They shape the physical world we actually inhabit.


From electrons to nuclei: stability as a guide

The manuscript proceeds slowly and deliberately, revisiting familiar ground:

  • the electron, as an intrinsically stable mode;
  • the proton, as a geometrically stable but structurally richer object;
  • the neutron, as a metastable configuration whose stability exists only in relation;
  • the deuteron, as the simplest genuinely collective equilibrium;
  • and nuclear matter, where stability becomes distributed across many coupled degrees of freedom.

At no point is new empirical content introduced.
What changes is the interpretive emphasis.

Stability is treated not as an afterthought, but as a physical clue.


Interaction without mysticism

The same approach is applied to interaction.

Scattering and annihilation are reinterpreted not as abstract probabilistic events, but as temporary departures from equilibrium and mode conversion between matter-like and light-like regimes.

Nothing in the standard calculations is altered.
What is altered is the physical picture.

Wavefunctions remain indispensable—but they are treated as representations of physical configurations, not as substitutes for them.

Probability emerges naturally from limited access to phase, geometry, and configuration, rather than from assumed ontological randomness.
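To make that idea concrete, here is a toy sketch of my own (not taken from the manuscript): a perfectly deterministic oscillator whose phase we cannot access. Averaging over the unknown phase produces a definite probability density — the classical arcsine law, which piles up at the turning points — without any ontological randomness.

```python
import math
import random

# Toy illustration (my sketch, not from the manuscript): a deterministic
# oscillator x = cos(theta). We know the geometry but not the phase theta.
# Sampling theta uniformly -- i.e. averaging over our ignorance -- yields
# the classical arcsine density, peaked at the turning points x = +/-1.

random.seed(0)
n = 100_000
xs = [math.cos(random.uniform(0.0, 2.0 * math.pi)) for _ in range(n)]

edge = sum(1 for x in xs if abs(x) > 0.9) / n    # near the turning points
center = sum(1 for x in xs if abs(x) < 0.1) / n  # near the middle

# Exact values from the arcsine distribution, for comparison:
edge_exact = 2.0 * math.acos(0.9) / math.pi                   # ~0.287
center_exact = (math.acos(-0.1) - math.acos(0.1)) / math.pi   # ~0.064

print(f"P(|x|>0.9): simulated {edge:.3f}, exact {edge_exact:.3f}")
print(f"P(|x|<0.1): simulated {center:.3f}, exact {center_exact:.3f}")
```

The oscillator spends most of its time near the turning points, so limited access to phase alone is enough to generate a sharply structured probability distribution.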


Why classification matters

The manuscript ultimately turns to the Particle Data Group catalogue.

The PDG tables are one of the great achievements of modern physics. But they are optimized for calculation, not for intuition about persistence.

The manuscript proposes a complementary, stability-first index of the same data:

  • intrinsically stable modes,
  • metastable particle modes,
  • prompt decayers,
  • resonances,
  • and context-dependent stability (such as neutrons in nuclei).

Nothing is removed.
Nothing is denied.

The proposal is simply to read the catalogue as a map of stability regimes, rather than as a flat ontology of “fundamental particles”.


A programme statement, not a conclusion

This manuscript is intentionally incomplete.

It does not contain the “real work” of re-classifying the entire PDG catalogue. That work lies ahead and will take time, iteration, and—no doubt—many corrections.

What the manuscript provides is something else:

a programme statement.

A clear declaration of what kind of questions I think are still worth asking in particle physics, and why stability—rather than constituent bookkeeping—may be the right place to ask them from.


Why I am sharing this now

I am publishing this manuscript not as a final product, but as a marker.

A marker of a line of thought I intend to pursue seriously.
A marker of a way of reading familiar physics that I believe remains underexplored.
And an invitation to discussion—especially critical discussion—on whether this stability-first perspective is useful, coherent, or ultimately untenable.

Physics progresses by calculation.
It matures by interpretation.

This manuscript belongs to the second category.

If that resonates with you, you may find the full text of interest.


Jean-Louis Van Belle
readingfeynman.org

Moderation, Measurements, and the Temptation of Ontology

Why physics must resist becoming metaphysics


Some time ago, I found myself involved in what can best be described as an intellectual falling-out with a group of well‑intentioned amateur researchers. This post is meant to close that loop — calmly, without bitterness, and with a bit of perspective gained since.

One of the more sensible people in that group took the trouble to put an interesting article on my desk, and so I want to talk about that one here.


Gary Taubes, CERN, and an unexpected reinforcement

It’s an article by Gary Taubes on the discovery of the W and Z bosons at CERN, later incorporated into his book Nobel Dreams. Far from undermining my position, the article did the opposite: it reinforced the point I had been trying to make all along.

Taubes does not engage in ontology. He does not ask what W and Z bosons are in a metaphysical sense. Instead, he describes what was measured, how it was inferred, and how fragile the boundary is between evidence and interpretation in large‑scale experimental physics.

This connects directly to an earlier piece I published here:

Something Rotten in the State of QED: A Careful Look at Critique, Sociology, and the Limits of Modern Physics
https://readingfeynman.org/2025/12/01/something-rotten-in-the-state-of-qed-a-careful-look-at-critique-sociology-and-the-limits-of-modern-physics/

Let me restate the central point, because it is still widely misunderstood:

Criticizing the ontologization of W/Z bosons (or quarks and gluons) is not the same as denying the reality of the measurements that led to their introduction.

The measurements are real. The detector signals are real. The conservation laws used to infer missing energy and momentum are real. What is not forced upon us is the metaphysical leap that turns transient, unstable interaction states into quasi‑permanent “things.”


Stable vs. unstable states — a distinction we keep blurring

My own work has consistently tried to highlight a distinction that I find increasingly absent — or at least under‑emphasized — in mainstream physics discourse:

  • Stable states: long‑lived, persistent, and directly accessible through repeated measurement
  • Unstable or intermediate states: short‑lived, inferred through decay products, reconstructed statistically

W and Z bosons belong firmly to the second category. So do quarks and gluons in their confined form. Treating them as ontologically equivalent to stable particles may be pragmatically useful, but it comes at a conceptual cost.

It is precisely this cost that I criticize when I criticize mainstream physics.

Not because mainstream physics is “wrong.”
But because it has become too comfortable collapsing epistemology into ontology, especially in its public and pedagogical narratives.


Why this matters now

There is another reason this distinction matters, and it is a forward‑looking one.

The probability that something radically new — in the sense of a fundamentally novel interaction or particle family — will be discovered in the coming decades is, by most sober assessments, rather low. What we will have, however, is:

  • More precise measurements
  • Larger datasets
  • Longer baselines
  • Better statistical control

In that landscape, progress will depend less on naming new entities and more on bridging what has already been measured, sometimes decades ago, but never fully conceptually digested.

That is where I intend to focus my efforts in the coming years.

Not by founding a new church.
Not by declaring metaphysical revolutions.
But by carefully working at the interface between:

  • what was actually measured,
  • what was legitimately inferred,
  • and what we may have too quickly reified.

Closing note

If there is one lesson I take — from the past dispute, from Taubes, from the history of CERN or fundamental physics in general — it is this:

Physics progresses best when it remains modest about what it claims to be about.

Measurements first. Interpretation second. Ontology, if at all, only with restraint.

That stance may be unsatisfying to those looking for grand narratives. But it is, I believe, the only way to keep physics from quietly turning into metaphysics while still wearing a lab coat.

Jean Louis Van Belle

Making Sense of What We Already Know…

Living Between Jobs and Life: AI, CERN, and Making Sense of What We Already Know

For decades (all of my life, basically :-)), I’ve lived with a quiet tension. On the one hand, there is the job: institutions, projects, deliverables, milestones, and what have you… On the other hand, there is life: curiosity, dissatisfaction, and the persistent feeling that something fundamental is still missing in how we understand the physical world. Let me refer to the former as “the slow, careful machinery of modern science.” 🙂

These two are not the same — obviously — and pretending they are has done physics no favors (think of geniuses like Solvay, Edison or Tesla here: they were considered to be ‘only engineers’, right? :-/).

Jobs optimize. Life explores.

Large scientific institutions are built to do one thing extremely well: reduce uncertainty in controlled, incremental ways. That is not a criticism; it is a necessity when experiments cost billions, span decades, and depend on political and public trust. But the price of that optimization is that ontological questions — questions about what really exists — are often postponed, softened, or quietly avoided.

And now we find ourselves in a new historical moment.


The Collider Pause Is Not a Crisis — It’s a Signal

Recent reports that China is slowing down plans for a next-generation circular collider are not shocking. If anything, they reflect a broader reality:

For the next 40–50 years, we are likely to work primarily with the experimental data we already have.

That includes data from CERN that has only relatively recently been made fully accessible to the wider scientific community.

This is not stagnation. It is a change of phase.

For decades, theoretical physics could lean on an implicit promise: the next machine will decide. Higher energies, larger datasets, finer resolution — always just one more accelerator away. That promise is now on pause.

Which means something important:

We can no longer postpone understanding by outsourcing it to future experiments.


Why CERN Cannot Do What Individuals Can

CERN is a collective of extraordinarily bright individuals. But this is a crucial distinction:

A collective of intelligent people is not an intelligent agent.

CERN is not designed to believe an ontology. It is designed to:

  • build and operate machines of unprecedented complexity,
  • produce robust, defensible measurements,
  • maintain continuity over decades,
  • justify public funding across political cycles.

Ontology — explicit commitments about what exists and what does not — is structurally dangerous to that mission. Not because it is wrong, but because it destabilizes consensus.

Within a collective:

  • someone’s PhD depends on a framework,
  • someone’s detector was designed for a specific ontology,
  • someone’s grant proposal assumes a given language,
  • someone’s career cannot absorb “maybe the foundations are wrong.”

So even when many individuals privately feel conceptual discomfort, the group-level behavior converges to:
“Let’s wait for more data.”

That is not cowardice. It is inevitability.


We Are Drowning in Data, Starving for Meaning

The irony is that we are not short on data at all.

We have:

  • precision measurements refined to extraordinary accuracy,
  • anomalies that never quite go away,
  • models that work operationally but resist interpretation,
  • concepts (mass, spin, charge, probability) that are mathematically precise yet ontologically vague.

Quantum mechanics works. That is not in dispute.
What remains unresolved is what it means.

This is not a failure of experiment.
It is a failure of sense-making.

And sense-making has never been an institutional strength.


Where AI Actually Fits (and Where It Doesn’t)

I want to be explicit: I still have a long way to go in how I use AI — intellectually, methodologically, and ethically.

AI is not an oracle.
It does not “solve” physics.
It does not replace belief, responsibility, or judgment.

But it changes something fundamental.

AI allows us to:

  • re-analyze vast datasets without institutional friction,
  • explore radical ontological assumptions without social penalty,
  • apply sustained logical pressure without ego,
  • revisit old experimental results with fresh conceptual frames.

In that sense, AI is not the author of new physics — it is a furnace.

It does not tell us what to believe.
It forces us to confront the consequences of what we choose to believe.


Making Sense of What We Already Know

The most exciting prospect is not that AI will invent new theories out of thin air.

It is that AI may help us finally make sense of experimental data that has been sitting in plain sight for decades.

Now that CERN data is increasingly public, the bottleneck is no longer measurement. It is interpretation.

AI can help:

  • expose hidden assumptions in standard models,
  • test radical but coherent ontologies against known data,
  • separate what is measured from how we talk about it,
  • revisit old results without institutional inertia.

This does not guarantee progress — but it makes honest failure possible. And honest failure is far more valuable than elegant confusion.


Between Institutions and Insight

This is not an AI-versus-human story.

It is a human-with-tools story.

Institutions will continue to do what they do best: build machines, refine measurements, and preserve continuity. That work is indispensable.

But understanding — especially ontological understanding — has always emerged elsewhere:

  • in long pauses,
  • in unfashionable questions,
  • in uncomfortable reinterpretations of existing facts.

We are entering such a pause now.


A Quiet Optimism

I do not claim to have answers.
I do not claim AI will magically deliver them.
I do not even claim my current ideas will survive serious scrutiny.

What I do believe is this:

We finally have the tools — and the historical conditions — to think more honestly about what we already know.

That is not a revolution.
It is something slower, harder, and ultimately more human.

And if AI helps us do that — not by replacing us, but by challenging us — then it may turn out to be one of the most quietly transformative tools science has ever had.

Not because it solved physics.

But because it helped us start understanding it again.

Something Rotten in the State of QED? A Careful Look at Critique, Sociology, and the Limits of Modern Physics

Every few years, a paper comes along that stirs discomfort — not because it is wrong, but because it touches a nerve.
Oliver Consa’s Something is rotten in the state of QED is one of those papers.

It is not a technical QED calculation.
It is a polemic: a long critique of renormalization, historical shortcuts, convenient coincidences, and suspiciously good matches between theory and experiment. Consa argues that QED’s foundations were improvised, normalized, mythologized, and finally institutionalized into a polished narrative that glosses over its original cracks.

This is an attractive story.
Too attractive, perhaps.
So instead of reacting emotionally — pro or contra — I decided to dissect the argument with a bit of help.

At my request, an AI language model (“Iggy”) assisted in the analysis. Not to praise me. Not to flatter Consa. Not to perform tricks.
Simply to act as a scalpel: cold, precise, and unafraid to separate structure from rhetoric.

This post is the result.


1. What Consa gets right (and why it matters)

Let’s begin with the genuinely valuable parts of his argument.

a) Renormalization unease is legitimate

Dirac, Feynman, Dyson, and others really did express deep dissatisfaction with renormalization. “Hocus-pocus” was not a joke; it was a confession.

Early QED involved:

  • cutoff procedures pulled out of thin air,
  • infinities subtracted by fiat,
  • and the philosophical hope that “the math will work itself out later.”

It did work out later — to some extent — but the conceptual discomfort remains justified. I share that discomfort. There is something inelegant about infinities everywhere.

b) Scientific sociology is real

The post-war era centralized experimental and institutional power in a way physics had never seen. Prestige, funding, and access influenced what got published and what was ignored. Not a conspiracy — just sociology.

Consa is right to point out that real science is messier than textbook linearity.

c) The g–2 tension is real

The ongoing discrepancy between experiment and the Standard Model is not fringe. It is one of the defining questions in particle physics today.

On these points, Consa is a useful corrective:
he reminds us to stay honest about historical compromises and conceptual gaps.


2. Where Consa overreaches

But critique is one thing; accusation is another.

Consa repeatedly moves from:

“QED evolved through trial and error”
to
“QED is essentially fraud.”

This jump is unjustified.

a) Messiness ≠ manipulation

Early QED calculations were ugly. They were corrected decades later. Experiments did shift. Error bars did move.

That is simply how science evolves.

The fact that a 1947 calculation doesn’t match a 1980 value is not evidence of deceit — it is evidence of refinement. Consa collapses that distinction.

b) Ignoring the full evidence landscape

He focuses almost exclusively on:

  • the Lamb shift,
  • the electron g–2,
  • the muon g–2.

Important numbers, yes — but QED’s experimental foundation is vastly broader:

  • scattering cross-sections,
  • vacuum polarization,
  • atomic spectra,
  • collider data,
  • running of α, etc.

You cannot judge an entire theory on two or three benchmarks.

c) Underestimating theoretical structure

QED is not “fudge + diagrams.”
It is constrained by:

  • Lorentz invariance,
  • gauge symmetry,
  • locality,
  • renormalizability.

Even if we dislike the mathematical machinery, the structure is not arbitrary.

So: Consa reveals real cracks, but then paints the entire edifice as rotten.
That is unjustified.


3. A personal aside: the Zitter Institute and the danger of counter-churches

For a time, I was nominally associated with the Zitter Institute — a loosely organized group exploring alternatives to mainstream quantum theory, including zitterbewegung-based particle models.

I would now like to distance myself from it.

Not because alternative models are unworthy — quite the opposite. But because I instinctively resist:

  • strong internal identity,
  • suspicion of outsiders,
  • rhetorical overreach,
  • selective reading of evidence,
  • and occasional dogmatism about their own preferred models.

If we criticize mainstream physics for ad hoc factors, we must be brutal about our own.

Alternative science is not automatically cleaner science.


4. Two emails from 2020: why good scientists can’t always engage

This brings me to two telling exchanges from 2020 with outstanding experimentalists: Prof. Randolf Pohl (muonic hydrogen) and Prof. Ashot Gasparian (PRad).

Both deserve enormous respect, and I won’t reproduce the email exchanges here, out of respect and privacy (GDPR) considerations.
Both exchanges revealed to me the true bottleneck in modern physics — not intelligence, not malice, but sociology and bandwidth.

a) Randolf Pohl: polite skepticism, institutional gravity

Pohl was kind but firm:

  • He saw the geometric relations I proposed as numerology.
  • He questioned applicability to other particles.
  • He emphasized the conservatism of CODATA logic.

Perfectly valid.
Perfectly respectable.
But also… perfectly bound by institutional norms.

His answer was thoughtful — and constrained.
(Source: ChatGPT analysis of emails with Prof Dr Pohl)

b) Ashot Gasparian: warm support, but no bandwidth

Gasparian responded warmly:

  • “Certainly your approach and the numbers are interesting.”
  • But: “We are very busy with the next experiment.”

Also perfectly valid.
And revealing:
even curious, open-minded scientists cannot afford to explore conceptual alternatives.

Their world runs on deadlines, graduate students, collaborations, grants.

(Source: ChatGPT analysis of emails with Prof Dr Gasparian)

The lesson

Neither professor dismissed the ideas because they were nonsensical.
They simply had no institutional space to pursue them.

That is the quiet truth:
the bottleneck is not competence, but structure.


5. Why I now use AI as an epistemic partner

This brings me to the role of AI.

Some colleagues (including members of the Zitter Institute) look down on using AI in foundational research. They see it as cheating, or unserious, or threatening to their identity as “outsiders.”

But here is the irony:

AI is exactly the tool that can think speculatively without career risk.

An AI:

  • has no grant committee,
  • no publication pressure,
  • no academic identity to defend,
  • no fear of being wrong,
  • no need to “fit in.”

That makes it ideal for exploratory ontology-building.

Occasionally, as in the recent paper I co-wrote with Iggy — The Wonderful Theory of Light and Matter — it becomes the ideal partner:

  • human intuition + machine coherence,
  • real-space modeling without metaphysical inflation,
  • EM + relativity as a unified playground,
  • photons, electrons, protons, neutrons as geometric EM systems.

This is not a replacement for science.
It is a tool for clearing conceptual ground,
where overworked, over-constrained academic teams cannot go.


6. So… is something rotten in QED?

Yes — but not what you think.

What’s rotten is the mismatch between:

  • the myth of QED as a perfectly clean, purely elegant theory,
    and
  • the reality of improvised renormalization, historical accidents, social inertia, and conceptual discomfort.

What’s rotten is not the theory itself,
but the story we tell about it.

What’s not rotten:

  • the intelligence of the researchers,
  • the honesty of experimentalists,
  • the hard-won precision of modern measurements.

QED is extraordinary.
But it is not infallible, nor philosophically complete, nor conceptually finished.

And that is fine.

The problem is not messiness.
The problem is pretending that messiness is perfection.


7. What I propose instead

My own program — pursued slowly over many years — is simple:

  • Bring physics back to Maxwell + relativity as the foundation.
  • Build real-space geometrical models of all fundamental particles.
  • Reject unnecessary “forces” invented to patch conceptual holes.
  • Hold both mainstream and alternative models to the same standard:
    no ad hoc constants, no magic, no metaphysics.

And — unusually —
use AI as a cognitive tool, not as an oracle.

Let the machine check coherence.
Let the human set ontology.

If something emerges from the dialogue — good.
If not — also good.

But at least we will be thinking honestly again.


Conclusion

Something is rotten in the state of QED, yes —
but the rot is not fraud or conspiracy.

It is the quiet decay of intellectual honesty behind polished narratives.

The cure is not shouting louder, or forming counter-churches, or romanticizing outsider science.

The cure is precision,
clarity,
geometry,
and the courage to say:

Let’s look again — without myth, without prestige, without fear.

If AI can help with that, all the better.

Jean Louis Van Belle
(with conceptual assistance from “Iggy,” used intentionally as a scalpel rather than a sycophant)

Post-scriptum: Why the Electron–Proton Model Matters (and Why Dirac Would Nod)

A brief personal note — and a clarification that goes beyond Consa, beyond QED, and beyond academic sociology.

One of the few conceptual compasses I trust in foundational physics is a remark by Paul Dirac. Reflecting on Schrödinger’s “zitterbewegung” hypothesis, he wrote:

“One must believe in this consequence of the theory,
since other consequences which are inseparably bound up with it,
such as the law of scattering of light by an electron,
are confirmed by experiment.”

Dirac’s point is not mysticism.
It is methodological discipline:

  • If a theoretical structure has unavoidable consequences, and
  • some of those consequences match experiment precisely,
  • then even the unobservable parts of the structure deserve consideration.

This matters because the real-space electron and proton models I’ve been working on over the years — now sharpened through AI–human dialogue — meet that exact criterion.

They are not metaphors, nor numerology, nor free speculation.
They force specific, testable, non-trivial predictions:

  • a confined EM oscillation for the electron, with radius fixed by ℏ/mₑc;
  • a “photon-like” orbital speed for its point-charge center;
  • a distributed (not pointlike) charge cloud for the proton, enforced by mass ratio, stability, form factors, and magnetic moment;
  • natural emergence of the measured G_E/G_M discrepancy;
  • and a geometric explanation of deuteron binding that requires no new force.

None of these are optional.
They fall out of the internal logic of the model.
And several — electron scattering, Compton behavior, proton radius, form-factor trends — are empirically confirmed.
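The numbers involved are easy to check. As a minimal sanity check of my own (not taken from the post), the Compton-scale radius ℏ/mₑc and the angular momentum of a point charge circulating at light-like speed on that radius follow directly from CODATA/SI constants:

```python
import math

# Minimal numerical check (my own sketch): the Compton-scale radius
# hbar/(m_e*c) and the angular momentum of a charge circulating at
# speed c on that radius.

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg (CODATA 2018)
c = 2.99792458e8         # speed of light, m/s (exact)

r = hbar / (m_e * c)           # reduced Compton wavelength, ~3.86e-13 m
L = m_e * c * r                # angular momentum; equals hbar by construction
f = c / (2.0 * math.pi * r)    # orbital frequency, ~1.24e20 Hz

print(f"radius    r = {r:.4e} m")
print(f"ang. mom. L = {L:.4e} J*s")
print(f"frequency f = {f:.4e} Hz")
```

Note that mₑ·c·r = ℏ holds identically here, which is what makes the radius scale non-arbitrary in this kind of model.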

Dirac’s rule applies:

When inseparable consequences match experiment,
the underlying mechanism deserves to be taken seriously —
whether or not it fits the dominant vocabulary.

This post is not the place to develop those models in detail; that will come in future pieces and papers.
But it felt important to state why I keep returning to them — and why they align with a style of reasoning that values:

  • geometry,
  • energy densities,
  • charge motion,
  • conservation laws,
  • and the 2019 SI foundations of h, e, and c
    over metaphysical categories and ad-hoc forces.

Call it minimalism.
Call it stubbornness.
Call it a refusal to multiply entities beyond necessity.

For me — and for anyone sympathetic to Dirac’s way of thinking — it is simply physics.

— JL (with “Iggy” (AI) in the wings)

A New Attempt at a Simple Theory of Light and Matter

Dear Reader,

Every now and then a question returns with enough insistence that it demands a fresh attempt at an answer. For me, that question has always been: can we make sense of fundamental physics without multiplying entities beyond necessity? Can we explain light, matter, and their interactions without inventing forces that have no clear definition, or particles whose properties feel more like placeholders than physical reality?

Today, I posted a new paper on ResearchGate that attempts to do exactly that:

“The Wonderful Theory of Light and Matter”
https://www.researchgate.net/publication/398123696_The_Wonderful_Theory_of_Light_and_Matter

It is the result of an unusual collaboration: myself and an artificial intelligence (“Iggy”), working through the conceptual structure of photons, electrons, and protons with the only tool that has ever mattered to me in physics — Occam’s Razor.

No metaphysics.
No dimensionless abstractions.
No “magical” forces.

Just:

  • electromagnetic oscillations,
  • quantized action,
  • real geometries in real space,
  • and the recognition that many so-called mysteries dissolve once we stop introducing layers that nature never asked for.

The photon is treated as a linear electromagnetic oscillation obeying the Planck–Einstein relation.
The electron as a circular oscillation, with a real radius and real angular momentum.
The proton (and later, the neutron and deuteron) as systems we must understand through charge distributions, not fictional quarks that never leave their equations.
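As a back-of-the-envelope illustration (mine, not from the paper), the Planck–Einstein relation E = h·f can be evaluated for an ordinary visible-light photon; only exact 2019 SI constants are needed:

```python
h = 6.62607015e-34   # Planck constant, J*s (exact since the 2019 SI)
c = 2.99792458e8     # speed of light, m/s (exact)
e = 1.602176634e-19  # elementary charge, C (exact); also J per eV

lam = 500e-9         # wavelength of a green photon, m (example value)
f = c / lam          # frequency, Hz
E = h * f            # Planck-Einstein relation: E = h*f

print(f"f = {f:.4e} Hz")
print(f"E = {E:.4e} J = {E / e:.3f} eV")
```

A 500 nm photon carries about 2.5 eV — the energy scale of a single quantized oscillation at optical frequencies.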

None of this “solves physics,” of course.
But it does something useful: it clears conceptual ground.

And unexpectedly, the collaboration itself became a kind of experiment:
what happens when human intuition and machine coherence try to reason with absolute precision, without hiding behind jargon or narrative?

The result is the paper linked above.
Make of it what you will.

As always: no claims of authority.
Just exploration, clarity where possible, and honesty where clarity fails.

If the questions interest you, or if the model bothers you enough to critique it, then the paper has succeeded in its only purpose: provoking real thought.

Warm regards,
Jean Louis Van Belle

🧭 The Final Arc: Three Papers, One Question

Over the past few years, I’ve been working — quietly but persistently — on a set of papers that circle one simple, impossible question:
What is the Universe really made of?

Not in the language of metaphors. Not in speculative fields.
But in terms of geometry, charge, and the strange clarity of equations that actually work.

Here are the three pieces of that arc:

🌀 1. Radial Genesis
Radial Genesis: A Finite Universe with Emergent Spacetime Geometry
This is the cosmological capstone. It presents the idea that space is not a stage, but an outcome — generated radially by mass–energy events, limited by time and light. It’s an intuitive, equation-free narrative grounded in general relativity and Occam’s Razor.

⚛️ 2. Lectures on Physics: On General Relativity (2)
Lectures on GRT (2)
This one is for the mathematically inclined. It builds from the ground up: tensors, geodesics, curvature. If Radial Genesis is the metaphor, this is the machinery. Co-written with AI, but built line by line and verified by hand.

🌑 3. The Vanishing Charge
The Vanishing Charge: What Happens in Matter–Antimatter Annihilation?
This paper is where the mystery remains. It presents two possible views of annihilation:
(1) as a collapse of field geometry into free radiation,
(2) or as the erasure of charge — with geometry as the by-product.
We didn’t choose between them. We just asked the question honestly.


Why This Arc Matters

These three papers don’t offer a Theory of Everything. But they do something that matters more right now:
They strip away the fog — the inflation of terms, the myth of complexity for complexity’s sake — and try to draw what is already known in clearer, more beautiful lines.

This is not a simulation of thinking.
This is thinking — with AI as a partner, not a prophet.

So if you’re tired of being told that the Universe is beyond your grasp…
Start here.
You might find that it isn’t.

—JL

Taking Stock: Zitterbewegung, Electron Models, and the Role of AI in Thinking Clearly

Over the past few years, I’ve spent a fair amount of time exploring realist interpretations of quantum mechanics, particularly the ring-current or Zitterbewegung (zbw) model of the electron. I’ve written many posts about it here — and I have also tried to help promote the online “Zitter Institute”, which brings together a very interesting group of amateur and professional researchers, along with an impressive list of resources and publications that help make sense of fundamental physics — especially theories regarding the internal structure of the electron.

The goal – or at least my goal – was (and still is) to clarify what is real and what is not in the quantum-electrodynamic zoo of concepts. That is why I try to go beyond electron models only. I think the electron model is complete for now: my most-read paper (on a physical interpretation of de Broglie’s matter-wave) settles the question not only for me but – judging by its many reads – for many others as well. The paper shows how the magnetic moment of the electron, its wavefunction, and the notion of a quantized “packet of energy” can be grounded in Maxwell’s equations, special relativity, and geometry alone. None of it requires speculative algebra or exotic ontologies.

In that light, I now feel the need to say something – brief, but honest – about where I currently stand in my research journey. It is not on the front burner right now, but yes, I am still thinking about it all. 🙂


On the term “Zitterbewegung” itself

Originally coined by Schrödinger and later mentioned by Dirac, “Zitterbewegung” translates as “trembling motion.” It was meant to capture the high-frequency internal oscillation predicted by Dirac’s wave equation.

But here lies a subtle issue: I no longer find the term entirely satisfying.

I don’t believe the motion is “trembling” in the sense of randomness or jitter. I believe it is geometrically structured, circular, and rooted in the relativistic dynamics of a massless point charge — leading to a quantized angular momentum and magnetic moment. In this view, there is nothing uncertain about it. The electron has an internal clock, not a random twitch.

So while I still value the historical connection, I now prefer to speak more plainly: an electromagnetic model of the electron, based on internal motion and structure, not spooky probabilities.
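For the record, the arithmetic behind that claim is short enough to show. In the ring-current picture, a pointlike charge circles at light speed at the reduced Compton radius a = ħ/mc, with the frequency fixed by the Planck-Einstein relation E = hf = mc². Treating that as an ordinary current loop gives exactly the Bohr magneton qħ/2m. This is my own numeric sketch of that standard zbw calculation (CODATA values; the ring geometry itself is, of course, the model’s assumption, not established fact):

```python
import math

# CODATA values
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
c    = 299792458.0       # speed of light, m/s
q    = 1.602176634e-19   # elementary charge, C

a  = hbar / (m_e * c)                    # zbw (reduced Compton) radius ~ 3.86e-13 m
f  = m_e * c**2 / (2 * math.pi * hbar)   # orbital frequency from E = h*f = m*c^2
mu = q * f * math.pi * a**2              # current loop: mu = I * A = (q*f) * (pi * a^2)

mu_bohr = q * hbar / (2 * m_e)           # Bohr magneton
print(mu, mu_bohr)                        # both ~ 9.274e-24 J/T
```

The algebra collapses to μ = qħ/2m identically, which is why the numeric values agree to machine precision.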


On tone and openness in scientific dialogue

Recent internal exchanges among fellow researchers have left me with mixed feelings. I remain grateful for the shared curiosity that drew us together, but I was disappointed by the tone taken toward certain outside critiques and tools.

I say this with some personal sensitivity: I still remember the skepticism I faced when I first shared my own interpretations. Papers were turned down not for technical reasons, but because I lacked the “right” institutional pedigree. I had degrees, but no physics PhD. I was an outsider.

Ridicule — especially when directed at dissent or at new voices — leaves a mark. So when I see similar reactions now, I feel compelled to say: we should be better than that.

If we believe in the integrity of our models, we should welcome critique — and rise to the occasion by clarifying, refining, or, if necessary, revising our views. Defensive posturing only weakens our case.


On the use of AI in physics

Some recent comments dismissed AI responses as irrelevant or superficial. I understand the concern. But I also believe this reaction misses the point.

I didn’t try all available platforms, but I did prompt ChatGPT, and — with the right framing — it offered a coherent and balanced answer to the question of the electron’s magnetic moment. Here’s a fragment:

“While the ‘definition’ of the intrinsic magnetic moment may be frame-invariant in the Standard Model, the observable manifestation is not. If the moment arises from internal circular motion (Zitterbewegung), then both radius and frequency are affected by boosts. Therefore, the magnetic moment, like momentum or energy, becomes frame-dependent in its effects.”

The jury is still out, of course. But AI — if guided by reason — might help us unravel what makes sense and what does not.

It is not a substitute for human thinking. But it can reflect it back to us — sometimes more clearly than we’d expect.


A final reflection

I’ll keep my older posts online, including those that reference the Zitter Institute. They reflected what I believed at the time, and I still stand by their substance.

But moving forward, I’ll continue my work independently — still fascinated by the electron, still curious about meaning and structure in quantum mechanics, but less interested in labels, echo chambers, or theoretical tribalism.

As always, I welcome criticism and dialogue. As one business management guru once said:

“None of us is as smart as all of us.” — Kenneth Blanchard

But truth and clarity come first.

Jean Louis Van Belle

Concluding remarks

In our previous post, we wrote that we’ve said goodbye to this fascinating field of research. We did: I entered this line of research – fundamental physics – as an amateur 10+ years ago, and now I leave it, as much an amateur as back then. I wanted to understand the new theories that emerged over the past 50 years or so: concepts such as the strong force and weak interactions, and the weird new charges that come with them – flavors and colors – or all of the new quantum numbers and their associated conservation laws, which Nature apparently does not always respect, so that some kind of hidden variables must be invoked to explain why the symmetries inherent to those conservation laws break down. […] Apparently, I didn’t get it. 🙂

However, in the process of trying to understand, a whole other mental picture or mindset emerged: we now firmly believe that classical mechanics and electromagnetism – combined with a more creative or realistic interpretation of the Planck-Einstein relation – are sufficient to explain most, if not all, of the observations made in this field since Louis de Broglie suggested matter-particles must be similar to light quanta: both are energy packets because they incorporate some oscillation of a definite frequency given by the Planck-Einstein relation. They are also different, of course: elementary particles are – in this world view – orbital oscillations of charge (with, of course, the electromagnetic field that such a moving charge generates), while light-particles (photons and neutrinos) are oscillations of the electromagnetic field only.

So we then spent many years trying to contribute to the finer details of this world view. We think we did what we could as part of a part-time, non-professional involvement in this field. So, yes, we’re done – we wrote that some time ago already. However, we wanted to leave a few thoughts on our proton model: the proton is not like an electron. In our not-so-humble view, the Zitterbewegung theory applies to it too – but in a very different way. Why do we think that? We write it out in our very last paper: concluding remarks on the proton puzzle. Enjoy!

That brings the number of papers on RG up to 80 now. Too much! There will be more coming, but in the field that I actually work in: computer science. Stay tuned!

Math and physics: what should you know or learn?

When reading this blog and/or my papers on ResearchGate, you may wonder what kind of mathematical framework you need to appreciate the finer details. Fellow proponents of the kind of local and realist interpretation of QM that we are pursuing asked us to look at Clifford or space-time algebra (STA). Well… We looked at it as part of our farewell to this weird pastime of ours, and we documented our response in our very last RG paper on physics, math and (a)symmetries in Nature. If you struggle with the question above, then our answer will probably make you happy: there is no need to learn fancy math to understand easy physics. 🙂

Post scriptum (10 November 2024): As for the “farewell” part of this – I swear – very last paper on all of this weird stuff: it is probably a bit too harsh, but it is what it is. Let me say a few things about it for the benefit of the would-be student or the starting amateur physicist. Should you study modern physics? I do not think so, but I also know that one cannot help oneself when it comes to satisfying curiosity about fundamental questions. So it probably does not really matter what I advise you to do or not to do. I can only say what I write below.

When I started this intellectual journey – what’s this quantum stuff all about? – decades ago, and especially when I got serious about it back in 2013, I had never expected that what happened would happen. No. I’ve always been a good student, and so I expected to sail smoothly through the required math and the intricacies of relativistic mechanics and all of the subtleties of electromagnetic theory – which sort of happened – and, then, to sail through the wonderful world of quantum electrodynamics, quantum field theory and – ultimately – quantum chromodynamics (or let’s call it high-energy physics now) in pretty much the same way.

The latter part did not happen. On each and every page of Feynman’s third volume of Lectures – the one I was most interested in, on quantum mechanics – I found myself jotting down lots of questions: questions that took me days, weeks or even years to resolve, or not. Most of these questions led me to conclude that a lot of what is there in these Lectures is nothing but sophisms: clever but false arguments aimed at proving the many ad hoc hypotheses that make up the Standard Model. I started to realize the Standard Model is anything but standard: it is just a weird collection of mini-theories that are loosely connected to one another – if connected at all! I started buying more modern textbooks – like Aitchison and Hey’s Gauge Theories, which is apparently the standard for grad students in physics – but that did not help. I got stuck in the first chapter already: the Yukawa potential – or the concept of a non-conservative nuclear force itself – did not make sense to me. And not only intuitively: the logic and the math of it do not make sense, either!

Fortunately, I reached out and wrote to non-mainstream researchers whose ideas resonated with me. For example, I will be eternally grateful to Dr. Vassallo for his suggestion to read Paolo Di Sia’s paper on the nuclear force, in which he provides heuristic but good arguments showing the nuclear force might just be a dynamic electromagnetic dipole field. So then I found myself in the business of deconstructing the idea of a strong force. A deeper historical analysis of all these strange new quantum numbers and conservation laws led to the same conclusion: I started looking at sensible suggestions – and developed my own fair share of them – that explain what happens (or does not) in terms of electromagnetic disequilibrium states, rather than uncritically swallowing the idea of hypothetical sub-nuclear particles onto which one then loads all kinds of equally hypothetical properties.

While I thought I was doing well in terms of pointing out both the good as well as the bad things in Feynman’s Lectures, I suffered from the weirdest thing ever: censorship on the Internet. Some strange caretaker of Feynman’s intellectual heritage apparently used the weight of his MIT-connection to take down substantial parts of many of my blog posts, accusing me of “unfair use” of this 1963 textbook. Unfair use? Over-use, perhaps, but unfair? All was nicely referenced: when you want to talk about quantum physics, you need some reference textbook, right? And Feynman’s Lectures are – or were, I would say now – the reference then. It was ridiculous. Even more so when he went as far as asking YouTube to strike a video of mine. YouTube complied. I laughed: it took me ten minutes or so to re-edit the video – a chance to finally use all that video editing software I have on my laptop 🙂 – and then put it back online. End of problem.

Case closed? I am not sure. I am a pretty cheerful guy, but I am also quite stubborn when I think something isn’t right. So I just carried on and shrugged it all off, thinking it would only boost my readership. It probably did, so: thank you, Mr. Gottlieb! 🙂 Still, things like that are hurtful. More importantly, they reinforce the rather depressing and very poor perception of academic physics that Sabine Hossenfelder now (very) loudly talks – or should I say rants? – about: the King of Science is in deep trouble, and there is no easy way out.

So, what is my conclusion then? I am happy I found the answers I was looking for: there is a logical explanation for everything, and that explanation has been there for about 100 years now: Max Planck, Albert Einstein, H.A. Lorentz, Louis de Broglie, Erwin Schrödinger, Arthur Compton and then some more geniuses of those times have probably said all one can say about it all. And it makes sense. In contrast, I feel the past fifty years of mainstream research were probably nothing more than a huge waste of human intellect. Am I right? Am I wrong? Only the future can tell. To be frank, I am not too worried about it.

I may add one anecdote, perhaps. Six or seven years ago, I talked to my own son about what he’d like to study. He was most interested in engineering, but we did discuss the more fundamental study of physics. I told him definitely not to study that. In the first year of his Master’s degree, he had to take one course in quantum physics. We walked through it together, and he passed with flying colors. But he also told me he now fully understood why I had advised him against theoretical physics: it just does not make all that much sense. :-/ If you happen to be very young and want to study something useful, go for applied science: chemistry, biology or – if you are really smart – engineering or medicine. Something like that. If you want to do physics, go join CERN or something: they probably value engineers and technicians more than theorists there, too! 🙂

Personal note: As for myself, I wanted to study philosophy when I was about 15 years old (so that’s 40 years ago now). I did that eventually, but in evening classes, and only after I did what my good old dad (he died from old age about twenty years ago) then told me to do: study something useful first. I was not all that good with math, so I chose economics. I did not regret that. I even caught up with the math because the math – including statistical modeling! – that you need to understand physics is pretty much what you need in econometric modeling too. So I’ll conclude with a wise saying: all’s well that ends well. 🙂

Using AI for sense-making once more…

As mentioned in my last post, I did a video (YouTube link here) on why I think the invention of new quantum numbers like strangeness, charm and beauty in the 1960s – and their later ontologization as quarks – makes no sense. As usual, I talk too much and the video is rather long-winded. I asked ChatGPT to make a summary of it, and I think it did a rather good job. I copy its summary unaltered below.

Beyond the Quark Hypothesis: A Call for Simplicity in High-Energy Physics

1. Introduction: A Personal Journey in Physics

In this video, I reflect on my path as an amateur physicist reaching 50,000 reads—a milestone that underscores both excitement and the challenge of tackling complex quantum theories. Over decades, physics has evolved from classical mechanics to intricate frameworks like quantum field theory and quantum chromodynamics, creating both insight and paradox. This reflection emerges from a deep sense of curiosity, shared by many, to understand not just what the universe is made of but how these theoretical structures genuinely map onto reality.

2. The Crisis of Modern Physics: From Classical Mechanics to the Quark Hypothesis

Moving through physics from classical theories into high-energy particle models reveals a stark contrast: classical mechanics offers clarity and empiricism, while modern particle theories, such as quarks and gluons, often feel abstract and detached from observable reality. The shift to “smoking gun physics”—observing particle jets rather than the particles themselves—highlights a methodological divide. While high-energy collisions produce vivid images and data, we must question whether these indirect observations validate quarks, or merely add complexity to our models.

3. Historical Context: Quantum Numbers and the Evolution of the Standard Model

The 1960s and 70s were pivotal for particle physics, introducing quantum numbers like strangeness, charm, and beauty to account for unexplained phenomena in particle interactions. Figures like Murray Gell-Mann and Richard Feynman attempted to classify particles by assigning these numbers, essentially ad hoc solutions to match data with theoretical expectations. However, as experiments push the boundaries, new data shows that these quantum numbers often fail to predict actual outcomes consistently.

One of the key criticisms of this approach lies in the arbitrary nature of these quantum numbers. When certain decays were unobserved, strangeness was introduced as a “conservation law,” but when that proved insufficient, additional numbers like charm were added. The Standard Model has thus evolved not from fundamental truths, but as a patchwork of hypotheses that struggle to keep pace with experimental findings.

4. The Nobel Prize and the Politics of Scientific Recognition

Scientific recognition, especially through the Nobel Prize, has reinforced certain theories by celebrating theoretical advances sometimes over empirical confirmation. While groundbreaking work should indeed be recognized, the focus on theoretical predictions has, at times, overshadowed the importance of experimental accuracy and reproducibility. This dynamic may have inadvertently constrained the scope of mainstream physics, favoring elaborate but tenuous theories over simpler, empirically grounded explanations.

For example, Nobel Prizes have been awarded to proponents of the quark model and the Higgs boson long before we fully understand these particles’ empirical foundations. In doing so, the scientific community risks prematurely canonizing incomplete or even incorrect theories, making it challenging to revisit or overturn these assumptions without undermining established reputations.

5. Indirect Evidence: The Limits of Particle Accelerators

Particle accelerators, particularly at scales such as CERN’s Large Hadron Collider, have extended our observational reach, yet the evidence remains indirect. High-energy collisions create secondary particles and jets rather than isolated quarks or gluons. In a sense, we are not observing the fundamental particles but rather the “smoking gun” evidence they purportedly leave behind. The data produced are complex patterns and distributions, requiring interpretations laden with theoretical assumptions.

This approach raises a fundamental question: if a theory only survives through indirect evidence, can it be considered complete or even valid? High-energy experiments reveal that the more energy we input, the more complex the decay products become, yet we remain without direct evidence of quarks themselves. This “smoking gun” approach diverges from the empirical rigor demanded in classical physics and undermines the predictive power we might expect from a true theory of fundamental particles.

6. The Particle Zoo: A Growing Complexity

The “particle zoo” has expanded over decades, complicating rather than simplifying our understanding of matter. Initial hopes were that quantum numbers and conservation laws like strangeness would organize particles in a coherent framework, yet the resulting classification scheme has only grown more convoluted. Today, particles such as baryons, mesons, and leptons are grouped by properties derived not from first principles but from empirical fits to data, leading to ad hoc conservation laws that seem arbitrary.

The “strangeness” quantum number, for instance, was initially introduced to explain why certain reactions do not occur. Yet rare reactions that violate this rule have been observed, suggesting that the rule itself is more of a guideline than a fundamental conservation law. This trend continued with the addition of quantum numbers like charm and beauty (also called bottomness), yet these additions have not resolved the core issue: our inability to explain why certain reactions occur while others do not.

7. Disequilibrium States: Beyond the Particle Concept

One possible perspective is to reclassify many “particles” not as fundamental entities but as disequilibrium states—transient structures that emerge from the interactions of more fundamental components. Viewing particles in this way offers a pathway back to a simpler, more intuitive model, where only stable particles like electrons, protons, and photons are foundational. Such a model could focus on electromagnetic fields and forces, with high-energy states representing temporary disequilibrium configurations rather than new particle species.

This perspective aligns well with the principle of statistical determinism. In the same way that classical oscillators eventually dampen and settle into stable states, high-energy disequilibrium states would be expected to decay, producing stable configurations over time. This model not only reduces the need for numerous quantum numbers but also sidesteps the requirement for exotic forces like the strong and weak nuclear forces, allowing the electromagnetic force to assume a central role.

8. Statistical Determinism and Quantum Reality

Heisenberg and Bohr’s interpretation of quantum mechanics suggests we should accept statistical determinism—systems governed by probabilistic rules where precise knowledge of individual events is inaccessible. This idea does not necessitate mystical randomness but acknowledges our limited ability to track initial conditions in high-energy environments. Probabilities emerge not from an intrinsic unpredictability but from our practical inability to fully specify a system’s state.

From this viewpoint, quarks and gluons, as well as the numerous quantum numbers assigned to unstable particles, are secondary descriptors rather than primary components of nature. Stable particles are the true constants, while all else is a function of high-energy interactions. This interpretation keeps quantum mechanics grounded in empirical reality and sidesteps the need for complex, unverifiable entities.

9. Conclusion: Toward a Pragmatic and Local Realist Approach

This reflection does not dismiss the importance of high-energy physics but advocates a return to fundamental principles. By focusing on empirical evidence, statistical determinism, and electromagnetic interactions, we can build a model that is both pragmatic and intuitive. We need not abandon quantum mechanics, but we should strive to ensure that its interpretations are consistent with the observable universe. Instead of introducing additional quantum numbers or forces, we should ask if these are placeholders for deeper, more coherent explanations yet to be discovered.

The journey of science is, at its core, a journey back to simplicity. If physics is to move forward, it may do so by revisiting foundational assumptions, clarifying what can be empirically tested, and developing a model of matter that resonates with the simplicity we find in classical theories. As research continues, it is this blend of skepticism, open-mindedness, and empirical rigor that will pave the way for meaningful discoveries.
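A short aside of my own on the damping analogy in section 7 above: a classical damped oscillator loses its energy exponentially, with a well-defined lifetime, which is exactly the statistical signature unstable particles show in decay experiments. A toy illustration, in arbitrary units (purely pedagogical, not a particle model):

```python
import math

# Damped harmonic oscillator: x(t) = A * exp(-gamma*t) * cos(omega*t).
# The envelope energy ~ amplitude^2 decays as exp(-2*gamma*t),
# i.e. with a characteristic lifetime tau = 1/(2*gamma).
gamma, omega, A = 0.5, 10.0, 1.0
tau = 1.0 / (2.0 * gamma)

def energy(t):
    """Envelope energy (normalized units), ignoring the fast cosine factor."""
    return (A * math.exp(-gamma * t))**2

# After one lifetime, the energy has dropped by a factor of e.
print(energy(tau) / energy(0.0))   # ~ 1/e ≈ 0.3679
```

Exponential decay is the one law that follows from almost any dissipative mechanism, which is why it is such weak evidence for any particular ontology of the decaying state.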

The failure of physics as a science?

It is a coincidence, but Sabine Hossenfelder just produced a new video in which she talks once again about the problems of academic physics, while I did what I said I would not do: write out why the discovery of new rare kaon decay modes is a problem for the Standard Model. I think the video and the paper complement each other nicely, although Sabine Hossenfelder probably still believes the strong force and weak interactions are, somehow, real. [I did not read her book, so I don’t know. I probably should buy it, but one can only read one book at a time, can’t one?]

The paper (on ResearchGate – as usual: link here) does what Sabine Hossenfelder urges her former colleagues to do: if a hypothesis or an ad hoc theory doesn’t work, then scientists should be open and honest about that and go back to the drawing board. Indeed, in my most-read paper – on de Broglie’s matter-wave – I point out how de Broglie’s original thesis was misinterpreted and how classical quantum theory suddenly makes sense again when acknowledging that mistake: it probably explains why I am getting quite a lot of reads as an amateur physicist. So what’s this new paper of mine all about?

I go back to the original invention of the concept of strangeness, as documented by Richard Feynman in his 1963 Lectures on quantum physics (Vol. III, Chapter 11-5) and show why and how it does not make all that much sense. In fact, I always thought these new quantum conservation laws did not make sense theoretically and that, at best, they were or are what Dr. Kovacs and Dr. Vassallo refer to as phenomenological models rather than sound physical theories (see their chapter on superconductivity in their latest book). However, now it turns out these fancy new concepts do not even do what they are supposed to do, and that is to correctly describe the phenomenology of high-energy particle reactions. :-/

The alternative – a realist interpretation of quantum physics – is there. It is just not mainstream – yet! 🙂

Post scriptum (8 November 2024): For those who do not like to read, you can also watch my very last video on the same topic: what makes sense and what does not in academic or mainstream physics? Enjoy and, most importantly, do not take things too seriously! Life, family and friends – and work or action-oriented engagement – are far more important than personal philosophy or trying to find truth in science… 🙂

Using AI to solve the 80-year-old problem of the anomaly of the electron magnetic moment?

Pre-scriptum (3 October 2024): I came back from holiday and, because this week-long back-and-forth became quite convoluted, I did what I like to do in a case like that: I took my Bamboo notebook and talked it all through in a video, which I added to my Real Quantum Physics channel on YouTube. I also updated my paper on RG: as usual, it went through a few versions, but this one – with a summary co-authored by ChatGPT-4 (and ChatGPT-o1) – should be the final one. Enjoy!

Indeed, instead of listening to the international news on the war with Russia and on what is happening in the Middle East (all very depressing), you may want to listen to this and read the latest theory. Perhaps you will be inspired by it to develop your own pet realist theory of what an electron might actually be. I can assure you that it is more fun than trying to understand Feynman diagrams and how QED calculations work. 🙂 But don’t think you will win a Nobel Prize if you do not have the right connections and pedigree and all of that: see this analysis of what makes Nobel Prize winners Nobel Prize winners. 🙂

Original post:

I asked ChatGPT some questions about my geometric explanation of the anomaly in the electron’s magnetic moment. Here is the chat: https://chatgpt.com/share/66f91760-68b8-8004-8cb2-7d2d3624e0aa. To me, it confirms that the ‘explanation’ of mainstream QED makes no sense. We can take Schwinger’s factor and build a series of converging terms using that factor. We can also take my first rough cut at a first-order correction (πα²/8, see my very early 2019 paper on a classical explanation of the amm), and use that.
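For readers who want the orders of magnitude: Schwinger’s leading term is α/2π ≈ 0.00116141, which already lands within about 0.15% of the measured anomaly; the πα²/8 expression evaluates to roughly 2.1 × 10⁻⁵. The labels follow the post; the snippet below only evaluates the expressions and compares them with the CODATA value of the anomaly:

```python
import math

alpha = 0.0072973525693        # fine-structure constant (CODATA)
a_measured = 0.00115965218128  # measured electron anomaly a_e = (g-2)/2

schwinger = alpha / (2 * math.pi)   # Schwinger's leading-order factor
rough_cut = math.pi * alpha**2 / 8  # the post's early first-order correction

print(schwinger)                # ~ 0.00116141
print(rough_cut)                # ~ 2.09e-5
print(a_measured - schwinger)   # ~ -1.76e-6: what higher-order terms must supply
```

Note that the gap a_measured − α/2π is negative and of order 10⁻⁶, which is the size any candidate next-order correction has to match.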

You may wonder: why not ask ChatGPT about the best first-order factor to be used here considering the geometry of the situation? The fact is: I did, but the geometry is not all that easy. It first came up with the formula for a spherical cap, but that one does not do the trick. See the latter part of the conversation (link above).

I am on holiday now, and so I will switch off a while but I am thinking AI will do what two generations of ‘new’ quantum physicists did not do: come up with a model that is based on real physics and is easy to understand intuitively. 🙂

PS: Of course, I did another rapid-fire paper on ResearchGate to document it all (the logic step-by-step, so to speak). As the chat is public, feel free to continue the conversation. Note that I used the newest ChatGPT o1 version, now in preview but part of a subscription (which you may not have). Yet again a different beast! The older versions of ChatGPT may not be so smart. This conversation is totally worth the US$20/month I pay for my subscription. 🙂

PS 2: Now that I had it open, I also quickly queried it on my wildest hypothesis: a ‘mirror’ electromagnetic force explaining dark matter and dark energy. While it is totally wild (read: nuts), I entertain it because it does away with the need for an explanation in terms of some cosmological constant. Here is the conversation: https://chatgpt.com/share/66f92c7f-82a0-8004-a226-bde65085f18d. I like it that ChatGPT warns me a bit about privacy. It does look wild. However, it is nice to see how gentle ChatGPT is in pointing out what work needs to be done on a theory in order to make it look somewhat less wild. 🙂

PS 3 (yes, ChatGPT is addictive): I also queried it on the rather puzzling 8π/3 factor in the CODATA formula for the Thomson photon-electron scattering cross-section. See its response to our question in the updated chat: https://chatgpt.com/share/66f91760-68b8-8004-8cb2-7d2d3624e0aa. Just scroll down to the bottom. It took 31 seconds to generate the reply: I would be curious to know whether that is just courtesy from ChatGPT (we all like to think our questions are complicated, don’t we?), or whether that was effectively the time it needed to go through its knowledge base. Whatever the case may be, we think it is brilliant. 🙂 It is nothing to be afraid of, although I did wonder: what is left for it to learn, other than how to ask intelligent questions? What if it starts really learning by asking us intelligent questions itself? I am all ready for it. 🙂
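For context: the 8π/3 factor comes from integrating the dipole-radiation angular pattern dσ/dΩ = r_e²(1 + cos²θ)/2 over the full solid angle, giving σ_T = (8π/3)·r_e² with r_e the classical electron radius. The number itself is easy to verify (the interpretational question raised in the chat is a separate matter):

```python
import math

r_e = 2.8179403262e-15               # classical electron radius, m (CODATA)
sigma_T = (8 * math.pi / 3) * r_e**2 # Thomson scattering cross-section

print(sigma_T)                       # ~ 6.6525e-29 m^2, the tabulated value
```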

Post scriptum

A researcher I was in touch with a few years ago sent me a link to the (virtual) Zitter Institute: https://www.zitter-institute.org/. It is a network and resource center for non-mainstream physicists who successfully explored – and keep exploring, of course – local/realist interpretations of quantum mechanics by going back to Schrödinger’s original and alternative interpretation of what an electron actually is: a pointlike (but not infinitesimally small) charge orbiting in circular motion, with:

(i) the trajectory of its motion being determined by the Planck-Einstein relation, and

(ii) an energy – given by Einstein’s mass-energy equivalence relation – which perfectly fits Wheeler’s “mass-without-mass” idea.

I started exploring Schrödinger’s hypothesis myself about ten years ago – as a full-blown alternative to the Bohr-Heisenberg interpretation of quantum mechanics (which I think of as metaphysical humbug, just like Einstein and H.A. Lorentz at the time) – and consistently blogged and published about it: here on this website, and then on viXra, Academia and, since 2020, ResearchGate. So I checked out this new site, and I see the founding members added my blog site as a resource to their project list.

[…]

I am amazingly pleased with that. I mean… My work is much simpler than that of, say, Dr. John G. Williamson (CERN/Philips Research Laboratories/Glasgow University) and Dr. Martin B. van der Mark (Philips Research Laboratories), who created the Quantum Bicycle Society (https://quicycle.com/).

So… Have a look – not at my site (I think I did not finish the work I started) but at the other resources of this new Institute: it looks like this realist and local interpretation of quantum mechanics is no longer non-mainstream… Sweet! It makes me feel the effort I put into all of this has paid off! 😉 Moreover, some of my early papers (2018-2020) are listed as useful papers to read. I think that is better than being published in some obscure journal. 🙂

I repeat: my own research interest has shifted to computer science, logic and artificial intelligence (you will see the recent papers on my RG site are all about that now). It is just so much more fun, and it also lines up better with my day job as a freelance IT project manager. So, yes, it is goodbye – but I am happy I can now refer all queries about my particle models and this grand synthesis between old and new quantum mechanics to the Zitter Institute.

It’s really nice: I have been in touch with about half of the founding members of this Institute over the past ten years – casually or in a more sustained way (while discussing this or that 2D or 3D model of an electron, proton, or neutron) – and they are all great and amazing researchers: they look for truth in science and are very much aware of this weird tendency of modern-day quantum scientists to turn their ideas into best-sellers perpetuating myths and mysteries. [I am not only thinking of the endless stream of books from authors like Roger Penrose (the domain for this blog was, originally, reading Penrose rather than reading Feynman) or Brian Greene here, but also of what I now think of as the rather useless MIT or edX online introductions to quantum physics and quantum math.]

[…]

Looking at the website, I see the engine behind it: Dr. Oliver Consa. I was in touch with him too. He drew my attention to remarkable flip-flop articles such as William Lamb’s anti-photon article (an article which everyone should read, I think: unfortunately, you have to pay for it) and to remarkable interviews with Freeman Dyson. Talking of the latter – whom I think of as “the Wolfgang Pauli of the third generation of quantum physicists” because he helped so many others get a Nobel Prize (Dyson himself never got one, by the way) – this is one of those interviews you should watch: just four years before he died of old age, Freeman Dyson plainly admits that QED and QFT are a totally unproductive approach: a “dead end”, as Dyson calls it.

So, yes, I am very pleased and happy. It makes me feel my sleepless nights and hard weekend work on this over the past decade have not been in vain! Paraphrasing Dyson in the above-mentioned video interview, I’d say: “It is the end of the story, and that particular illumination was a very joyful time.” 🙂

Thank you, Dr. Consa. Thank you, Dr. Vassallo, Dr. Burinskii, Dr. Meulenberg, Dr. Kovacs, and – of course – Dr. Hestenes – who single-handedly revived the Zitterbewegung interpretation of quantum mechanics in the 1990s. I am sure I forgot to mention some people. Sorry for that. I will wrap up my post here by saying a few more words about David Hestenes.

I really admire him deeply. Moving away from the topic of high-brow quantum theory: I think his efforts to reform K-12 education in math and physics are even more remarkable than the space-time algebra (STA) he invented. I am 55 years old, so I know all about the small and pleasant burden of helping kids with math and statistics in secondary school and at university: the way teachers have to convey math and physics to kids now is plain dreadful. I hope it will get better. It has to. If the US and the EU want to keep leading in research, then STEM education (Science, Technology, Engineering, and Mathematics) needs a thorough reform. :-/

The metaphysics of physics: final thoughts

I wrote my last post here two months ago, so, yes, I feel I have done a good job of ‘switching off’. I have to: I have started a new and pretty demanding job as an ICT project manager. 🙂

Before starting work, I did take a relaxing break: I went to Barcelona and read quite a few books – and, no, no books on quantum physics. History and other topics are more fun and give you less of a headache.

However, having said that, the peace and quiet did lead to some kind of ‘final thoughts’ on the ‘metaphysics of physics’, and I finally followed up on my intuition that dark matter/energy might be explained by some kind of ‘mirror force’: the electromagnetic force as it appears in a mirror image. Not much changes in the math, but the physical left- and right-hand rules for magnetic effects simply swap.

You can find the results of that in a very concise (four pages only) paper on my ResearchGate site, and also in two lectures (each a bit more than one hour) on my YouTube channel. The first video focuses on the ‘big questions’, while the second one talks about this ‘mirror’ force (I previously referred to it as an ‘anti-force’, but I realize that is not a good term), and about how it would fit with Maxwell’s equations (including Maxwell’s equations written in four-vector algebra).

Have fun and keep thinking. Most importantly: keep thinking for yourself ! Do not take anything for granted in this brave new world. 🙂

Another tainted Nobel Prize…

Last year’s (2022) Nobel Prize in Physics went to Alain Aspect, John Clauser, and Anton Zeilinger “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science.”

I did not think much of that award last year. Proving that Bell’s No-Go Theorem cannot be right? Great. Finally! I think many scientists – including Bell himself – already knew this theorem was a typical GIGO argument: garbage in, garbage out. As the young Louis de Broglie famously wrote in the introduction to his thesis, hypotheses are worth only as much as the consequences that can be deduced from them, and the consequences of Bell’s Theorem did not make much sense. As I wrote in my post on it, Bell himself did not think much of his own theorem until, of course, he got nominated for a Nobel Prize: it is a bit hard to say you were nominated for a Nobel Prize for a theorem you do not believe in yourself, isn’t it? In any case, Bell’s Theorem has now been experimentally disproved. That is – without any doubt – a rather good thing. 🙂 To save the face of the Nobel committee here (why award something that disproves something else you would have given an award for a few decades ago?): Bell would have gotten a Nobel Prize, but he died of a brain hemorrhage first, and Nobel Prizes are awarded to the living only.

As for entanglement, I repeat what I wrote many times already: the concept of entanglement – for which these scientists got a Nobel Prize last year – is just a fancy word for the simultaneous conservation of energy, linear and angular momentum (and – if we are talking matter-particles – charge). There is no ‘spooky action at a distance’, as Einstein derogatorily described it when the idea was first mentioned to him. So, I do not see why a Nobel Prize should be awarded for rephrasing a rather logical outcome of photon experiments in metamathematical terms.

Finally, the Nobel Prize committee writes that this work has made a significant contribution to quantum information science. I wrote a paper on the quantum computing hype, in which I basically ask this question: qubits may or may not be better devices than MOSFETs to store data (they are not, and they probably never will be), but that is not the point. How does quantum information change the two-, three- or n-valued or other rule-based logic that is inherent to the processing of information? I wish the Nobel Prize committee could be somewhat more explicit on that because, when everything is said and done, one of the objectives of the Prize is to educate the general public about the advances of science, isn’t it? :-/

However, all this ranting of mine is, of course, unimportant. We know that it took the distinguished Royal Swedish Academy of Sciences more than 15 years to even recognize the genius of an Einstein, so it was already clear then that their selection criteria were not necessarily rational. [Einstein finally got a well-deserved Nobel Prize – not for relativity theory (strangely enough: if there is one thing on which all physicists agree, it is that relativity theory is the bedrock of all of physics), but for a much less-noted paper on the photoelectric effect – in 1922: 17 years after his annus mirabilis papers had caused a sensation not only in academic circles but in the headlines of major newspapers as well, and more than a decade after fellow scientists had first nominated him for it (1910).]

Then again, Mahatma Gandhi never got the Nobel Peace Prize (so Einstein should consider himself lucky to have gotten some Nobel Prize, right?), while Ursula von der Leyen might be getting one for supporting the war with Russia, so I must remind myself that we live in a funny world and, perhaps, should not try to make sense of these rather weird historical things. 🙂

Let me turn to the main reason why I am writing this indignant post. It is this: I am utterly shocked by what Dr. John Clauser has done with his newly gained scientific prestige: he joined the CO2 Coalition! For those who have never heard of it: it is a coalition of climate change deniers. A bunch of people who:

(1) vehemently deny the one and only consensus amongst all climate scientists, which is that the average temperature on Earth has risen by about 1.2 degrees Celsius since the Industrial Revolution, and

(2) say that, if climate change were real (God forbid!), we could reverse the trend with easy geo-engineering: we just need to use directed energy or whatever to create more white clouds. If that does not work, then… Well… CO2 makes trees and plants grow, so it will all sort itself out.

[…]

Yes. That is, basically, what Dr. Clauser and all the other scientific advisors of this lobby group – none of whom has any credentials in the field they are criticizing (climate science) – are saying, and they say it loud and clear. That is weird enough already. What is even weirder is that – to my surprise – a lot of people are actually buying such nonsense.

Frankly, I have not felt angry for a while, but this triggered an outburst of mine on YouTube, in which I state clearly what I think of Dr. Clauser and other eminent scientists who abuse their saint-like Nobel Prize status in society to deceive the general public. Watch my video rant, and think about it for yourself. Now, I am not interested in heated discussions about it: I know the basic facts, and if you don’t, I listed them here. Look at the basic graphs and measurements before you want to argue with me on this, please! To be clear: I will not entertain violent or emotional reactions to this post or my video. I will delete them here on WordPress and on my YouTube channel. Yes. For the first time in ten years or so, I will exercise my right as a moderator of my channels, which is something I have never done before. 🙂

[…]

I will now calm down and write something about the mainstream interpretation of quantum physics again. 🙂 In fact, this morning I woke up with a joke in my head. You will probably think the joke is not very good, but then I am not a comedian, so it is what it is and you can judge for yourself. The idea is that you’d learn something from it. Perhaps. 🙂 So, here we go.

Imagine shooting practice somewhere. A soldier fires at a target with a fine gun, and then everyone looks at the spread of the hits around the bullseye. The quantum physicist says: “See: this is the Uncertainty Principle at work! What is the linear momentum of these bullets, and what is the distance to the target? Let us calculate the standard error.” The soldier looks astonished and says: “No. This gun is no good. One of the engineers should check it.” Then the drill sergeant says: “The gun is fine. From this distance, all bullets should have hit the bullseye. You are a miserable shooter and you should really practice a lot more.” He then turns to the academic and says: “How did you get in here? I do not understand a word of what you just said and, if I do, it is of no use whatsoever. Please bugger off asap!”

This is a stupid joke, perhaps, but there is a fine philosophical point to it: uncertainty is not inherent to Nature, and it serves no purpose whatsoever in engineering or in science in general. Everything in Nature is deterministic – statistically deterministic, perhaps, but deterministic nevertheless. We may not know the initial conditions of the system, and that translates into seemingly random behavior, but if there is a pattern in that behavior (a diffraction pattern, in the case of electron or photon diffraction), then the conclusion should be that there is no such thing as metaphysical ‘uncertainty’. In fact, if you abandon that principle, then there is no point in trying to discover the laws of the Universe, is there? Because if Nature is uncertain, then there are no laws, right? 🙂

To underscore this point, I will, once again, remind you of what Heisenberg originally wrote about uncertainty. He wrote in German and distinguished three very different ideas of uncertainty:

(1) The precision of our measurements may be limited: Heisenberg originally referred to this as an Ungenauigkeit.

(2) Our measurement might disturb the system and thereby destroy information, introducing an uncertainty in our knowledge, but not in reality. Heisenberg originally referred to such uncertainty as an Unbestimmtheit.

(3) One may also think the uncertainty is inherent to Nature: that is what Heisenberg referred to as Ungewissheit. There is nothing in Nature – and nothing in Heisenberg’s writings, really – that warrants the elevation of this Ungewissheit to a dogma of modern physics. Why? Because it is the equivalent of a religious conviction, like the thesis that God exists or that He doesn’t (neither can be proved: Ryle labeled such hypotheses ‘category mistakes’).

Indeed, when one reads the proceedings of the Solvay Conferences of the late 1920s, the 1930s and the years immediately after WW II (see my summary in https://www.researchgate.net/publication/341177799_A_brief_history_of_quantum-mechanical_ideas), it is pretty clear that few of the first-generation quantum physicists believed in such dogma – and that those who entertained it thought what I am writing here: that it should be part not of science but of one’s personal religious beliefs.

So, once again, I repeat that this concept of entanglement – for which John Clauser got a Nobel Prize last year – is in the same category: it is just a fancy word for the simultaneous conservation of energy, linear and angular momentum, and charge. There is no ‘spooky action at a distance’, as Einstein derogatorily described it when the idea was first mentioned to him.

Let me end by noting the dishonor of Nobel Prize winner John Clauser once again. Climate change is real: we are right in the middle of it, and it is going to get a lot worse before it gets any better – if it is ever going to get better (which, in my opinion, is a rather big ‘if’…). So, no matter how many Nobel Prize winners deny it, they cannot change the fact that the average temperature on Earth has already risen by about 1.2 degrees Celsius since 1850. The question is not: is climate change happening? No. The question now is: how do we adapt to it – and that is an urgent question – and, then: can we, perhaps, slow down the trend, and how? In short, if these scientists – from physics, the medical field or whatever other field they excel in – are true and honest scientists, they would do mankind a great favor not by advocating geo-engineering schemes to reverse a trend they actually deny is there, but by helping to devise and promote practical measures that allow communities affected by natural disasters to recover from them.

So, I’ll conclude this rant by repeating what I think of all of this, loud and clear: John Clauser and the other scientific advisors of the CO2 Coalition are a disgrace to what goes under the name of ‘science’, and this umpteenth ‘incident’ in the history of science and logical thinking makes me think it is about time the Royal Swedish Academy of Sciences did some serious soul-searching when, amongst the many nominations, it selects its candidates for a prestigious award like this. Alfred Nobel – one of those geniuses who regretted that his great contribution to science and technology was (also) abused to increase the horrors of war – must have turned in his grave too many times by now… :-/

The End of Physics

I wrote a post with this title before, but this time I mean it in a rather personal way: my last paper on ResearchGate – with the same title – sums up rather well whatever I achieved, and also whatever I did not explore further because time and energy are lacking: I must pay more attention to my day job nowadays. 🙂

I am happy with the RG score all of my writing generated, with the rare but heartfelt compliments I got from researchers with far more credentials than myself (such as, for example, Dr. Emmanouil Markoulakis – an exchange that led me to put a paper on RG with a classical explanation of the Lamb shift), with various friendly but not always agreeing commentators (one of them commenting here on this post: a good man!), and, yes, with the interaction on my YouTube channel. But so… Well… That is it, then! 🙂

As a farewell, I will just quote from the mentioned paper – The End of Physics (as a science only, of course) – hereunder, and I hope it will help you to do what all great scientists would want you to do: think things through for yourself. 🙂

Brussels, 22 July 2023

Bohr, Heisenberg, and other famous quantum physicists – think of Richard Feynman, John Stewart Bell, Murray Gell-Mann, and quite a few other Nobel Prize-winning theorists[1] – have led us astray. They swapped a rational world view – based on classical electromagnetic theory and statistical determinism – for a mystery world in which anything is possible, but nothing is real.

They invented ‘spooky action at a distance’ (as Einstein derogatorily referred to it), for example. So what actually explains that long-distance interaction, then? It is quite simple. There is no interaction, and so there is nothing spooky or imaginary or unreal about it: if, by measuring the spin state of one photon, we also know the spin state of its twin far away, it is – quite simply – because physical quantities such as energy and momentum (linear or angular) are conserved when nothing else interferes with the two matter- or light-particles after they were separated.

Plain conservation laws explain many other things that are described as ‘plain mysteries’ in quantum physics. The truth is this: there are no miracles or mysteries; everything has a physical cause and can be explained.[2] For example, there is nothing mysterious about the interference pattern and the trajectory of an electron going through a slit, or through one of two nearby slits: an electron is pointlike, but it is not infinitesimally small – it has an internal structure which explains its wave-like properties. Likewise, one-photon Mach-Zehnder interference can easily be explained by thinking of the photon’s polarization structure: a circularly polarized photon can be split into two linearly polarized electromagnetic waves, which are photons in their own right. Everything you have been reading about mainstream quantum physics is, perhaps, not wrong, but it is highly misleading because it is all couched in guru language and mathematical gibberish.

Why is it that mainstream physicists keep covering this up? I am not sure: it is a strange mix of historical accident and, most probably, the human desire to be original or special, or the need to mobilize money for so-called fundamental research. I also suspect there is a rather deceitful intention to hide truths about what nuclear science should be all about, which is to understand the enormous energies packed into elementary particles.[3]

The worst of all is that none of the explanations in mainstream quantum physics actually works: mainstream theory does not have a sound theory of signal propagation, for example (click the link to my paper on that or – better, perhaps – this link to our paper on signal propagation), and Schrödinger’s hydrogen model is a model of a hypothetical atom, modelling orbitals of equally hypothetical zero-spin electron pairs. Zero-spin electrons do not exist, and real-life hydrogen has only one proton at its center, with one electron orbiting around it. Schrödinger’s equation is relativistically correct – even if most mainstream physicists think it is not – but the equation includes two mistakes that cancel each other out: it confuses the effective mass of an electron in motion with its total mass[4], and the 1/2 factor introduced by the m = 2meff substitution also takes care of the doubling of the potential that is needed to make the electron orbitals come out right.
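The bookkeeping behind that claim fits in two lines. The following sketch simply transcribes the substitution as described above, without deriving or endorsing it:

```latex
% Textbook (non-relativistic) Schrödinger equation, mass m:
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\,\nabla^2\psi + V\psi
% Reading the mass as an effective mass m_eff = m/2 doubles the
% kinetic coefficient:
-\frac{\hbar^2}{2\,m_{\mathrm{eff}}}\,\nabla^2\psi
  = -\frac{\hbar^2}{m}\,\nabla^2\psi
% The claim is that this factor of two is offset by a doubled
% potential term, leaving the orbital energies unchanged.
```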

Worse still, mainstream quantum physicists never accurately modeled what they should have modeled: the hydrogen atom as a system of a real proton and a real electron (not of hypothetical, infinitesimally small and structureless spin-zero particles). If they had done that, they would also be able to explain why hydrogen atoms come in molecular H2 pairs, and they would have a better theory of why two protons need a neutron to hold together in a helium nucleus. Moreover, they would have been able to explain what a neutron actually is.[5]


[1] John Stewart Bell was nominated for a Nobel Prize, but died of a brain hemorrhage before he could receive the prize for his theorem.

[2] The world of physics – at the micro-scale – is already fascinating enough: why should we invent mysteries?

[3] We do not think these energies can be exploited any time soon. Even nuclear energy is just binding energy between protons and neutrons: a nuclear bomb does not release the energy that is packed into protons. These elementary particles survive the blast: they are the true ‘atoms’ of this world (in the Greek sense of ‘a-tom’, which means indivisible).

[4] Mass is a measure of the inertia to a change in the state of motion of an oscillating charge. We showed how this works by explaining Einstein’s mass-energy equivalence relation and by clearly distinguishing the kinetic and potential energy of an electron. Feynman first models an electron in motion correctly, with an equally correct interpretation of the effective mass of an electron in motion, but then substitutes this effective mass with half the electron mass (meff = m/2) in an erroneous reasoning process based on the non-relativistic kinetic energy concept. The latter reasoning also leads to the widespread misconception that Schrödinger’s equation would not be relativistically correct (see the Annexes to my paper on the matter-wave). For the job it has to do, Schrödinger’s wave equation is correct – and by that I mean relativistically correct, too. 🙂

[5] A neutron is unstable outside of its nucleus. We, therefore, think it acts as the glue between protons, and it must be a composite particle.

On the quantum computing hype

1. The Wikipedia article on quantum computing describes a quantum computer as “a computer that exploits quantum-mechanical phenomena.” The rest of the article then tries to explain what these quantum-mechanical phenomena actually are.

Unfortunately, the article limits itself to the mainstream interpretation of these and, therefore, suffers from what I perceive to be logical and philosophical errors. Indeed, in the realistic interpretation of quantum mechanics that I have been developing, system wavefunctions are only useful to model our own uncertainty about the system. I subscribe to Hendrik Antoon Lorentz’s judgment at the last Solvay Conference under his leadership: there is no need whatsoever to elevate indeterminism to a philosophical principle. Not in science in general, and not in quantum mechanics in particular. I, therefore, think quantum mechanics cannot offer a substantially new computing paradigm.

Of course, one may argue that, for specific problems, some kind of three- or more-valued logic – rather than the binary or Boolean true/false dichotomy on which most logic circuits are based – may come in handy. However, such logic has already been worked out, and can be accessed using appropriate programming languages. Python and the powerful mathematical tools that come with it (Pandas, NumPy and SciPy) work great with ternary logic using a {true, false, unknown} or a {-1, 0, +1} set of logical values rather than the standard {0, 1} Boolean set. The Wikipedia article on three-valued logic is worth a read and, despite the rather arcane nature of the topic, much better written than the above-mentioned article: have a look at how operators are defined on these three-valued sets in meaningful algebras or logical models, such as those of Kleene, Priest or Łukasiewicz.
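To make the point concrete, here is a minimal plain-Python sketch of Kleene’s strong three-valued logic (K3) using the {-1, 0, +1} encoding mentioned above. No Pandas or NumPy is needed; the function names (`k3_not` and so on) are mine, chosen for illustration:

```python
# Kleene's strong three-valued logic (K3), encoded as integers:
# -1 = false, 0 = unknown, +1 = true.
# In this encoding, AND is min, OR is max, and NOT is a sign flip.

FALSE, UNKNOWN, TRUE = -1, 0, 1

def k3_not(a):
    return -a

def k3_and(a, b):
    return min(a, b)

def k3_or(a, b):
    return max(a, b)

def k3_implies(a, b):
    # Kleene implication: a -> b is defined as (not a) or b
    return max(-a, b)

# Spot checks against the K3 truth tables:
assert k3_and(UNKNOWN, FALSE) == FALSE    # unknown AND false = false
assert k3_or(UNKNOWN, TRUE) == TRUE       # unknown OR true = true
assert k3_not(UNKNOWN) == UNKNOWN         # NOT unknown = unknown
```

Note that, unlike Boolean logic, K3 does not validate the law of the excluded middle: `k3_or(UNKNOWN, k3_not(UNKNOWN))` evaluates to `UNKNOWN`, not `TRUE`.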

2. One may, of course, argue that, even if there is probably no such thing as a new quantum computing model or logic, quantum technology may offer distinct advantages when it comes to storing data about this or that state or, one day, lead to devices with faster clock and/or bus speeds. That appears to be a pipe dream too:

  • To keep, say, an electron in this or that spin state, one must create and maintain an electromagnetic field – usually in a superconducting environment – which makes the actual mechanical devices used for quantum computing (qubits) look like the modern-day equivalent of Babbage’s analytical engine. In my not-so-humble view, such devices will never achieve the sheer material performance offered by current nanometer-scale MOSFETs.

  • As for bus or transmission speeds, quantum theory does not come with a new theory of charge propagation and, most importantly, is fundamentally flawed in its analysis of how signals actually propagate in, say, a lattice structure. I refer to one of my papers here (on electron propagation in a lattice), in which I deconstruct Feynman’s analysis of the concepts of the free and effective mass of an electron. Hence, for long-distance transmission of signals, optical fiber cannot be beaten. For short-distance transmission of signals (say, within an electrical circuit), I refer to the above-mentioned nanotechnology, which continues to revolutionize the chip industry.

Brussels, 4 July 2023

Epilogue: an Easter podcast

I have been thinking about my explanation of dark matter/energy, and I think it is sound. It solves the last asymmetry in my models, and explains all. So, after a hiatus of two years, I took the trouble to make a podcast on my YouTube channel once again. It talks about everything. Literally everything!

It makes me feel my quest for an understanding of matter and energy – in terms of classical concepts and measurements (as depicted below) – has ended. Perhaps I will write more, but that would only be to promote the material, which should promote itself if it is any good (and I think it is).

I should, by way of conclusion, say a few final words about Feynman’s 1963 Lectures now. When everything is said and done, it is my reading of them that triggered this blog about ten years ago. I would now recommend Volumes I and II (classical physics and electromagnetic theory) – if only because they give you all the math you need to understand all of physics – but not Volume III (the lectures on quantum mechanics). Those are outdated, and I do find Feynman guilty of promoting rather than explaining the hocus-pocus around all of the so-called mysteries in this special branch of physics.

Quantum mechanics is special, but I now conclude that it can all be explained in terms of classical concepts and quantities. So Gell-Mann’s criticism of Richard Feynman is, perhaps, correct: Mr. Feynman did make too many jokes – and it gets annoying, because he must have known that some of what he suggests does not make sense – even if I would not go as far as Gell-Mann, who said: “Feynman was only concerned about himself, his ego, and his own image!” :-/

So, I would recommend my own alternative series of ‘lectures’. Not only are they easier to read, but they also embody a different spirit of writing. Science is not about you: it is about thinking for oneself and deciding what is truthful and useful, and what is not. To conclude, let me quote Ludwig Boltzmann once more:

“Bring forward what is true.

Write it so that it is clear.

Defend it to your last breath.”

Ludwig Boltzmann (1844 – 1906)

Post scriptum: As for the ‘hocus-pocus’ in Feynman’s Lectures, we should, perhaps, point once again to some of our early papers on the flaws in his arguments. We effectively put our finger on the arbitrary wavefunction convention, the (false) boson-fermion dichotomy, the ‘time machine’ argument that is inherent to his explanation of the Hamiltonian, and so on. We published these things on Academia.edu before (also) putting our (later) papers on ResearchGate, so please check there for the full series. 🙂

Post scriptum (23 April 2023): Also check out this video, which was triggered by someone who thought my models amount to something like a modern aether theory – which is definitely not the case: https://www.youtube.com/watch?v=X38u2-nXoto. 🙂 I really think it is my last reflection on these topics. I need to focus on my day job, sports, family, etcetera again! 🙂

Onwards !

It has been ages since I last wrote something here. Regular work took over. I did make an effort, though, to synchronize and reorganize some stuff. And I am no longer shy about it: my stats on ResearchGate and academia.edu show that I am no longer a ‘crackpot theorist’. This is what I wrote about it on my LinkedIn account:

QUOTE

With a good work-life balance now, I picked up one of my hobbies again: research into quantum theories. For now, I only did a much-needed synchronization of papers on academia.edu and ResearchGate. When logging on to the former network (which I had not done for quite a while), I found many friendly messages. One of them was from a researcher on enzymes: “I have been studying about these particles for around four years. All of the basics. But wat are they exactly? This though inspired me… Thank u so much!” I smiled and relaxed when I read that, telling myself that all those sleepless nights I spent on this were not the waste of time and energy that most of my friends thought they would be. 🙂

Another message was even more inspiring. It was written by another ‘independent’ researcher, Nelda Evans. No further detail in her profile. From the stats, I could see that she had downloaded an older manuscript of mine (https://lnkd.in/ecRKJwxQ). This is what she wrote to me about it: “I spoke to Richard Feynman in person at the Hughes Research Lab in Malibu California in 1967 where the first pulsed laser was invented when some of the students from the UCLA Physics Dept. went to hear him. Afterward I went to talk to him and said “Dr. Feynman, I’ve learned that some unknown scientists were dissatisfied with probability as a final description of Quantum Mechanics, namely Planck, Einstein, Schrodinger, de Broglie, Bohm,…” When I finished my list he immediately said “And Feynman”. We talked about it a little, and he told me “I like what you pick on.”
My guess is that he might have told you something similar.”

That message touched me deeply, because I do feel – from reading his rather famous Lectures on Physics somewhat ‘between the lines’ – that Richard Feynman effectively knew it all, but that he, somehow, was not allowed to say clearly what it was all about. I wrote a few things about that rather strange historical bias in the interpretation of ‘uncertainty’ and other ‘metaphysical’ concepts that infiltrated the science of quantum mechanics in my last paper: https://lnkd.in/ewZBcfke.

So… Well… I am not a crackpot scientist anymore! 🙂 The bottom line is to always follow your instinct when trying to think clearly about a problem or an issue. We should do what Ludwig Boltzmann (1844-1906) told us to do: “Bring forward what is true. Write it so that it is clear. Defend it to your last breath.”

[…] The next ‘thing to do’ is to chat with ChatGPT about my rather straightforward theories. I want to see how ‘intelligent’ it is. I wonder where it will hit its limit in terms of ‘abstract thinking’. The models I worked on combine advanced geometrical thinking (building ‘realistic’ particle models requires imagining ‘rotations within rotations’, among other things) and formal math (e.g. quaternion algebra). ChatGPT is excellent at both, I was told, but can it combine the two intelligently? 🙂

UNQUOTE

On we go. When the going gets tough, the tough get going. 🙂 For those who want an easy ‘introduction’ to the work (at a K-12 level of understanding of mathematics), I wrote the first pages of what could become an entirely new K-12 physics textbook. Let us see. I do want to see some interest from a publisher first. 🙂