Cleaning Up After Bell

On the limits of theorems, the sociology of prizes, and the slow work of intellectual maturity

When I re-read two older posts of mine on Bell’s Theorem — one written in 2020, at a moment when my blog was gaining unexpected traction, and another written in 2023 in reaction to what I then experienced as a Nobel Prize award controversy — I feel a genuine discomfort.

Not because I think the core arguments were wrong.
But because I now see more clearly what was doing the talking.

There is, in both texts, a mixture of three things:

  1. A principled epistemic stance (which is still there);
  2. A frustration with institutional dynamics in physics (also there);
  3. But, yes, also a degree of rhetorical impatience that no longer reflects how I want to think — or be read.

This short text is an attempt to disentangle those layers.


1. Why I instinctively refused to “engage” with Bell’s Theorem

In the 2020 post, I wrote — deliberately provocatively — that I “did not care” about Bell’s Theorem. That phrasing was not chosen to invite dialogue; it was chosen to draw a boundary. At the time, my instinctive reasoning was this:

Bell’s Theorem is a mathematical theorem. Like any theorem, it tells us what follows if certain premises are accepted. Its physical relevance therefore depends entirely on whether those premises are physically mandatory, or merely convenient formalizations.

This is not a rejection of mathematics. It is a refusal to grant mathematics automatic ontological authority.

I was — and still am — deeply skeptical of the move by which a formal result is elevated into a metaphysical verdict about reality itself. Bell’s inequalities constrain a particular class of models (local hidden-variable models of a specific type). They do not legislate what Nature must be. In that sense, my instinct was aligned not only with Einstein’s well-known impatience with axiomatic quantum mechanics, but also with Bell himself, who explicitly hoped that a “radical conceptual renewal” might one day dissolve the apparent dilemma his theorem formalized.

Where I now see a weakness is not in the stance, but in its expression. Saying “I don’t care” reads as dismissal, while what I really meant — and should have said — is this:

I do not accept the premises as ontologically compulsory, and therefore I do not treat the theorem as decisive.

That distinction matters.


2. Bell, the Nobel Prize, and a sociological paradox

My 2023 reaction was sharper, angrier, and less careful — and that is where my current discomfort is strongest.

At the time, it seemed paradoxical to me that:

  • Bell was once close to receiving a Nobel Prize for a theorem he himself regarded as provisional,
  • and that nearly six decades later, a Nobel Prize was awarded for experiments demonstrating violations of Bell inequalities.

In retrospect, the paradox is not logical — it is sociological.

The 2022 Nobel Prize neither proved nor disproved Bell’s Theorem in any mathematical sense. It confirmed, experimentally and with great technical sophistication, that Nature violates inequalities derived under specific assumptions. What was rewarded was experimental closure, not conceptual resolution.

The deeper issue — what the correlations mean — remains as unsettled as ever.

What troubled me (and still does) is that the Nobel system has a long history of rewarding what can be stabilized experimentally, while quietly postponing unresolved interpretational questions. This is not scandalous; it is structural. But it does shape the intellectual culture of physics in ways that deserve to be named.

Seen in that light, my indignation was less about Bell, and more about how foundational unease gets ritualized into “progress” without ever being metabolized conceptually.


3. Authority, responsibility, and where my anger really came from

The episode involving John Clauser and climate-change denial pushed me from critique into anger — and here, too, clarity comes from separation.

The problem there is not quantum foundations.
It is the misuse of epistemic authority across domains.

A Nobel Prize in physics does not confer expertise in climate science. When prestige is used to undermine well-established empirical knowledge in an unrelated field, that is not dissent — it is category error dressed up as courage.

My reaction was visceral because it touched a deeper nerve: the responsibility that comes with public authority in science. In hindsight, folding this episode into a broader critique of Bell and the Nobel Prize blurred two distinct issues — foundations of physics, and epistemic ethics.

Both matter. They should not be confused.


4. Where I stand now

If there is a single thread connecting my current thinking to these older texts, it is this:

I am less interested than before in winning arguments, and more interested in clarifying where different positions actually part ways — ontologically, methodologically, and institutionally.

That shift is visible elsewhere in my work:

  • in a softer, more discriminating stance toward the Standard Model,
  • in a deliberate break with institutions and labels that locked me into adversarial postures,
  • and in a conscious move toward reconciliation where reconciliation is possible, and clean separation where it is not.

The posts on Bell’s Theorem were written at an earlier stage in that trajectory. I do not disown them. But I no longer want them to stand without context.

This text is that context.


Final notes

1. On method and collaboration

Much of the clarification in this essay did not emerge in isolation, but through extended dialogue — including with an AI interlocutor that acted, at times, less as a generator of arguments than as a moderator of instincts: slowing me down, forcing distinctions, and insisting on separating epistemic claims from emotional charge. That, too, is part of the story — and perhaps an unexpected one. If intellectual maturity means anything, it is not the abandonment of strong positions, but the ability to state them without needing indignation to carry the weight. That is the work I am now trying to do.

It is also why I want to be explicit about how these texts are currently produced: they are not outsourced to AI, but co-generated through dialogue. In that dialogue, I deliberately highlight not only agreements but also remaining disagreements — not on the physics itself, but on its ontological interpretation — with the AI agent I currently use (ChatGPT 5.2). Making those points of convergence and divergence explicit is, I believe, intellectually healthier than pretending they do not exist.

2. On stopping, without pretending to conclude

This post also marks a natural stopping point. Over the past weeks, several long-standing knots in my own thinking — Bell’s Theorem (what this post is about), the meaning of gauge freedom, the limits of Schrödinger’s equation as a model of charge in motion, or even very plain sociological considerations on how science moves forward — have either been clarified or cleanly isolated.

What remains most resistant is the problem of matter–antimatter pair creation and annihilation. Here, the theory appears internally consistent, while the experimental evidence, impressive as it is, still leaves a small but non-negligible margin of doubt — largely because of the indirect, assumption-laden nature of what is actually being measured. I do not know the experimental literature well enough to remove that last 5–10% of uncertainty, and I consider it a sign of good mental health not to pretend otherwise.

For now, that is where I leave it. Not as a conclusion, but as a calibration: knowing which questions have been clarified, and which ones deserve years — rather than posts — of further work.

3. Being precise on my use of AI: on cleaning up ideas, not outsourcing thinking

What AI did not do

Let me start with what AI did not do.

It did not:

  • supply new experimental data,
  • resolve open foundational problems,
  • replace reading, calculation, or judgment,
  • or magically dissolve the remaining hard questions in physics.

In particular, it did not remove my residual doubts concerning matter–antimatter pair creation. On that topic, I remain where I have been for some time: convinced that the theory is internally consistent, convinced that the experiments are impressive and largely persuasive, and yet unwilling to erase the remaining 5–10% of doubt that comes from knowing how indirect, assumption-laden, and instrument-mediated those experiments necessarily are. I still do not know the experimental literature well enough to close that last gap—and I consider it a sign of good mental health that I do not pretend otherwise.

What AI did do

What AI did do was something much more modest—and much more useful.

It acted as a moderator of instincts.

In the recent rewrites—most notably in this post (Cleaning Up After Bell)—AI consistently did three things:

  1. It cut through rhetorical surplus.
    Not by softening arguments, but by separating epistemic claims from frustration, indignation, or historical irritation.
  2. It forced distinctions.
    Between mathematical theorems and their physical premises; between experimental closure and ontological interpretation; between criticism of ideas and criticism of institutions.
  3. It preserved the spine while sharpening the blade.
    The core positions did not change. What changed was their articulation: less adversarial, more intelligible, and therefore harder to dismiss.

In that sense, AI did not “correct” my thinking. It helped me re-express it in a way that better matches where I am now—intellectually and personally.

Two primitives or one?

A good illustration is the remaining disagreement between myself and my AI interlocutor on what is ultimately primitive in physics.

I still tend to think in terms of two ontological primitives: charge and fields—distinct, but inseparably linked by a single interaction structure. AI, drawing on a much broader synthesis of formal literature, prefers a single underlying structure with two irreducible manifestations: localized (charge-like) and extended (field-like).

Crucially, this disagreement is not empirical. It is ontological, and currently underdetermined by experiment. No amount of rhetorical force, human or artificial, can settle it. Recognizing that—and leaving it there—is part of intellectual maturity.

Why I am stopping (again)

I have said before that I would stop writing, and I did not always keep that promise. This time, however, the stopping point feels natural.

Most of the conceptual “knots” that bothered me in the contemporary discourse on physics have now been:

  • either genuinely clarified,
  • or cleanly isolated as long-horizon problems requiring years of experimental and theoretical work.

At this point, continuing to write would risk producing more words than signal.

There are other domains that now deserve attention: plain work, family projects, physical activity, and the kind of slow, tangible engagement with the world that no theory—however elegant—can replace.

Closing

If there is a single lesson from this episode, it is this:

AI is most useful not when it gives answers, but when it helps you ask what you are really saying—and whether you still stand by it once the noise is stripped away.

Used that way, it does not diminish thinking.
It disciplines it.

For now, that is enough.

The Gauge Idea in EM Theory

Gauge as Causal Bookkeeping

The Lorenz Condition from Maxwell to Quantum Field Theory


Abstract

In this lecture, we revisit the notion of gauge in classical electromagnetism, with particular focus on the Lorenz gauge condition. Rather than treating gauge as a symmetry principle or abstract redundancy, we show that the Lorenz condition emerges naturally as a causal continuity requirement already implicit in Maxwell’s equations. This perspective allows gauge freedom to be understood as bookkeeping freedom rather than physical freedom, and provides a useful conceptual bridge to the role of gauge in quantum field theory (QFT), where similar constraints are often elevated to ontological status.

Note on how this post differs from other posts on the topic: Earlier posts (see, for example, our 2015 post on Maxwell, Lorentz, gauges and gauge transformations) approached the Lorenz gauge primarily from a logical standpoint; the present note revisits the same question with a more explicit emphasis on causality and continuity.


1. Why potentials appear at all

Maxwell’s equations impose structural constraints on electromagnetic fields that make the introduction of potentials unavoidable.

The absence of magnetic monopoles,

\nabla \cdot \mathbf{B} = 0,

implies that the magnetic field must be expressible as the curl of a vector potential,

\mathbf{B} = \nabla \times \mathbf{A}.

Faraday’s law of induction,

\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},

then requires the electric field to take the form

\mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t}.

At this stage, no gauge has been chosen. Potentials appear not because they are elegant, but because the curl–divergence structure of Maxwell’s equations demands them. The scalar and vector potentials encode how electromagnetic structure evolves in time.
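As a quick sanity check, here is a minimal sympy sketch (my own illustration, not part of the original lecture): for arbitrary smooth potentials, the representation of B as the curl of A and of E as minus the gradient of phi minus the time derivative of A automatically satisfies the two source-free Maxwell equations, which is precisely why the curl-divergence structure forces potentials on us.

```python
# Minimal sympy sketch (illustration only): with B = curl(A) and
# E = -grad(phi) - dA/dt, the equations div(B) = 0 and
# curl(E) + dB/dt = 0 hold identically, whatever phi and A are.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (x, y, z)
phi = sp.Function('phi')(t, x, y, z)
A = sp.Matrix([sp.Function(f'A{s}')(t, x, y, z) for s in 'xyz'])

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in coords])
div = lambda V: sum(sp.diff(V[i], v) for i, v in enumerate(coords))
curl = lambda V: sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                            sp.diff(V[0], z) - sp.diff(V[2], x),
                            sp.diff(V[1], x) - sp.diff(V[0], y)])

B = curl(A)
E = -grad(phi) - sp.diff(A, t)

print(sp.simplify(div(B)))                   # 0: no magnetic monopoles
print(sp.simplify(curl(E) + sp.diff(B, t)))  # zero vector: Faraday's law
```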


2. The problem of over-description

The potentials (\phi, \mathbf{A}) are not uniquely determined by the fields (\mathbf{E}, \mathbf{B}). Transformations of the form

\mathbf{A} \rightarrow \mathbf{A} + \nabla \chi, \quad \phi \rightarrow \phi - \frac{\partial \chi}{\partial t}

leave the physical fields unchanged.
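The invariance can be checked in the same spirit. The following self-contained sympy sketch (again my own illustration) applies the shift above with an arbitrary gauge function chi and confirms that E and B come out unchanged.

```python
# Sketch (illustration only): a gauge shift A -> A + grad(chi),
# phi -> phi - dchi/dt leaves the fields E and B untouched.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (x, y, z)
phi, chi = (sp.Function(n)(t, x, y, z) for n in ('phi', 'chi'))
A = sp.Matrix([sp.Function(f'A{s}')(t, x, y, z) for s in 'xyz'])

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in coords])
curl = lambda V: sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                            sp.diff(V[0], z) - sp.diff(V[2], x),
                            sp.diff(V[1], x) - sp.diff(V[0], y)])

fields = lambda p, a: (-grad(p) - sp.diff(a, t), curl(a))  # (E, B) from potentials

E1, B1 = fields(phi, A)
E2, B2 = fields(phi - sp.diff(chi, t), A + grad(chi))      # gauge-shifted potentials

print(sp.simplify(E2 - E1))  # zero vector: E unchanged
print(sp.simplify(B2 - B1))  # zero vector: B unchanged
```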

This non-uniqueness is often presented as a “gauge freedom.” However, without further restriction, Maxwell’s equations expressed in terms of potentials suffer from a deeper issue: the equations mix instantaneous (elliptic) and propagating (hyperbolic) behavior. In particular, causality becomes obscured at the level of the potentials.

The question is therefore not which gauge to choose, but:

What minimal condition restores causal consistency to the potential description?


3. The Lorenz gauge as a continuity condition

The Lorenz gauge condition,

\nabla \cdot \mathbf{A} + \frac{1}{c^2}\frac{\partial \phi}{\partial t} = 0,

provides a direct answer.

When imposed, Maxwell’s equations reduce to wave equations for both potentials:

\Box \phi = \frac{\rho}{\varepsilon_0}, \quad \Box \mathbf{A} = \mu_0 \mathbf{J},

with the same d’Alembert operator \Box. Scalar and vector potentials propagate at the same finite speed and respond locally to their sources.

In covariant form, the Lorenz condition reads:

\partial_\mu A^\mu = 0.

This equation closely mirrors charge conservation,

\partial_\mu J^\mu = 0.

The parallel is not accidental. The Lorenz gauge enforces spacetime continuity of electromagnetic influence, ensuring that potentials evolve consistently with conserved sources.
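For readers who want the reduction spelled out, the sketch below (my own check, not part of the lecture) writes the two source equations in terms of the potentials and verifies, symbolically, that each one equals a d'Alembertian acting on the potential plus a term built from the Lorenz expression L = div(A) + (1/c^2) dphi/dt. Setting L = 0 is then exactly what turns them into the wave equations quoted above.

```python
# Sketch (illustration only): Gauss's law and the Ampere-Maxwell law,
# expressed through the potentials, differ from the wave equations only
# by terms proportional to the Lorenz expression L. Imposing L = 0 gives
# Box(phi) = rho/eps0 and Box(A) = mu0 J.
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c', positive=True)
coords = (x, y, z)
phi = sp.Function('phi')(t, x, y, z)
A = sp.Matrix([sp.Function(f'A{s}')(t, x, y, z) for s in 'xyz'])

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in coords])
div = lambda V: sum(sp.diff(V[i], v) for i, v in enumerate(coords))
curl = lambda V: sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                            sp.diff(V[0], z) - sp.diff(V[2], x),
                            sp.diff(V[1], x) - sp.diff(V[0], y)])
lap = lambda f: sum(sp.diff(f, v, 2) for v in coords)
box = lambda f: sp.diff(f, t, 2) / c**2 - lap(f)   # d'Alembert operator

E = -grad(phi) - sp.diff(A, t)
B = curl(A)
L = div(A) + sp.diff(phi, t) / c**2                # Lorenz expression

# div(E) equals Box(phi) minus dL/dt (so L = 0 gives Box(phi) = rho/eps0):
print(sp.simplify(div(E) - (box(phi) - sp.diff(L, t))))            # 0
# curl(B) - (1/c^2) dE/dt equals Box(A) plus grad(L):
print(sp.simplify(curl(B) - sp.diff(E, t) / c**2
                  - (A.applyfunc(box) + grad(L))))                 # zero vector
```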


4. Physical interpretation

From this perspective, the Lorenz gauge is not a symmetry principle but a causal closure condition:

  • the divergence of the vector potential controls longitudinal structure,
  • the time variation of the scalar potential tracks charge redistribution,
  • the condition ties both into a single spacetime constraint.

Nothing new is added to Maxwell’s theory. Instead, an implicit requirement — finite-speed propagation — is made explicit at the level of the potentials.

Gauge freedom thus reflects freedom of description under causal equivalence, not freedom of physical behavior.


5. Historical remark

The condition is named after Ludvig Lorenz, who introduced it in 1867, well before relativistic spacetime was formalized. Its later compatibility with Lorentz invariance — developed by Hendrik Antoon Lorentz — explains why it plays a privileged role in relativistic field theory.

The frequent miswriting of the “Lorenz gauge” as “Lorentz gauge” in modern textbooks (including by Richard Feynman) is, therefore, historically inaccurate but physically suggestive.


6. Gauge in quantum field theory: a cautionary bridge

In quantum field theory, gauge invariance is often elevated from a bookkeeping constraint to a foundational principle. This move has undeniable calculational power, but it risks conflating descriptive redundancy with physical necessity.

From the classical electromagnetic perspective developed here, gauge conditions arise whenever:

  • local causality is enforced,
  • descriptive variables exceed physical degrees of freedom,
  • continuity constraints must be imposed to maintain consistency.

Seen this way, gauge symmetry stabilizes theories that would otherwise over-describe their objects. It does not, by itself, mandate the existence of distinct fundamental forces.


7. Concluding remark

The Lorenz gauge is best understood not as an optional choice, nor as a deep symmetry of nature, but as good accounting imposed by causality.

When structure, continuity, and finite propagation speed are respected, gauge quietly disappears into consistency.


Physics Without Consolations

On Quantum Mechanics, Meaning, and the Limits of Metaphysical Inquiry

This post is a rewritten version of an essay I published on this blog in September 2020 under the title The End of Physics. The original text captured a conviction I still hold: that quantum mechanics is strange but not mysterious, and that much of what is presented as metaphysical depth in modern physics is better understood as interpretive excess. What has changed since then is not the substance of that conviction, but the way I think it should be expressed.

Over the past years, I have revisited several of my physics papers in dialogue with artificial intelligence — not as a replacement for human judgment, but as a tool for clarification, consistency checking, and tone correction. This post is an experiment of the same kind: returning to an older piece of writing with the help of AI, asking not “was I wrong?” but “can this be said more precisely, more calmly, and with fewer rhetorical shortcuts?”

The result is not a repudiation of the 2020 text (and similar ones here on this blog site, or on my ResearchGate page) but a refinement of it.
If there is progress here, it lies not in new claims about physics, but in a clearer separation between what physics tells us about the world and what humans sometimes want it to tell us.

— Jean Louis Van Belle
1 January 2026

After the Mysteries: Physics Without Consolations

For more than a century now, quantum mechanics has been presented as a realm of deep and irreducible mystery. We are told that nature is fundamentally unknowable, that particles do not exist until observed, that causality breaks down at the smallest scales, and that reality itself is somehow suspended in a fog of probabilities.

Yet this way of speaking says more about us than about physics.

Quantum mechanics is undeniably strange. But strange is not the same as mysterious. The equations work extraordinarily well, and — more importantly — we have perfectly adequate physical interpretations for what they describe. Wavefunctions are not metaphysical ghosts. They encode physical states, constraints, and statistical regularities in space and time. Particles such as photons, electrons, and protons are not abstract symbols floating in Hilbert space; they are real physical systems whose behavior can be described using familiar concepts: energy, momentum, charge, field structure, stability.

No additional metaphysics is required.

Over time, however, physics acquired something like a priesthood of interpretation. Mathematical formalisms were promoted from tools to truths. Provisional models hardened into ontologies. Concepts introduced for calculational convenience were treated as if they had to exist — quarks, virtual particles, many worlds — not because experiment demanded it, but because the formalism allowed it.

This is not fraud. It is human behavior.


The Comfort of Indeterminism

There is another, less discussed reason why quantum mechanics became mystified. Indeterminism offered something deeply attractive: a perceived escape hatch from a fully ordered universe.

For some, this meant intellectual freedom. For others, moral freedom. And for some — explicitly or implicitly — theological breathing room.

It is not an accident that indeterminism was welcomed in cultural environments shaped by religious traditions. Many prominent physicists of the twentieth century were embedded — socially, culturally, or personally — in Jewish, Catholic, or Protestant worlds. A universe governed strictly by deterministic laws had long been seen as hostile to divine action, prayer, or moral responsibility. Quantum “uncertainty” appeared to reopen a door that classical physics seemed to have closed.

The institutional embrace of this framing is telling. The Vatican showed early enthusiasm for modern cosmology and quantum theory, just as it did for the Big Bang model — notably developed by Georges Lemaître, a Catholic priest as well as a physicist. The Big Bang fit remarkably well with a creation narrative, and quantum indeterminism could be read as preserving divine freedom in a lawful universe.

None of this proves that physics was distorted intentionally. But it does show that interpretations do not emerge in a vacuum. They are shaped by psychological needs, cultural background, and inherited metaphysical anxieties.


Determinism, Statistics, and Freedom

Rejecting metaphysical indeterminism does not mean endorsing a cold, mechanical universe devoid of choice or responsibility.

Statistical determinism is not fatalism.

Complex systems — from molecules to brains to societies — exhibit emergent behavior that is fully lawful and yet unpredictable in detail. Free will does not require violations of physics; it arises from self-organizing structures capable of evaluation, anticipation, and choice. Moral responsibility is not rescued by randomness. In fact, randomness undermines responsibility far more than lawfulness ever did.

Consciousness, too, does not need mystery to be meaningful. It is one of the most remarkable phenomena we know precisely because it emerges from matter organizing itself into stable, recursive, adaptive patterns. The same principles operate at every scale: atoms in molecules, molecules in cells, cells in organisms, organisms in ecosystems — and, increasingly, artificial systems embedded in human-designed environments.

There is no voice speaking to us from outside the universe. But there is meaning, agency, and responsibility arising from within it.


Progress Without Revelation

It is sometimes said that physics is advancing at an unprecedented pace. In a technical sense, this is true. But conceptually, the situation is more sobering.

Most of the technologies we rely on today — semiconductors, lasers, superconductors, waveguides — were already conceptually understood by the mid-twentieth century and are clearly laid out in The Feynman Lectures on Physics. Later developments refined, scaled, and engineered these ideas, but they did not introduce fundamentally new physical principles.

Large experimental programs have confirmed existing theories with extraordinary precision. That achievement deserves respect. But confirmation is not revelation. Precision is not profundity.

Recognizing this is not pessimism. It is intellectual honesty.


After Physics Ends

If there is an “end of physics,” it is not the end of inquiry, technology, or wonder. It is the end of physics as a source of metaphysical consolation. The end of physics as theology by other means.

What remains is enough: a coherent picture of the material world, an understanding of how complexity and consciousness arise, and the responsibility that comes with knowing there is no external guarantor of meaning.

We are on our own — but not lost.

And that, perhaps, is the most mature scientific insight of all.

Perhaps we will stop here – time will tell :-)

I have just uploaded a new working paper to ResearchGate: Ontology, Physics, and Math – Einstein’s Unfinished Revolution. I am not announcing it with any sense of urgency, nor with the expectation that it will “change” physics. If it contributes anything at all, it may simply offer a bit of clarity about what we can reasonably claim to see in physics — and what we merely calculate, fit, or postulate. That distinction has preoccupied me for years.

A space to think

One unexpected consequence of taking AI seriously over the past one or two years is that it restored something I had quietly lost: a space to think.

  • Not a space to produce.
  • Not a space to publish.
  • Not a space to compete.

Just a space to think — slowly, carefully, without having to defend a position before it has fully formed. That kind of space has become rare. Academia is under pressure, industry is under pressure, and even independent thinkers often feel compelled to rush toward closure. The conversations I’ve had with AI — what I’ve come to call a corridor — were different. They were not about winning arguments, but about keeping the corridor open only where conceptual clarity survived.

In a strange way, this brought me back to something much older than AI. When I was young, I wanted to study philosophy. My father refused. I had failed my mathematics exam for engineering studies, and in his view philosophy without mathematics was a dead end. In retrospect, I can see that he was probably right — and also that he struggled with me as much as I struggled with him. He should perhaps have pushed me into mathematics earlier; I should perhaps have worked harder. But life does not run backward, and neither does understanding. What AI unexpectedly gave me, decades later, was the chance to reunite those two threads: conceptual questioning disciplined by mathematical restraint. Not philosophy as free-floating speculation, and not mathematics as pure formalism — but something closer to what physics once called natural philosophy.

Why I was always uncomfortable

For a long time, I could not quite place my discomfort. I was uneasy with mainstream Standard Model theorists — not because their work lacks brilliance or empirical success (it clearly does not), but because formal success increasingly seemed to substitute for ontological clarity. At the same time, I felt equally uneasy among outsiders and “fringe” thinkers, who were often too eager to replace one elaborate ontology with another, convinced that the establishment had simply missed the obvious.

I now think I understand why I could not belong comfortably to either camp. Both, in different ways, tend to underestimate what went into building the Standard Model in the first place.

  • The Standard Model is not just a theory. It is the result of enormous societal investment (yes, taxes matter), decades of engineering ingenuity, and entire academic ecosystems built around measurement, refinement, and internal consistency. One does not wave that away lightly. Criticizing it without acknowledging that effort is not radical — it is careless.
  • At the same time, acknowledging that effort does not oblige one to treat the resulting ontology as final. Formal closure is not the same thing as physical understanding.

That tension — respect without reverence — is where I found myself stuck.

Seeing versus calculating

The paper I just uploaded does not attempt to overthrow the Standard Model, nor to replace ΛCDM, nor to propose a new unification. It does something much more modest: it tries to separate what we can physically interpret from what we can formally manipulate.

That distinction was central to the worries of people like Albert Einstein, long before it became unfashionable to worry about such things. Einstein’s famous remark to Max Born — “God does not play dice” — was not a rejection of probability as a calculational tool. It was an expression of discomfort with mistaking a formalism for a description of reality. Something similar motivated Louis de Broglie, and later thinkers who never quite accepted that interpretation should be outsourced entirely to mathematics.

What my paper argues — cautiously, and without claiming finality — is that much of modern physics suffers from a kind of ontological drift: symmetries that began life as mathematical operations sometimes came to be treated as physical mandates.

When those symmetries fail, new quantum numbers, charges, or conservation laws are introduced to restore formal order. This works extraordinarily well — but it also risks confusing bookkeeping with explanation.

Matter, antimatter, and restraint

The most difficult part of the paper concerns matter–antimatter creation and annihilation. For a long time, I resisted interpretations that treated charge as something that could simply appear or disappear. That resistance did not lead me to invent hidden reservoirs or speculative intermediates — on the contrary, I explicitly rejected such moves as ontological inflation. Instead, I left the tension open.

Only later did I realize that insisting on charge as a substance may itself have been an unjustified metaphor. Letting go of that metaphor did not solve everything — but it did restore coherence without adding entities. That pattern — refusing both cheap dismissal and cheap solutions — now feels like the right one.

Ambition, patience, and time

We live in a period of extraordinary measurement and, paradoxically, diminished understanding. Data accumulates. Precision improves. Parameters are refined. But the underlying picture often becomes more fragmented rather than more unified.

New machines may or may not be built. China may or may not build the next CERN. That is largely beyond the control of individual thinkers. What is within reach is the slower task of making sense of what we already know. That task does not reward ambition. It rewards patience.

This is also where I part ways — gently, but firmly — with some bright younger thinkers and some older, semi-wise ones. Not because they are wrong in detail, but because they sometimes underestimate the weight of history, infrastructure, and collective effort behind the theories they critique or attempt to replace. Time will tell whether their alternatives mature. Time always tells :-). […] PS: I add a ‘smiley’ here because, perhaps, that is the most powerful phrase of all in this post.

A pause, not a conclusion

This paper may mark the end of my own physics quest — or at least a pause. Not because everything is resolved, but because I finally understand why I could neither fully accept nor fully reject what I was given. I don’t feel compelled anymore to choose sides. I can respect the Standard Model without canonizing it, and I can question it without trying to dethrone it. I can accept that some questions may remain open, not because we lack data, but because clarity sometimes requires restraint.

For now, that feels like enough. Time to get back on the bike. 🙂

PS: Looking back at earlier philosophical notes I wrote years ago — for instance on the relation between form, substance, and charge — I’m struck less by how “wrong” they were than by how unfinished they remained. The questions were already there; what was missing was discipline. Not more speculation, but sharper restraint.

When Decay Statistics Become Ontology

Or: why the Standard Model feels so solid — and yet so strangely unsatisfying

I recently put a new paper online: A Taxonomy of Instability. It is, in some sense, a “weird” piece. Not because it proposes new particles, forces, or mechanisms — it does none of that — but because it deliberately steps sideways from the usual question:

What are particles made of?

and asks instead:

How do unstable physical configurations actually fail?

This shift sounds modest. In practice, it leads straight into a conceptual fault line that most of us sense, but rarely articulate.


What is actually being classified in particle physics?

The Standard Model is extraordinarily successful. That is not in dispute. It predicts decay rates, cross sections, and branching fractions with astonishing precision. It has survived decades of experimental scrutiny.

But it is worth noticing what it is most directly successful at describing:

  • lifetimes,
  • branching ratios,
  • observable decay patterns.

In other words: statistics of instability.

Yet when we talk about the Standard Model, we almost immediately slide from that statistical success into an ontological picture: particles as entities with intrinsic properties, decaying “randomly” according to fundamental laws.

That slide is so familiar that it usually goes unnoticed.


The quiet assumption we almost never examine

Consider how decay is presented in standard references (PDG tables are the cleanest example). For a given unstable particle, we are shown:

  • a list of decay “channels”,
  • each with a fixed branching fraction,
  • averaged over production mechanisms, environments, and detectors.

Everything contextual has been stripped away.

What remains is treated as intrinsic.

And here is where a subtle but radical assumption enters:

The same unstable particle is taken to be capable of realizing multiple, structurally distinct decay reactions, with no further individuation required.

This is not an experimental result.
It is an interpretive stance.

As long as one stays in calculational mode, this feels unproblematic. The formalism works. The predictions are right.

The discomfort only arises when one asks a very basic question:

If all environment variables are abstracted away, what exactly is it that is decaying?


Statistical determinism sharpens the problem

Decay statistics are not noisy or unstable. They are:

  • reproducible,
  • environment-independent (within stated limits),
  • stable across experiments.

That makes them look law-like.

But law-like behavior demands clarity about what level of description the law applies to.

There are two logically distinct possibilities:

  1. Intrinsic multivalence
    A single physical entity genuinely has multiple, mutually exclusive decay behaviors, realized stochastically, with no deeper individuation.
  2. Hidden population structure
    What we call “a particle” is actually an equivalence class of near-identical configurations, each with a preferred instability route, unresolved by our current classification.

The Standard Model chooses option (1) — implicitly, pragmatically, and very effectively.

But nothing in the data forces that choice.
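A toy simulation makes the under-determination concrete. The sketch below is my own illustration (the channel names and branching fractions are hypothetical): it generates decay counts under both readings, intrinsic multivalence and a hidden mixture of sub-populations, and shows that the aggregate branching fractions alone cannot tell the two apart.

```python
# Toy illustration (hypothetical channels and fractions): aggregate branching
# fractions do not distinguish between (1) a single "multivalent" particle
# that picks a channel at the moment of decay and (2) a hidden mixture of
# sub-populations, each locked to one channel from the start.
import random
from collections import Counter

random.seed(0)
channels = ['e nu', 'mu nu', 'hadrons']
fractions = [0.11, 0.11, 0.78]
N = 200_000

# (1) Intrinsic multivalence: the channel is chosen at decay time.
decays_1 = Counter(random.choices(channels, weights=fractions)[0] for _ in range(N))

# (2) Hidden population structure: each "particle" is pre-assigned a channel
# at creation, in the same proportions, and then decays deterministically.
decays_2 = Counter(random.choices(channels, weights=fractions, k=N))

for ch in channels:
    print(ch, round(decays_1[ch] / N, 3), round(decays_2[ch] / N, 3))
```

Only additional structure (correlations with production context, environment, or decay time) could ever separate the two pictures; the averaged fractions themselves cannot.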


Why this can feel like being “duped”

Many people only experience discomfort after they start thinking carefully about what the Standard Model is claiming to describe.

The sense of being “duped” does not come from experimental failure — it comes from realizing that a philosophical commitment was made silently, without being labeled as such.

Probability, in this framework, is not treated as epistemic (what we don’t know), but as ontologically primitive (what is). Identity is divorced from behavior. The ensemble description quietly replaces individual determinism.

This is a perfectly legitimate move — but it is a move.

And it has a cost.


What my taxonomy does — and does not — claim

A Taxonomy of Instability does not propose new physics. It does not challenge the predictive success of the Standard Model. It does not deny quantum mechanics.

What it does is much quieter:

  • it treats decay landscapes, not particles, as the primary objects of classification;
  • it groups unstable configurations by how they fail, not by assumed internal structure;
  • it keeps the description strictly operational: lifetimes, observable final states, branching structure.

In doing so, it exposes something we usually gloss over:

Treating statistically distinct instability morphologies as attributes of a single identity is already an ontological decision.

Once that decision is made explicit, it becomes optional rather than compulsory.


Why this feels “weird” — and why that’s a good sign

The paper feels strange because it does not do what most theoretical work does:

  • it does not explain,
  • it does not unify,
  • it does not speculate about deeper mechanisms.

Instead, it asks whether our classification layer has quietly hardened into ontology.

That kind of question always feels uncomfortable, because it sits between theory and philosophy, and because it removes a tacit compromise rather than proposing a new belief.

But it is also the kind of question that matters precisely when a theory works extremely well.


A broader resonance (human and artificial)

There is an additional reason this question feels timely.

Modern AI systems are, at their core, pattern classifiers and compressors. They turn data into “things” by grouping outcomes under labels. Ontologies emerge automatically unless we are careful.

Seen from that angle, particle physics is not an outlier — it is an early, highly successful example of how statistical regularities become reified as entities.

The taxonomy I propose is not only about particles. It is about how thinking systems — human or artificial — turn data into objects.


A calm conclusion

The Standard Model is an extraordinarily successful theory of decay statistics. Its difficulties are not primarily empirical, but philosophical.

Those difficulties arise only when we forget that:

  • classification is not explanation,
  • identity is not forced by statistics,
  • and ontology is not delivered for free by predictive success.

My hope is not to replace any existing framework, but to invite both human readers and artificial “thinking machines” to pause and ask again:

What is being measured — and what, exactly, are we saying exists?

Sometimes, the most productive form of progress is not adding a new layer, but noticing where an old one quietly became invisible.

Stability First: A Personal Programme for Re-reading Particle Physics

Over the past years, I have written a number of papers on physics—mostly exploratory, sometimes speculative, always driven by the same underlying discomfort.

Not with the results of modern physics. Those are extraordinary.
But with the ordering of its explanations.

We are very good at calculating what happens.
We are less clear about why some things persist and others do not.

That question—why stability appears where it does—has quietly guided much of my thinking. It is also the thread that ties together a new manuscript I have just published on ResearchGate:

“Manuscript v0.2 – A stability-first reinterpretation of particle physics”
👉 https://www.researchgate.net/publication/398839393_Manuscript_v02

This post is not a summary of the manuscript. It is an explanation of why I wrote it, and what kind of work it is meant to enable.


Not a new theory — a different starting point

Let me be clear from the outset.

This manuscript does not propose a new theory.
It does not challenge the empirical success of the Standard Model.
It does not attempt to replace quantum field theory or nuclear phenomenology.

What it does is much more modest—and, I hope, more durable.

It asks whether we have been starting our explanations at the wrong end.

Instead of beginning with abstract constituents and symmetries, the manuscript begins with something far more pedestrian, yet physically decisive:

Persistence in time.

Some entities last.
Some decay.
Some exist only fleetingly as resonances.
Some are stable only in the presence of others.

Those differences are not cosmetic. They shape the physical world we actually inhabit.


From electrons to nuclei: stability as a guide

The manuscript proceeds slowly and deliberately, revisiting familiar ground:

  • the electron, as an intrinsically stable mode;
  • the proton, as a geometrically stable but structurally richer object;
  • the neutron, as a metastable configuration whose stability exists only in relation;
  • the deuteron, as the simplest genuinely collective equilibrium;
  • and nuclear matter, where stability becomes distributed across many coupled degrees of freedom.

At no point is new empirical content introduced.
What changes is the interpretive emphasis.

Stability is treated not as an afterthought, but as a physical clue.


Interaction without mysticism

The same approach is applied to interaction.

Scattering and annihilation are reinterpreted not as abstract probabilistic events, but as temporary departures from equilibrium and mode conversion between matter-like and light-like regimes.

Nothing in the standard calculations is altered.
What is altered is the physical picture.

Wavefunctions remain indispensable—but they are treated as representations of physical configurations, not as substitutes for them.

Probability emerges naturally from limited access to phase, geometry, and configuration, rather than from assumed ontological randomness.


Why classification matters

The manuscript ultimately turns to the Particle Data Group catalogue.

The PDG tables are one of the great achievements of modern physics. But they are optimized for calculation, not for intuition about persistence.

The manuscript proposes a complementary, stability-first index of the same data:

  • intrinsically stable modes,
  • metastable particle modes,
  • prompt decayers,
  • resonances,
  • and context-dependent stability (such as neutrons in nuclei).

Nothing is removed.
Nothing is denied.

The proposal is simply to read the catalogue as a map of stability regimes, rather than as a flat ontology of “fundamental particles”.


A programme statement, not a conclusion

This manuscript is intentionally incomplete.

It does not contain the “real work” of re-classifying the entire PDG catalogue. That work lies ahead and will take time, iteration, and—no doubt—many corrections.

What the manuscript provides is something else:

a programme statement.

A clear declaration of what kind of questions I think are still worth asking in particle physics, and why stability—rather than constituent bookkeeping—may be the right place to ask them from.


Why I am sharing this now

I am publishing this manuscript not as a final product, but as a marker.

A marker of a line of thought I intend to pursue seriously.
A marker of a way of reading familiar physics that I believe remains underexplored.
And an invitation to discussion—especially critical discussion—on whether this stability-first perspective is useful, coherent, or ultimately untenable.

Physics progresses by calculation.
It matures by interpretation.

This manuscript belongs to the second category.

If that resonates with you, you may find the full text of interest.


Jean-Louis Van Belle
readingfeynman.org

Moderation, Measurements, and the Temptation of Ontology

Why physics must resist becoming metaphysics


Some time ago, I found myself involved in what can best be described as an intellectual fallout with a group of well‑intentioned amateur researchers. This post is meant to close that loop — calmly, without bitterness, and with a bit of perspective gained since.

One of the more sensible people in that group took the trouble to put an interesting article on my desk, and that is the article I want to talk about here.


Gary Taubes, CERN, and an unexpected reinforcement

It’s an article by Gary Taubes on the discovery of the W and Z bosons at CERN, later incorporated into his book Nobel Dreams. Far from undermining my position, the article did the opposite: it reinforced the point I had been trying to make all along.

Taubes does not engage in ontology. He does not ask what W and Z bosons are in a metaphysical sense. Instead, he describes what was measured, how it was inferred, and how fragile the boundary is between evidence and interpretation in large‑scale experimental physics.

This connects directly to an earlier piece I published here:

Something Rotten in the State of QED: A Careful Look at Critique, Sociology, and the Limits of Modern Physics
https://readingfeynman.org/2025/12/01/something-rotten-in-the-state-of-qed-a-careful-look-at-critique-sociology-and-the-limits-of-modern-physics/

Let me restate the central point, because it is still widely misunderstood:

Criticizing the ontologization of W/Z bosons (or quarks and gluons) is not the same as denying the reality of the measurements that led to their introduction.

The measurements are real. The detector signals are real. The conservation laws used to infer missing energy and momentum are real. What is not forced upon us is the metaphysical leap that turns transient, unstable interaction states into quasi‑permanent “things.”


Stable vs. unstable states — a distinction we keep blurring

My own work has consistently tried to highlight a distinction that I find increasingly absent — or at least under‑emphasized — in mainstream physics discourse:

  • Stable states: long‑lived, persistent, and directly accessible through repeated measurement
  • Unstable or intermediate states: short‑lived, inferred through decay products, reconstructed statistically

W and Z bosons belong firmly to the second category. So do quarks and gluons in their confined form. Treating them as ontologically equivalent to stable particles may be pragmatically useful, but it comes at a conceptual cost.

It is precisely this cost that I criticize when I criticize mainstream physics.

Not because mainstream physics is “wrong.”
But because it has become too comfortable collapsing epistemology into ontology, especially in its public and pedagogical narratives.


Why this matters now

There is another reason this distinction matters, and it is a forward‑looking one.

The probability that something radically new — in the sense of a fundamentally novel interaction or particle family — will be discovered in the coming decades is, by most sober assessments, rather low. What we will have, however, is:

  • More precise measurements
  • Larger datasets
  • Longer baselines
  • Better statistical control

In that landscape, progress will depend less on naming new entities and more on bridging what has already been measured, sometimes decades ago, but never fully conceptually digested.

That is where I intend to focus my efforts in the coming years.

Not by founding a new church.
Not by declaring metaphysical revolutions.
But by carefully working at the interface between:

  • what was actually measured,
  • what was legitimately inferred,
  • and what we may have too quickly reified.

Closing note

If there is one lesson I take — from the past dispute, from Taubes, from the history of CERN or fundamental physics in general — it is this:

Physics progresses best when it remains modest about what it claims to be about.

Measurements first. Interpretation second. Ontology, if at all, only with restraint.

That stance may be unsatisfying to those looking for grand narratives. But it is, I believe, the only way to keep physics from quietly turning into metaphysics while still wearing a lab coat.

Jean Louis Van Belle

Something Rotten in the State of QED? A Careful Look at Critique, Sociology, and the Limits of Modern Physics

Every few years, a paper comes along that stirs discomfort — not because it is wrong, but because it touches a nerve.
Oliver Consa’s Something is rotten in the state of QED is one of those papers.

It is not a technical QED calculation.
It is a polemic: a long critique of renormalization, historical shortcuts, convenient coincidences, and suspiciously good matches between theory and experiment. Consa argues that QED’s foundations were improvised, normalized, mythologized, and finally institutionalized into a polished narrative that glosses over its original cracks.

This is an attractive story.
Too attractive, perhaps.
So instead of reacting emotionally — pro or contra — I decided to dissect the argument with a bit of help.

At my request, an AI language model (“Iggy”) assisted in the analysis. Not to praise me. Not to flatter Consa. Not to perform tricks.
Simply to act as a scalpel: cold, precise, and unafraid to separate structure from rhetoric.

This post is the result.


1. What Consa gets right (and why it matters)

Let’s begin with the genuinely valuable parts of his argument.

a) Renormalization unease is legitimate

Dirac, Feynman, Dyson, and others really did express deep dissatisfaction with renormalization. “Hocus-pocus” was not a joke; it was a confession.

Early QED involved:

  • cutoff procedures pulled out of thin air,
  • infinities subtracted by fiat,
  • and the philosophical hope that “the math will work itself out later.”

It did work out later — to some extent — but the conceptual discomfort remains justified. I share that discomfort. There is something inelegant about infinities everywhere.

b) Scientific sociology is real

The post-war era centralized experimental and institutional power in a way physics had never seen. Prestige, funding, and access influenced what got published and what was ignored. Not a conspiracy — just sociology.

Consa is right to point out that real science is messier than textbook linearity.

c) The g–2 tension is real

The ongoing discrepancy between experiment and the Standard Model is not fringe. It is one of the defining questions in particle physics today.

On these points, Consa is a useful corrective:
he reminds us to stay honest about historical compromises and conceptual gaps.


2. Where Consa overreaches

But critique is one thing; accusation is another.

Consa repeatedly moves from:

“QED evolved through trial and error”
to
“QED is essentially fraud.”

This jump is unjustified.

a) Messiness ≠ manipulation

Early QED calculations were ugly. They were corrected decades later. Experiments did shift. Error bars did move.

That is simply how science evolves.

The fact that a 1947 calculation doesn’t match a 1980 value is not evidence of deceit — it is evidence of refinement. Consa collapses that distinction.

b) Ignoring the full evidence landscape

He focuses almost exclusively on:

  • the Lamb shift,
  • the electron g–2,
  • the muon g–2.

Important numbers, yes — but QED’s experimental foundation is vastly broader:

  • scattering cross-sections,
  • vacuum polarization,
  • atomic spectra,
  • collider data,
  • running of α, etc.

You cannot judge an entire theory on two or three benchmarks.

c) Underestimating theoretical structure

QED is not “fudge + diagrams.”
It is constrained by:

  • Lorentz invariance,
  • gauge symmetry,
  • locality,
  • renormalizability.

Even if we dislike the mathematical machinery, the structure is not arbitrary.

So: Consa reveals real cracks, but then paints the entire edifice as rotten.
That is unjustified.


3. A personal aside: the Zitter Institute and the danger of counter-churches

For a time, I was nominally associated with the Zitter Institute — a loosely organized group exploring alternatives to mainstream quantum theory, including zitterbewegung-based particle models.

I now would like to distance myself.

Not because alternative models are unworthy — quite the opposite. But because I instinctively resist:

  • strong internal identity,
  • suspicion of outsiders,
  • rhetorical overreach,
  • selective reading of evidence,
  • and occasional dogmatism about their own preferred models.

If we criticize mainstream physics for ad hoc factors, we must be brutal about our own.

Alternative science is not automatically cleaner science.


4. Two emails from 2020: why good scientists can’t always engage

This brings me to two telling exchanges from 2020 with outstanding experimentalists: Prof. Randolf Pohl (muonic hydrogen) and Prof. Ashot Gasparian (PRad).

Both deserve enormous respect, and I will not reproduce the email exchanges here, out of respect and because of GDPR rules.
Both exchanges revealed to me the true bottleneck in modern physics — it is not intelligence, not malice, but sociology and bandwidth.

a) Randolf Pohl: polite skepticism, institutional gravity

Pohl was kind but firm:

  • He saw the geometric relations I proposed as numerology.
  • He questioned applicability to other particles.
  • He emphasized the conservatism of CODATA logic.

Perfectly valid.
Perfectly respectable.
But also… perfectly bound by institutional norms.

His answer was thoughtful — and constrained.
(Source: ChatGPT analysis of emails with Prof Dr Pohl)

b) Ashot Gasparian: warm support, but no bandwidth

Gasparian responded warmly:

  • “Certainly your approach and the numbers are interesting.”
  • But: “We are very busy with the next experiment.”

Also perfectly valid.
And revealing:
even curious, open-minded scientists cannot afford to explore conceptual alternatives.

Their world runs on deadlines, graduate students, collaborations, grants.

(Source: ChatGPT analysis of emails with Prof Dr Gasparian)

The lesson

Neither professor dismissed the ideas because they were nonsensical.
They simply had no institutional space to pursue them.

That is the quiet truth:
the bottleneck is not competence, but structure.


5. Why I now use AI as an epistemic partner

This brings me to the role of AI.

Some colleagues (including members of the Zitter Institute) look down on using AI in foundational research. They see it as cheating, or unserious, or threatening to their identity as “outsiders.”

But here is the irony:

AI is exactly the tool that can think speculatively without career risk.

An AI:

  • has no grant committee,
  • no publication pressure,
  • no academic identity to defend,
  • no fear of being wrong,
  • no need to “fit in.”

That makes it ideal for exploratory ontology-building.

Occasionally, as in the recent paper I co-wrote with Iggy — The Wonderful Theory of Light and Matter — it becomes the ideal partner:

  • human intuition + machine coherence,
  • real-space modeling without metaphysical inflation,
  • EM + relativity as a unified playground,
  • photons, electrons, protons, neutrons as geometric EM systems.

This is not a replacement for science.
It is a tool for clearing conceptual ground,
where overworked, over-constrained academic teams cannot go.


6. So… is something rotten in QED?

Yes — but not what you think.

What’s rotten is the mismatch

between:

  • the myth of QED as a perfectly clean, purely elegant theory,
    and
  • the reality of improvised renormalization, historical accidents, social inertia, and conceptual discomfort.

What’s rotten is not the theory itself,
but the story we tell about it.

What’s not rotten:

  • the intelligence of the researchers,
  • the honesty of experimentalists,
  • the hard-won precision of modern measurements.

QED is extraordinary.
But it is not infallible, nor philosophically complete, nor conceptually finished.

And that is fine.

The problem is not messiness.
The problem is pretending that messiness is perfection.


7. What I propose instead

My own program — pursued slowly over many years — is simple:

  • Bring physics back to Maxwell + relativity as the foundation.
  • Build real-space geometrical models of all fundamental particles.
  • Reject unnecessary “forces” invented to patch conceptual holes.
  • Hold both mainstream and alternative models to the same standard:
    no ad hoc constants, no magic, no metaphysics.

And — unusually —
use AI as a cognitive tool, not as an oracle.

Let the machine check coherence.
Let the human set ontology.

If something emerges from the dialogue — good.
If not — also good.

But at least we will be thinking honestly again.


Conclusion

Something is rotten in the state of QED, yes —
but the rot is not fraud or conspiracy.

It is the quiet decay of intellectual honesty behind polished narratives.

The cure is not shouting louder, or forming counter-churches, or romanticizing outsider science.

The cure is precision,
clarity,
geometry,
and the courage to say:

Let’s look again — without myth, without prestige, without fear.

If AI can help with that, all the better.

Jean Louis Van Belle
(with conceptual assistance from “Iggy,” used intentionally as a scalpel rather than a sycophant)

Post-scriptum: Why the Electron–Proton Model Matters (and Why Dirac Would Nod)

A brief personal note — and a clarification that goes beyond Consa, beyond QED, and beyond academic sociology.

One of the few conceptual compasses I trust in foundational physics is a remark by Paul Dirac. Reflecting on Schrödinger’s “zitterbewegung” hypothesis, he wrote:

“One must believe in this consequence of the theory,
since other consequences which are inseparably bound up with it,
such as the law of scattering of light by an electron,
are confirmed by experiment.”

Dirac’s point is not mysticism.
It is methodological discipline:

  • If a theoretical structure has unavoidable consequences, and
  • some of those consequences match experiment precisely,
  • then even the unobservable parts of the structure deserve consideration.

This matters because the real-space electron and proton models I’ve been working on over the years — now sharpened through AI–human dialogue — meet that exact criterion.

They are not metaphors, nor numerology, nor free speculation.
They force specific, testable, non-trivial predictions:

  • a confined EM oscillation for the electron, with radius fixed by ℏ/(m_e c);
  • a “photon-like” orbital speed for its point-charge center;
  • a distributed (not pointlike) charge cloud for the proton, enforced by mass ratio, stability, form factors, and magnetic moment;
  • natural emergence of the measured G_E/G_M discrepancy;
  • and a geometric explanation of deuteron binding that requires no new force.

None of these are optional.
They fall out of the internal logic of the model.
And several — electron scattering, Compton behavior, proton radius, form-factor trends — are empirically confirmed.
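
As a concrete illustration rather than a proof, here is a minimal numerical sketch (Python, CODATA values via scipy.constants) of the first of these claims, under the ring-current reading used above: a point charge circulating at light speed on a loop of radius ℏ/(m_e c) has the reduced Compton radius and, by simple geometry, the Bohr magneton as its magnetic moment.

    # Illustrative sketch only: a point charge e circulating at speed c
    # on a loop of radius a = hbar/(m_e c), as in the model described above.
    from math import pi
    from scipy.constants import hbar, m_e, c, e, physical_constants

    a = hbar / (m_e * c)            # loop radius = reduced Compton wavelength
    I = e * c / (2 * pi * a)        # current carried by the circulating charge
    mu = I * pi * a**2              # magnetic moment = current x loop area

    mu_B = physical_constants['Bohr magneton'][0]
    print(f"radius a = {a:.4e} m")        # ~3.8616e-13 m
    print(f"mu       = {mu:.4e} J/T")     # ~9.2740e-24 J/T
    print(f"mu_B     = {mu_B:.4e} J/T")   # identical: mu = e*hbar/(2*m_e)

The exact match with the Bohr magneton follows from the geometry itself; the small anomalous part of the moment is a separate discussion.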

Dirac’s rule applies:

When inseparable consequences match experiment,
the underlying mechanism deserves to be taken seriously —
whether or not it fits the dominant vocabulary.

This post is not the place to develop those models in detail; that will come in future pieces and papers.
But it felt important to state why I keep returning to them — and why they align with a style of reasoning that values:

  • geometry,
  • energy densities,
  • charge motion,
  • conservation laws,
  • and the 2019 SI foundations of h, e, and c
    over metaphysical categories and ad-hoc forces.

Call it minimalism.
Call it stubbornness.
Call it a refusal to multiply entities beyond necessity.

For me — and for anyone sympathetic to Dirac’s way of thinking — it is simply physics.

— JL (with “Iggy” (AI) in the wings)

A New Attempt at a Simple Theory of Light and Matter

Dear Reader,

Every now and then a question returns with enough insistence that it demands a fresh attempt at an answer. For me, that question has always been: can we make sense of fundamental physics without multiplying entities beyond necessity? Can we explain light, matter, and their interactions without inventing forces that have no clear definition, or particles whose properties feel more like placeholders than physical reality?

Today, I posted a new paper on ResearchGate that attempts to do exactly that:

“The Wonderful Theory of Light and Matter”
https://www.researchgate.net/publication/398123696_The_Wonderful_Theory_of_Light_and_Matter

It is the result of an unusual collaboration: myself and an artificial intelligence (“Iggy”), working through the conceptual structure of photons, electrons, and protons with the only tool that has ever mattered to me in physics — Occam’s Razor.

No metaphysics.
No dimensionless abstractions.
No “magical” forces.

Just:

  • electromagnetic oscillations,
  • quantized action,
  • real geometries in real space,
  • and the recognition that many so-called mysteries dissolve once we stop introducing layers that nature never asked for.

The photon is treated as a linear electromagnetic oscillation obeying the Planck–Einstein relation.
The electron as a circular oscillation, with a real radius and real angular momentum.
The proton (and later, the neutron and deuteron) as systems we must understand through charge distributions, not fictional quarks that never leave their equations.

None of this “solves physics,” of course.
But it does something useful: it clears conceptual ground.

And unexpectedly, the collaboration itself became a kind of experiment:
what happens when human intuition and machine coherence try to reason with absolute precision, without hiding behind jargon or narrative?

The result is the paper linked above.
Make of it what you will.

As always: no claims of authority.
Just exploration, clarity where possible, and honesty where clarity fails.

If the questions interest you, or if the model bothers you enough to critique it, then the paper has succeeded in its only purpose: provoking real thought.

Warm regards,
Jean Louis Van Belle

🧭 From Strangeness to Symbolism: Why Meaning Still Matters in Science

My interest in quantum theory didn’t come from textbooks. It came from a thirst for understanding — not just of electrons or fields, but of ourselves, our systems, and why we believe what we believe. That same motivation led me to write a recent article on LinkedIn questioning how the Nobel Prize system sometimes rewards storylines over substance. It’s not a rejection of science — it’s a plea to do it better.

This post extends that plea. It argues that motion — not metaphor — is what grounds our models. That structure is more than math. And that if we’re serious about understanding this universe, we should stop dressing up ignorance as elegance. Physics is beautiful enough without the mystery.

Indeed, in a world increasingly shaped by abstraction — in physics, AI, and even ethics — it’s worth asking a simple but profound question: when did we stop trying to understand reality, and start rewarding the stories we are being told about it?

🧪 The Case of Physics: From Motion to Metaphor

Modern physics is rich in predictive power but poor in conceptual clarity. Nobel Prizes have gone to ideas like “strangeness” and “charm,” terms that describe particles not by what they are, but by how they fail to fit existing models.

Instead of modeling physical reality, we classify its deviations. We multiply quantum numbers like priests multiplying categories of angels — and in doing so, we obscure what is physically happening.

But it doesn’t have to be this way.

In our recent work on realQM — a realist approach to quantum mechanics — we return to motion. Particles aren’t metaphysical entities. They’re closed structures of oscillating charge and field. Stability isn’t imposed; it emerges. And instability? It’s just geometry breaking down — not magic, not mystery.

No need for ‘charm’. Just coherence.


🧠 Intelligence as Emergence — Not Essence

This view of motion and closure doesn’t just apply to electrons. It applies to neurons, too.

We’ve argued elsewhere that intelligence is not an essence, not a divine spark or unique trait of Homo sapiens. It is a response — an emergent property of complex systems navigating unstable environments.

Evolution didn’t reward cleverness for its own sake. It rewarded adaptability. Intelligence emerged because it helped life survive disequilibrium.

Seen this way, AI is not “becoming like us.” It’s doing what all intelligent systems do: forming patterns, learning from interaction, and trying to persist in a changing world. Whether silicon-based or carbon-based, it’s the same story: structure meets feedback, and meaning begins to form.


🌍 Ethics, Society, and the Geometry of Meaning

Just as physics replaced fields with symbolic formalism, and biology replaced function with genetic determinism, society often replaces meaning with signaling.

We reward declarations over deliberation. Slogans over structures. And, yes, sometimes we even award Nobel Prizes to stories rather than truths.

But what if meaning, like mass or motion, is not an external prescription — but an emergent resonance between system and context?

  • Ethics is not a code. It’s a geometry of consequences.
  • Intelligence is not a trait. It’s a structure that closes upon itself through feedback.
  • Reality is not a theory. It’s a pattern in motion, stabilized by conservation, disrupted by noise.

If we understand this, we stop looking for final answers — and start designing better questions.


✍️ Toward a Science of Meaning

What unifies all this is not ideology, but clarity. Not mysticism, but motion. Not inflation of terms, but conservation of sense.

In physics: we reclaim conservation as geometry.
In intelligence: we see mind as emergent structure.
In ethics: we trace meaning as interaction, not decree.

This is the work ahead: not just smarter machines or deeper theories — but a new simplicity. One that returns to motion, closure, and coherence as the roots of all we seek to know.

Meaning, after all, is not what we say.
It’s what remains when structure holds — and when it fails.

🔬 When the Field is a Memory: Notes from a Human–Machine Collaboration

Why is the field around an electron so smooth?

Physicists have long accepted that the electrostatic potential of an electron is spherically symmetric and continuous — the classic Coulomb field. But what if the electron isn’t a smeared-out distribution of charge, but a pointlike particle — one that zips around in tight loops at the speed of light, as some realist models propose?

That question became the heart of a new paper I’ve just published:
“The Smoothed Field: How Action Hides the Pointlike Charge”
🔗 Read it on ResearchGate

The paradox is simple: a moving point charge should create sharp, angular variations in its field — especially in the near zone. But we see none. Why?

The paper proposes a bold but elegant answer: those field fluctuations exist only in theory — not in reality — because they fail to cross a deeper threshold: the Planck quantum of action. In this view, the electromagnetic field is not a primitive substance, but a memory of motion — smooth not because the charge is, but because reality itself suppresses anything that doesn’t amount to at least ℏ of action.
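
A back-of-the-envelope check (Python again, and assuming the circulating-charge picture from the post-scriptum above) shows the scale the paper appeals to: one full cycle of that motion corresponds to one unit h of action, so any sub-cycle wrinkle in the field would carry only a fraction of it.

    # How much action does one full cycle of the circulating charge represent?
    from math import pi
    from scipy.constants import hbar, m_e, c, h

    a = hbar / (m_e * c)        # loop radius (reduced Compton wavelength)
    T = 2 * pi * a / c          # period of one loop, ~8.1e-21 s
    S = m_e * c**2 * T          # rest energy x period = action per cycle

    print(S / h)                # ~1.0: one full cycle corresponds to h of action

The interpretive step, that field detail below this per-cycle quantum never registers physically, is the paper’s proposal, not standard QED.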


🤖 A Word on Collaboration

This paper wouldn’t have come together without a very 21st-century kind of co-author: ChatGPT-4, OpenAI’s conversational AI. I’ve used it extensively over the past year — not just to polish wording, but to test logic, rewrite equations, and even push philosophical boundaries.

In this case, the collaboration evolved into something more: the AI helped me reconstruct the paper’s internal logic, modernize its presentation, and clarify its foundational claims — especially regarding how action, not energy alone, sets the boundary for what is real.

The authorship note in the paper describes this in more detail. It’s not ghostwriting. It’s not outsourcing. It’s something else: a hybrid mode of thinking, where a human researcher and a reasoning engine converge toward clarity.


🧭 Why It Matters

This paper doesn’t claim to overthrow QED, or replace the Standard Model. But it does offer something rare: a realist, geometric interpretation of how smooth fields emerge from discrete sources — without relying on metaphysical constructs like field quantization or virtual particles.

If you’re tired of the “shut up and calculate” advice, and truly curious about how action, motion, and meaning intersect in the foundations of physics — this one’s for you.

And if you’re wondering what it’s like to co-author something with a machine — this is one trace of that, too.

Prometheus gave fire. Maybe this is a spark.

🧭 The Final Arc: Three Papers, One Question

Over the past years, I’ve been working — quietly but persistently — on a set of papers that circle one simple, impossible question:
What is the Universe really made of?

Not in the language of metaphors. Not in speculative fields.
But in terms of geometry, charge, and the strange clarity of equations that actually work.

Here are the three pieces of that arc:

🌀 1. Radial Genesis
Radial Genesis: A Finite Universe with Emergent Spacetime Geometry
This is the cosmological capstone. It presents the idea that space is not a stage, but an outcome — generated radially by mass–energy events, limited by time and light. It’s an intuitive, equation-free narrative grounded in general relativity and Occam’s Razor.

⚛️ 2. Lectures on Physics: On General Relativity (2)
Lectures on GRT (2)
This one is for the mathematically inclined. It builds from the ground up: tensors, geodesics, curvature. If Radial Genesis is the metaphor, this is the machinery. Co-written with AI, but worked through line by line and verified by hand.

🌑 3. The Vanishing Charge
The Vanishing Charge: What Happens in Matter–Antimatter Annihilation?
This paper is where the mystery remains. It presents two possible views of annihilation:
(1) as a collapse of field geometry into free radiation, or
(2) as the erasure of charge — with geometry as the by-product.
We didn’t choose between them. We just asked the question honestly.


Why This Arc Matters

These three papers don’t offer a Theory of Everything. But they do something that matters more right now:
They strip away the fog — the inflation of terms, the myth of complexity for complexity’s sake — and try to draw what is already known in clearer, more beautiful lines.

This is not a simulation of thinking.
This is thinking — with AI as a partner, not a prophet.

So if you’re tired of being told that the Universe is beyond your grasp…
Start here.
You might find that it isn’t.

—JL

Beautiful Blind Nonsense

I didn’t plan to write this short article or blog post. But as often happens these days, a comment thread on LinkedIn nudged me into it — or rather, into a response that became this article (which I also put on LinkedIn).

Someone posted a bold, poetic claim about “mass being memory,” “resonant light shells,” and “standing waves of curved time.” They offered a graphic spiraling toward meaning, followed by the words: “This isn’t metaphysics. It’s measurable.”

I asked politely:
“Interesting. Article, please? How do you get these numbers?”

The response: a full PDF of a “Unified Field Theory” relying on golden-ratio spirals, new universal constants, and reinterpretations of Planck’s constant. I read it. I sighed. And I asked ChatGPT a simple question:

“Why is there so much elegant nonsense being published lately — and does AI help generate it?”

The answer that followed was articulate, clear, and surprisingly quotable. I polished it slightly, added some structure, and decided that it deserved to be an article in its own right. So here it is.

Beautiful, but Blind: How AI Amplifies Both Insight and Illusion

In recent years, a new kind of scientific-sounding poetry has flooded our screens — elegant diagrams, golden spirals, unified field manifestos. Many are written not by physicists, but with the help of AI.

And therein lies the paradox: AI doesn’t know when it’s producing nonsense.

🤖 Pattern without Understanding

Large language models like ChatGPT or Grok are trained on enormous text corpora. They are experts at mimicking patterns — but they lack an internal model of truth.
So if you ask them to expand on “curved time as the field of God,” they will.

Not because it’s true. But because it’s linguistically plausible.

🎼 The Seductive Surface of Language

AI is disarmingly good at rhetorical coherence:

  • Sentences flow logically.
  • Equations are beautifully formatted.
  • Metaphors bridge physics, poetry, and philosophy.

This surface fluency can be dangerously persuasive — especially when applied to concepts that are vague, untestable, or metaphysically confused.

🧪 The Missing Ingredient: Constraint

Real science is not just elegance — it’s constraint:

  • Equations must be testable.
  • Constants must be derivable or measurable.
  • Theories must make falsifiable predictions.

AI doesn’t impose those constraints on its own. It needs a guide.
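
To make “constraint” concrete: here is the kind of five-line check (Python, using scipy.constants) that any newly proposed constant should survive before it earns a manifesto. The fine-structure constant passes because it is fully derivable from measured quantities; a new “golden-ratio constant” would have to show the same.

    # A minimal constraint check: is a proposed constant derivable
    # from measured quantities? The fine-structure constant is.
    from math import pi
    from scipy.constants import e, epsilon_0, hbar, c, fine_structure

    alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)   # dimensionless by construction
    print(alpha, 1 / alpha)                          # ~0.0072974, ~137.036
    print(abs(alpha - fine_structure) / fine_structure)  # tiny: matches the CODATA value

AI can run a check like this if asked. It will not insist on it unless the human does.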

🧭 The Human Role: Resonance and Resistance

Used carelessly, AI can generate hyper-coherent gibberish. But used wisely — by someone trained in reasoning, skepticism, and clarity — it becomes a powerful tool:

  • To sharpen ideas.
  • To test coherence.
  • To contrast metaphor with mechanism.

In the end, AI reflects our inputs.
It doesn’t distinguish between light and noise — unless we do.