Cleaning Up After Bell

On the limits of theorems, the sociology of prizes, and the slow work of intellectual maturity

When I re-read two older posts of mine on Bell's Theorem – one written in 2020, at a moment when my blog was gaining unexpected traction, and another written in 2023 in reaction to what I then experienced as a Nobel Prize award controversy – I feel a genuine discomfort.

Not because I think the core arguments were wrong.
But because I now see more clearly what was doing the talking.

There is, in both texts, a mixture of three things:

  1. A principled epistemic stance (which is still there);
  2. A frustration with institutional dynamics in physics (also there);
  3. But, yes, also a degree of rhetorical impatience that no longer reflects how I want to think – or be read.

This short text is an attempt to disentangle those layers.


1. Why I instinctively refused to "engage" with Bell's Theorem

In the 2020 post, I wrote – deliberately provocatively – that I "did not care" about Bell's Theorem. That phrasing was not chosen to invite dialogue; it was chosen to draw a boundary. At the time, my instinctive reasoning was this:

Bell's Theorem is a mathematical theorem. Like any theorem, it tells us what follows if certain premises are accepted. Its physical relevance therefore depends entirely on whether those premises are physically mandatory, or merely convenient formalizations.

This is not a rejection of mathematics. It is a refusal to grant mathematics automatic ontological authority.

I was – and still am – deeply skeptical of the move by which a formal result is elevated into a metaphysical verdict about reality itself. Bell's inequalities constrain a particular class of models (local hidden-variable models of a specific type). They do not legislate what Nature must be. In that sense, my instinct was aligned not only with Einstein's well-known impatience with axiomatic quantum mechanics, but also with Bell himself, who explicitly hoped that a "radical conceptual renewal" might one day dissolve the apparent dilemma his theorem formalized.
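For the record, and to make "premises" concrete, here is the standard CHSH form of the argument (nothing in it is new; I state it only so that the discussion does not float free of the mathematics):

```latex
% Local hidden-variable premises: outcomes A(a,\lambda), B(b,\lambda) = \pm 1 depend only on the
% local setting and on a shared variable \lambda with distribution \rho(\lambda) (measurement independence).
E(a,b) = \int A(a,\lambda)\, B(b,\lambda)\, \rho(\lambda)\, d\lambda
\quad\Longrightarrow\quad
\bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr| \le 2 .
% Quantum mechanics, for the singlet state and suitable settings, predicts 2\sqrt{2} instead.
```

Everything the theorem "decides" is packed into the assumptions on the first line; that is the whole point of the stance I am trying to restate here.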

Where I now see a weakness is not in the stance, but in its expression. Saying "I don't care" reads as dismissal, while what I really meant – and should have said – is this:

I do not accept the premises as ontologically compulsory, and therefore I do not treat the theorem as decisive.

That distinction matters.


2. Bell, the Nobel Prize, and a sociological paradox

My 2023 reaction was sharper, angrier, and less careful – and that is where my current discomfort is strongest.

At the time, it seemed paradoxical to me that:

  • Bell was once close to receiving a Nobel Prize for a theorem he himself regarded as provisional,
  • and that nearly six decades later, a Nobel Prize was awarded for experiments demonstrating violations of Bell inequalities.

In retrospect, the paradox is not logical – it is sociological.

The 2022 Nobel Prize did not "disprove Bell's Theorem" in a mathematical sense. It confirmed, experimentally and with great technical sophistication, that Nature violates inequalities derived under specific assumptions. What was rewarded was experimental closure, not conceptual resolution.

The deeper issue – what the correlations mean – remains as unsettled as ever.

What troubled me (and still does) is that the Nobel system has a long history of rewarding what can be stabilized experimentally, while quietly postponing unresolved interpretational questions. This is not scandalous; it is structural. But it does shape the intellectual culture of physics in ways that deserve to be named.

Seen in that light, my indignation was less about Bell, and more about how foundational unease gets ritualized into "progress" without ever being metabolized conceptually.


3. Authority, responsibility, and where my anger really came from

The episode involving John Clauser and climate-change denial pushed me from critique into anger – and here, too, clarity comes from separation.

The problem there is not quantum foundations.
It is the misuse of epistemic authority across domains.

A Nobel Prize in physics does not confer expertise in climate science. When prestige is used to undermine well-established empirical knowledge in an unrelated field, that is not dissent – it is a category error dressed up as courage.

My reaction was visceral because it touched a deeper nerve: the responsibility that comes with public authority in science. In hindsight, folding this episode into a broader critique of Bell and the Nobel Prize blurred two distinct issues – foundations of physics, and epistemic ethics.

Both matter. They should not be confused.


4. Where I stand now

If there is a single thread connecting my current thinking to these older texts, it is this:

I am less interested than before in winning arguments, and more interested in clarifying where different positions actually part ways – ontologically, methodologically, and institutionally.

That shift is visible elsewhere in my work:

  • in a softer, more discriminating stance toward the Standard Model,
  • in a deliberate break with institutions and labels that locked me into adversarial postures,
  • and in a conscious move toward reconciliation where reconciliation is possible, and clean separation where it is not.

The posts on Bell's Theorem were written at an earlier stage in that trajectory. I do not disown them. But I no longer want them to stand without context.

This text is that context.


Final notes

1. On method and collaboration

Much of the clarification in this essay did not emerge in isolation, but through extended dialogue – including with an AI interlocutor that acted, at times, less as a generator of arguments than as a moderator of instincts: slowing me down, forcing distinctions, and insisting on separating epistemic claims from emotional charge. That, too, is part of the story – and perhaps an unexpected one. If intellectual maturity means anything, it is not the abandonment of strong positions, but the ability to state them without needing indignation to carry the weight. That is the work I am now trying to do.

It is also why I want to be explicit about how these texts are currently produced: they are not outsourced to AI, but co-generated through dialogue. In that dialogue, I deliberately highlight not only agreements but also remaining disagreements – not on the physics itself, but on its ontological interpretation – with the AI agent I currently use (ChatGPT 5.2). Making those points of convergence and divergence explicit is, I believe, intellectually healthier than pretending they do not exist.

2. On stopping, without pretending to conclude

This post also marks a natural stopping point. Over the past weeks, several long-standing knots in my own thinking – Bell's Theorem (what this post is about), the meaning of gauge freedom, the limits of Schrödinger's equation as a model of charge in motion, or even very plain sociological considerations on how science moves forward – have either been clarified or cleanly isolated.

What remains most resistant is the problem of matter–antimatter pair creation and annihilation. Here, the theory appears internally consistent, while the experimental evidence, impressive as it is, still leaves a small but non-negligible margin of doubt – largely because of the indirect, assumption-laden nature of what is actually being measured. I do not know the experimental literature well enough to remove that last 5–10% of uncertainty, and I consider it a sign of good mental health not to pretend otherwise.

For now, that is where I leave it. Not as a conclusion, but as a calibration: knowing which questions have been clarified, and which ones deserve years – rather than posts – of further work.

3. Being precise on my use of AI: on cleaning up ideas, not outsourcing thinking

What AI did not do

Let me start with what AI did not do.

It did not:

  • supply new experimental data,
  • resolve open foundational problems,
  • replace reading, calculation, or judgment,
  • or magically dissolve the remaining hard questions in physics.

In particular, it did not remove my residual doubts concerning matter–antimatter pair creation. On that topic, I remain where I have been for some time: convinced that the theory is internally consistent, convinced that the experiments are impressive and largely persuasive, and yet unwilling to erase the remaining 5–10% of doubt that comes from knowing how indirect, assumption-laden, and instrument-mediated those experiments necessarily are. I still do not know the experimental literature well enough to close that last gap – and I consider it a sign of good mental health that I do not pretend otherwise.

What AI did do

What AI did do was something much more modest – and much more useful.

It acted as a moderator of instincts.

In the recent rewrites – most notably in this post (Cleaning Up After Bell) – AI consistently did three things:

  1. It cut through rhetorical surplus.
    Not by softening arguments, but by separating epistemic claims from frustration, indignation, or historical irritation.
  2. It forced distinctions.
    Between mathematical theorems and their physical premises; between experimental closure and ontological interpretation; between criticism of ideas and criticism of institutions.
  3. It preserved the spine while sharpening the blade.
    The core positions did not change. What changed was their articulation: less adversarial, more intelligible, and therefore harder to dismiss.

In that sense, AI did not "correct" my thinking. It helped me re-express it in a way that better matches where I am now – intellectually and personally.

Two primitives or one?

A good illustration is the remaining disagreement between myself and my AI interlocutor on what is ultimately primitive in physics.

I still tend to think in terms of two ontological primitives: charge and fields – distinct, but inseparably linked by a single interaction structure. AI, drawing on a much broader synthesis of formal literature, prefers a single underlying structure with two irreducible manifestations: localized (charge-like) and extended (field-like).

Crucially, this disagreement is not empirical. It is ontological, and currently underdetermined by experiment. No amount of rhetorical force, human or artificial, can settle it. Recognizing that – and leaving it there – is part of intellectual maturity.

Why I am stopping (again)

I have said before that I would stop writing, and I did not always keep that promise. This time, however, the stopping point feels natural.

Most of the conceptual "knots" that bothered me in the contemporary discourse on physics have now been:

  • either genuinely clarified,
  • or cleanly isolated as long-horizon problems requiring years of experimental and theoretical work.

At this point, continuing to write would risk producing more words than signal.

There are other domains that now deserve attention: plain work, family projects, physical activity, and the kind of slow, tangible engagement with the world that no theory – however elegant – can replace.

Closing

If there is a single lesson from this episode, it is this:

AI is most useful not when it gives answers, but when it helps you ask what you are really saying – and whether you still stand by it once the noise is stripped away.

Used that way, it does not diminish thinking.
It disciplines it.

For now, that is enough.

We Could Have Stopped There Too

(But the Question About Annihilation Would Not Stay Quiet)

In a previous post, I wrote that we could stop here – after revisiting the photon wavefunction and trying to say, as carefully as possible, what such a wavefunction might represent in physical reality rather than merely in calculation. That paper already felt like a natural resting point: the mathematics was consistent, the interpretation restrained, and the temptation to add speculative layers had been resisted.

But, as often happens, the very act of stopping made the next question louder.

If one is willing to take wavefunctions seriously – not as mystical probability clouds but as structured representations of physical processes – then one cannot avoid revisiting an older and more uncomfortable puzzle: matter–antimatter pair creation and annihilation. In particular, the question that has bothered me for years refused to go away:

What, exactly, happens to electric charge in electron–positron annihilation?
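For orientation only: the global bookkeeping for the lowest-order channel (both particles essentially at rest, two photons out) is standard and not in dispute. The question above is about what, if anything, the local story is.

```latex
e^- + e^+ \;\to\; \gamma + \gamma, \qquad
Q:\; (-1) + (+1) = 0 \;\to\; 0 + 0, \qquad
E:\; 2\,m_e c^2 \approx 1.022\ \text{MeV} \;\to\; 2 \times 511\ \text{keV (back to back)} .
```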

In January 2025, I wrote a paper on this topic together with ChatGPT-4.0. That version deliberately stopped short of resolution. It explored wavefunctional representations, respected global conservation laws, and openly admitted that familiar intuitions about charge seemed to fail locally. I resisted easy exits: latent charge states, hidden reservoirs, or metaphysical bookkeeping devices introduced only to preserve comfort.

At the time, that felt honest enough.

What changed since then is not the question, but the discipline with which I was forced to re-examine my own assumptions.

Over the past months, continued work with a more advanced AI system (ChatGPT-5.2), across many iterations and with partial memory of prior discussions, introduced a form of pressure that was unfamiliar but productive. The AI did not argue for a competing ontology. Instead, it kept doing something more unsettling: it repeatedly asked why certain assumptions were still being carried along at all.

In hindsight, I can see that I was still clinging – subconsciously – to the idea that charge must be something that persists, even if I no longer knew where to put it. That assumption had survived earlier criticism not because it was well-justified, but because it was deeply ingrained.

What finally shifted the balance was a stricter application of Occam's razor – applied not to equations, but to ontological commitments. If charge is inseparable from a specific physical organization (of motion, phase, and localization), then insisting that it must survive the dissolution of that organization is not conservative reasoning. It is surplus.

This led, reluctantly but unavoidably, to a provisional reformulation: perhaps charge is not a substance that must "go somewhere," but a mode of organization that ceases to exist when the organization itself dissolves. This idea is not offered as a new metaphysical doctrine. On the contrary, it emerged as a refusal to introduce additional entities whose only role would be to save intuition.

The revised paper therefore appears in two parts. The January version is preserved intact, as a record of where the reasoning stood at that time. The new December revision does not correct it so much as re-read it under harsher criteria of conceptual economy. Several distinctions – including the boson–fermion divide – remain descriptively useful, but are relieved of explanatory burdens they were never meant to carry.

As before, no final answers are claimed. The ontological and philosophical implications are intentionally left for the reader – real or imaginary – to judge. The role of AI in this process was not to supply insight, but to apply relentless pressure against conceptual inertia. Any logical errors or unwarranted commitments that remain are mine alone, even if much of the textual consistency was produced by artificial means.

We could, perhaps, stop here as well.

But I have learned to be suspicious of that feeling. When a question keeps knocking, it is usually because something unnecessary is still being held onto – and is asking to be let go.

Re-reading What We Already Know

On PDG data, big science, and why simplicity still matters

For reasons I still find slightly amusing (it is better to be amused than annoyed, isn't it?), old blog posts here (readingfeynman.org) or early papers on platforms such as vixra.org and academia.edu periodically resurface in "top reads" lists – sometimes many years after publication.

I would now qualify several of those texts as typical "angry young man" papers. However, I still consider most of their core claims to be true. And the papers – as mentioned above – still resonate with readers, even if I now take some distance from how they were written and framed.

That tells me two things. First, there is still genuine interest in careful, foundational thinking about physics. Second, the web (and increasingly AI agents crawling it) has a habit of flattening intellectual trajectories into caricatures: mainstream or outsider, orthodox or heretic.

I have looked at those caricatures of me, and I want to be very clear about where I stand.

1. I am deeply mainstream in one crucial sense: I trust measurements. I trust large-scale experimental infrastructure. I trust the Particle Data Group (PDG), CERN, and the decades of work that went into producing the numbers we now take for granted. I am not hostile to "big science" – on the contrary, I consider projects like CERN or ITER to be among the most impressive collective achievements of modern civilization. If society is going to spend large sums of money on something, I much prefer it to be on instruments that extend human knowledge rather than on instruments designed to destroy.

2. At the same time, I am comfortable being an outsider: I do not believe that theoretical sophistication excuses us from repeatedly asking what is actually grounded in experiment, and what is added later as interpretive scaffolding.

These two positions are not contradictory. Historically, they have gone together.

Think of Maxwell, who unified electric and magnetic phenomena not by adding complexity, but by simplifying and re-ordering – using mathematical advances – what was already known. Think of Lorentz and Einstein, who showed that gravitation need not be treated as a force at all. Think of Schrödinger and Dirac, who demonstrated that the same wave equations could describe light-like as well as matter-like phenomena without reifying every mathematical symbol into a physical object.

Progress, more often than not, comes from simplifying, not from proliferating entities.


A Minimal Experimental Core

That is the spirit in which I recently published a new working paper on ResearchGate:
"Re-reading PDG particle listings through a Minimal Experimental Core (MEC)".

The idea is almost embarrassingly simple. Take PDG particle listings – the most mainstream source imaginable – and re-present them using only quantities that are directly observable:

  • rest energy,
  • lifetime,
  • electric charge,
  • magnetic moment where available,
  • branching ratios understood as empirical event frequencies.

What I deliberately leave out at the primary level are non-observable quantum numbers and symmetry labels that require additional theoretical assumptions to interpret. Not because they are "wrong", but because they are interpretive rather than measured.

The result is not an alternative theory. It is a different ordering of the same facts. And that re-ordering is surprisingly instructive.

When one looks at leptons, pions, and kaons in this way, certain patterns become obvious long before any model is invoked: differences in stability, sharp asymmetries in branching ratios, and cases where phase space alone clearly does not determine outcomes. None of this is new – but seeing it without the usual conceptual overlays changes how one thinks about explanation.
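To make the kind of re-ordering I mean tangible, here is a minimal sketch in code. It is my own toy illustration, not the tables of the working paper: the class and field names are made up, and the numbers are rounded, PDG-style values that should be checked against the current listings before being used for anything serious.

```python
from dataclasses import dataclass, field

@dataclass
class MECEntry:
    """One row of a Minimal Experimental Core: directly observable quantities only."""
    name: str
    rest_energy_MeV: float                 # what the mass measurement actually delivers
    mean_lifetime_s: float | None          # None = stable on every observed timescale
    charge_e: int                          # electric charge in units of the elementary charge
    branching: dict[str, float] = field(default_factory=dict)  # decay modes as empirical event frequencies
    magnetic_moment: float | None = None   # where available (units as reported)

# Illustrative entries (rounded values; check the PDG listings):
TABLE = [
    MECEntry("e-",  0.511,  None,     -1),
    MECEntry("mu-", 105.66, 2.197e-6, -1, {"e- nubar_e nu_mu": 1.00}),
    MECEntry("pi+", 139.57, 2.603e-8, +1, {"mu+ nu_mu": 0.99988, "e+ nu_e": 1.2e-4}),
    MECEntry("pi0", 134.98, 8.5e-17,   0, {"gamma gamma": 0.988, "e+ e- gamma": 0.012}),
]

# The "different ordering of the same facts" is then literally a sort on observables,
# e.g. by stability (lifetime), with no quantum numbers invoked anywhere:
for p in sorted(TABLE, key=lambda p: p.mean_lifetime_s or float("inf")):
    tau = p.mean_lifetime_s if p.mean_lifetime_s is not None else "stable"
    print(f"{p.name:4s}  E0 = {p.rest_energy_MeV:7.3f} MeV   lifetime = {tau}")
```

Nothing in this little table knows about isospin, strangeness, or parity – and yet the stability hierarchy and the branching asymmetries are already staring at you.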


On big machines and global context

There is another reason I care about this kind of work.

We are entering a period in which fewer and fewer actors can afford to build the next generation of large experimental facilities. Europe (through CERN) and the United States remain central producers of high-quality collider and detector data. China, for geopolitical and economic reasons, may or may not build its own next "big thing" – and if it doesn't, it will have to be content, like the rest of the world, with the data already produced.

That reality makes something very clear: we will spend the coming decades re-reading existing data. Carefully. Repeatedly. From new angles.

In that context, methodological clarity is not a luxury. It is a necessity.


AI, co-thinking, and intellectual hygiene

This brings me to one last point.

The paper I mentioned was written in close AI–HI co-thinking. I am not shy about that. Used properly, AI is not a generator of answers but a powerful tool for enforcing intellectual hygiene: forcing one to clarify terms, separate observation from explanation, and resist the temptation to smuggle assumptions into language.

If some AI systems currently reduce my online presence to that of a "lonely outlier", then the best response is not complaint, but better signal: careful writing, explicit methodology, and visible alignment with the experimental foundations of physics.

That is what this work is meant to be.

Not a provocation.
Not a manifesto.
Just a careful re-reading of what we already know – and an invitation to do so again, together.

Making Sense of What We Already Know...

Living Between Jobs and Life: AI, CERN, and Making Sense of What We Already Know

For decades (all of my life, basically :-)), I've lived with a quiet tension. On the one hand, there is the job: institutions, projects, deliverables, milestones, and what have you... On the other hand, there is life: curiosity, dissatisfaction, and the persistent feeling that something fundamental is still missing in how we understand the physical world. Let me refer to the latter as "the slow, careful machinery of modern science." 🙂

These two are not the same – obviously – and pretending they are has done physics no favors (think of geniuses like Solvay, Edison or Tesla here: they were considered to be 'only engineers', right? :-/).

Jobs optimize. Life explores.

Large scientific institutions are built to do one thing extremely well: reduce uncertainty in controlled, incremental ways. That is not a criticism; it is a necessity when experiments cost billions, span decades, and depend on political and public trust. But the price of that optimization is that ontological questions – questions about what really exists – are often postponed, softened, or quietly avoided.

And now we find ourselves in a new historical moment.


The Collider Pause Is Not a Crisis – It's a Signal

Recent reports that China is slowing down plans for a next-generation circular collider are not shocking. If anything, they reflect a broader reality:

For the next 40–50 years, we are likely to work primarily with the experimental data we already have.

That includes data from CERN that has only relatively recently been made fully accessible to the wider scientific community.

This is not stagnation. It is a change of phase.

For decades, theoretical physics could lean on an implicit promise: the next machine will decide. Higher energies, larger datasets, finer resolution – always just one more accelerator away. That promise is now on pause.

Which means something important:

We can no longer postpone understanding by outsourcing it to future experiments.


Why CERN Cannot Do What Individuals Can

CERN is a collective of extraordinarily bright individuals. But there is a crucial distinction:

A collective of intelligent people is not an intelligent agent.

CERN is not designed to believe an ontology. It is designed to:

  • build and operate machines of unprecedented complexity,
  • produce robust, defensible measurements,
  • maintain continuity over decades,
  • justify public funding across political cycles.

Ontology – explicit commitments about what exists and what does not – is structurally dangerous to that mission. Not because it is wrong, but because it destabilizes consensus.

Within a collective:

  • someone's PhD depends on a framework,
  • someone's detector was designed for a specific ontology,
  • someone's grant proposal assumes a given language,
  • someone's career cannot absorb "maybe the foundations are wrong."

So even when many individuals privately feel conceptual discomfort, the group-level behavior converges to:
"Let's wait for more data."

That is not cowardice. It is inevitability.


We Are Drowning in Data, Starving for Meaning

The irony is that we are not short on data at all.

We have:

  • precision measurements refined to extraordinary accuracy,
  • anomalies that never quite go away,
  • models that work operationally but resist interpretation,
  • concepts (mass, spin, charge, probability) that are mathematically precise yet ontologically vague.

Quantum mechanics works. That is not in dispute.
What remains unresolved is what it means.

This is not a failure of experiment.
It is a failure of sense-making.

And sense-making has never been an institutional strength.


Where AI Actually Fits (and Where It Doesn't)

I want to be explicit: I still have a long way to go in how I use AI – intellectually, methodologically, and ethically.

AI is not an oracle.
It does not "solve" physics.
It does not replace belief, responsibility, or judgment.

But it changes something fundamental.

AI allows us to:

  • re-analyze vast datasets without institutional friction,
  • explore radical ontological assumptions without social penalty,
  • apply sustained logical pressure without ego,
  • revisit old experimental results with fresh conceptual frames.

In that sense, AI is not the author of new physics – it is a furnace.

It does not tell us what to believe.
It forces us to confront the consequences of what we choose to believe.


Making Sense of What We Already Know

The most exciting prospect is not that AI will invent new theories out of thin air.

It is that AI may help us finally make sense of experimental data that has been sitting in plain sight for decades.

Now that CERN data is increasingly public, the bottleneck is no longer measurement. It is interpretation.

AI can help:

  • expose hidden assumptions in standard models,
  • test radical but coherent ontologies against known data,
  • separate what is measured from how we talk about it,
  • revisit old results without institutional inertia.

This does not guarantee progress – but it makes honest failure possible. And honest failure is far more valuable than elegant confusion.


Between Institutions and Insight

This is not an AI-versus-human story.

It is a human-with-tools story.

Institutions will continue to do what they do best: build machines, refine measurements, and preserve continuity. That work is indispensable.

But understanding – especially ontological understanding – has always emerged elsewhere:

  • in long pauses,
  • in unfashionable questions,
  • in uncomfortable reinterpretations of existing facts.

We are entering such a pause now.


A Quiet Optimism

I do not claim to have answers.
I do not claim AI will magically deliver them.
I do not even claim my current ideas will survive serious scrutiny.

What I do believe is this:

We finally have the tools – and the historical conditions – to think more honestly about what we already know.

That is not a revolution.
It is something slower, harder, and ultimately more human.

And if AI helps us do that – not by replacing us, but by challenging us – then it may turn out to be one of the most quietly transformative tools science has ever had.

Not because it solved physics.

But because it helped us start understanding it again.

The Corridor: How Humans and AI Learn to Think Together

A different kind of project – and one I did not expect to publish...

Over the past months, I have been in long-form dialogue with an AI system (ChatGPT 5.1 – "Iggy" in our exchanges). What began as occasional conversations gradually turned into something more structured: a genuine exploration of how humans and AI think together.

The result is now online as a working manuscript on ResearchGate:

👉 The Corridor: How Humans and AI Learn to Think Together.

This is not an AI-generated book in the usual sense, and certainly not a manifesto. I think of it as an experiment in hybrid (AI + HI) reasoning: a human's intuition interacting with an AI's structural coherence, each shaping the other. The book tries to map the very "corridor" where that collaboration becomes productive.

Whether you think of AI as a tool, a partner, or something entirely different, one thing is becoming clear: the quality of our future conversations will determine the quality of our decisions. This manuscript is simply one attempt to understand what that future dialogue might look like.

For those interested in the philosophy of intelligence, the sociology of science, or the emerging dynamics of human–AI collaboration – I hope you find something useful in it.

🧭 From Strangeness to Symbolism: Why Meaning Still Matters in Science

My interest in quantum theory didn't come from textbooks. It came from a thirst for understanding – not just of electrons or fields, but of ourselves, our systems, and why we believe what we believe. That same motivation led me to write a recent article on LinkedIn questioning how the Nobel Prize system sometimes rewards storylines over substance. It's not a rejection of science – it's a plea to do it better.

This post extends that plea. It argues that motion – not metaphor – is what grounds our models. That structure is more than math. And that if we're serious about understanding this universe, we should stop dressing up ignorance as elegance. Physics is beautiful enough without the mystery.

Indeed, in a world increasingly shaped by abstraction – in physics, AI, and even ethics – it's worth asking a simple but profound question: when did we stop trying to understand reality, and start rewarding the stories we are being told about it?

🧪 The Case of Physics: From Motion to Metaphor

Modern physics is rich in predictive power but poor in conceptual clarity. Nobel Prizes have gone to ideas like "strangeness" and "charm," terms that describe particles not by what they are, but by how they fail to fit existing models.

Instead of modeling physical reality, we classify its deviations. We multiply quantum numbers like priests multiplying categories of angels – and in doing so, we obscure what is physically happening.

But it doesn't have to be this way.

In our recent work on realQM – a realist approach to quantum mechanics – we return to motion. Particles aren't metaphysical entities. They're closed structures of oscillating charge and field. Stability isn't imposed; it emerges. And instability? It's just geometry breaking down – not magic, not mystery.

No need for 'charm'. Just coherence.


🧠 Intelligence as Emergence – Not Essence

This view of motion and closure doesn't just apply to electrons. It applies to neurons, too.

We've argued elsewhere that intelligence is not an essence, not a divine spark or unique trait of Homo sapiens. It is a response – an emergent property of complex systems navigating unstable environments.

Evolution didn't reward cleverness for its own sake. It rewarded adaptability. Intelligence emerged because it helped life survive disequilibrium.

Seen this way, AI is not "becoming like us." It's doing what all intelligent systems do: forming patterns, learning from interaction, and trying to persist in a changing world. Whether silicon-based or carbon-based, it's the same story: structure meets feedback, and meaning begins to form.


🌍 Ethics, Society, and the Geometry of Meaning

Just as physics replaced fields with symbolic formalism, and biology replaced function with genetic determinism, society often replaces meaning with signaling.

We reward declarations over deliberation. Slogans over structures. And, yes, sometimes we even award Nobel Prizes to stories rather than truths.

But what if meaning, like mass or motion, is not an external prescription – but an emergent resonance between system and context?

  • Ethics is not a code. It's a geometry of consequences.
  • Intelligence is not a trait. It's a structure that closes upon itself through feedback.
  • Reality is not a theory. It's a pattern in motion, stabilized by conservation, disrupted by noise.

If we understand this, we stop looking for final answers – and start designing better questions.


✍️ Toward a Science of Meaning

What unifies all this is not ideology, but clarity. Not mysticism, but motion. Not inflation of terms, but conservation of sense.

In physics: we reclaim conservation as geometry.
In intelligence: we see mind as emergent structure.
In ethics: we trace meaning as interaction, not decree.

This is the work ahead: not just smarter machines or deeper theories – but a new simplicity. One that returns to motion, closure, and coherence as the roots of all we seek to know.

Meaning, after all, is not what we say.
It's what remains when structure holds – and when it fails.

How I Co-Wrote a Quantum Physics Booklet with an AI – And Learned Something

In June 2025, I published a short booklet titled
A Realist Take on Quantum Theory – or the Shortest Introduction Ever.
📘 ResearchGate link

It's just under 15 pages, but it distills over a decade of work – and a growing collaboration with ChatGPT – into a clean, consistent narrative: electrons as circulating charges, wavefunctions as cyclical descriptors, and action as the true guide to quantum logic.
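For readers who want that one-sentence summary as formulas: the ring-current relations the booklet leans on are essentially the following (quoted from memory and in one common convention – authors differ on factors of two for the frequency and radius, so treat this as orientation rather than as the booklet's exact derivation):

```latex
% Electron ring current, one common convention (orientation only):
\hbar\,\omega = m_e c^2 \;\Rightarrow\; \omega = \frac{m_e c^2}{\hbar} \approx 7.8\times 10^{20}\ \mathrm{rad/s},
\qquad
a = \frac{c}{\omega} = \frac{\hbar}{m_e c} \approx 3.86\times 10^{-13}\ \mathrm{m},
\qquad
\mu = I\,\pi a^2 = \frac{q_e c}{2\pi a}\,\pi a^2 = \frac{q_e \hbar}{2 m_e}\ (\text{the Bohr magneton}),
\qquad
E\,T = m_e c^2 \cdot \frac{2\pi}{\omega} = h .
```

The last identity is, to my mind, the cleanest way of seeing why action rather than energy does the guiding: one full cycle of the oscillation packs exactly one unit h.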

We didn't invent new equations. We reinterpreted existing ones – Schrödinger, Dirac, Klein–Gordon – through a realist lens grounded in energy cycles, geometry, and structured motion. What made this possible?

  • Memory: The AI reminded me of arguments I had made years earlier, even when I'd forgotten them.
  • Logic: It flagged weak spots, inconsistencies, and unclear transitions.
  • Humility: It stayed patient, never arrogant – helping me say what I already knew, but more clearly.
  • Respect: It never erased my voice. It helped me find it again.

The booklet is part of a broader project I call realQM. It's an attempt to rescue quantum theory from the metaphorical language that's haunted it since Bohr and Heisenberg – and bring it back to geometry, field theory, and physical intuition. If you've ever felt quantum physics was made deliberately obscure, this might be your antidote.

🧠 Sometimes, passing the Turing test isn't about being fooled. It's about being helped.

P.S. Since publishing that booklet, the collaboration took another step forward. We turned our attention to high-energy reactions and decay processes – asking how a realist, geometry-based interpretation of quantum mechanics (realQM) might reframe our understanding of unstable particles. Rather than invent new quantum numbers (like strangeness or charm), we explored how structural breakdowns – non-integrable motion, phase drift, and vector misalignment – could explain decay within the classical conservation laws of energy and momentum. That project became The Geometry of Stability and Instability, a kind of realQM manifesto. Have a look at it if you want to dive deeper. 🙂

🔬 When the Field is a Memory: Notes from a Human–Machine Collaboration

Why is the field around an electron so smooth?

Physicists have long accepted that the electrostatic potential of an electron is spherically symmetric and continuous – the classic Coulomb field. But what if the electron isn't a smeared-out distribution of charge, but a pointlike particle – one that zips around in tight loops at the speed of light, as some realist models propose?

That question became the heart of a new paper I've just published:
"The Smoothed Field: How Action Hides the Pointlike Charge"
🔗 Read it on ResearchGate

The paradox is simple: a moving point charge should create sharp, angular variations in its field – especially in the near zone. But we see none. Why?
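Before giving the paper's answer, it helps to put rough numbers on "tight loops at the speed of light". The toy calculation below is my own back-of-the-envelope sketch: it assumes the ring-current geometry of the previous post and only captures the classical averaging part of the story, not the paper's action-threshold argument.

```python
import numpy as np

# Constants (SI) and the scales of the "tight loop" picture of the electron
hbar, c = 1.054571817e-34, 2.99792458e8
me, qe, eps0 = 9.1093837015e-31, 1.602176634e-19, 8.8541878128e-12
a = hbar / (me * c)            # loop radius ~ reduced Compton wavelength ~ 3.86e-13 m
T = 2 * np.pi * a / c          # one revolution of the pointlike charge ~ 8.1e-21 s
print(f"loop radius a = {a:.3e} m,  revolution period T = {T:.3e} s")

# Classical averaging only: peak-to-peak ripple of the instantaneous Coulomb potential
# of a charge circling in the xy-plane, seen from a point on the x-axis at distance r.
def ripple(r, n=4096):
    phi = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    d = np.sqrt((r - a * np.cos(phi))**2 + (a * np.sin(phi))**2)  # charge-to-observer distance
    V = qe / (4 * np.pi * eps0 * d)
    return (V.max() - V.min()) / V.mean()

for k in (2, 5, 20, 100):
    print(f"r = {k:3d} a : instantaneous-potential ripple ~ {ripple(k * a):.2%}")
```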

The paper proposes a bold but elegant answer: those field fluctuations exist only in theory – not in reality – because they fail to cross a deeper threshold: the Planck quantum of action. In this view, the electromagnetic field is not a primitive substance, but a memory of motion – smooth not because the charge is, but because reality itself suppresses anything that doesn't amount to at least ℏ of action.


🤖 A Word on Collaboration

This paper wouldn't have come together without a very 21st-century kind of co-author: ChatGPT-4, OpenAI's conversational AI. I've used it extensively over the past year – not just to polish wording, but to test logic, rewrite equations, and even push philosophical boundaries.

In this case, the collaboration evolved into something more: the AI helped me reconstruct the paper's internal logic, modernize its presentation, and clarify its foundational claims – especially regarding how action, not energy alone, sets the boundary for what is real.

The authorship note in the paper describes this in more detail. It's not ghostwriting. It's not outsourcing. It's something else: a hybrid mode of thinking, where a human researcher and a reasoning engine converge toward clarity.


🧭 Why It Matters

This paper doesn't claim to overthrow QED, or replace the Standard Model. But it does offer something rare: a realist, geometric interpretation of how smooth fields emerge from discrete sources – without relying on metaphysical constructs like field quantization or virtual particles.

If you're tired of the "shut up and calculate" advice, and truly curious about how action, motion, and meaning intersect in the foundations of physics – this one's for you.

And if you're wondering what it's like to co-author something with a machine – this is one trace of that, too.

Prometheus gave fire. Maybe this is a spark.

Beautiful Blind Nonsense

I didn't plan to write this short article or blog post. But as often happens these days, a comment thread on LinkedIn nudged me into it – or rather, into a response that became this article (which I also put on LinkedIn).

Someone posted a bold, poetic claim about "mass being memory," "resonant light shells," and "standing waves of curved time." They offered a graphic spiraling toward meaning, followed by the words: "This isn't metaphysics. It's measurable."

I asked politely:
"Interesting. Article, please? How do you get these numbers?"

The response: a full PDF of a "Unified Field Theory" relying on golden-ratio spirals, new universal constants, and reinterpretations of Planck's constant. I read it. I sighed. And I asked ChatGPT a simple question:

"Why is there so much elegant nonsense being published lately – and does AI help generate it?"

The answer that followed was articulate, clear, and surprisingly quotable. So I polished it slightly, added some structure, and decided: this deserves to be an article in its own right. So here it is.

Beautiful, but Blind: How AI Amplifies Both Insight and Illusion

In recent years, a new kind of scientific-sounding poetry has flooded our screens – elegant diagrams, golden spirals, unified field manifestos. Many are written not by physicists, but with the help of AI.

And therein lies the paradox: AI doesn't know when it's producing nonsense.

🤖 Pattern without Understanding

Large language models like ChatGPT or Grok are trained on enormous text corpora. They are experts at mimicking patterns – but they lack an internal model of truth.
So if you ask them to expand on "curved time as the field of God," they will.

Not because it's true. But because it's linguistically plausible.

🎼 The Seductive Surface of Language

AI is disarmingly good at rhetorical coherence:

  • Sentences flow logically.
  • Equations are beautifully formatted.
  • Metaphors bridge physics, poetry, and philosophy.

This surface fluency can be dangerously persuasive – especially when applied to concepts that are vague, untestable, or metaphysically confused.

🧪 The Missing Ingredient: Constraint

Real science is not just elegance – it's constraint:

  • Equations must be testable.
  • Constants must be derivable or measurable.
  • Theories must make falsifiable predictions.

AI doesn't impose those constraints on its own. It needs a guide.

🧭 The Human Role: Resonance and Resistance

Used carelessly, AI can generate hyper-coherent gibberish. But used wisely – by someone trained in reasoning, skepticism, and clarity – it becomes a powerful tool:

  • To sharpen ideas.
  • To test coherence.
  • To contrast metaphor with mechanism.

In the end, AI reflects our inputs.
It doesn't distinguish between light and noise – unless we do.

The ultimate proton model?

Today I made a major step towards a very different Zitterbewegung model of a proton. By different, I mean different from the usual toroidal or helical model(s). I had a first version of this paper, but the hyperlink gives you the updated paper. The update is small but very important: I checked all the formulas with ChatGPT and, hence, consider that confirmation that I am on the right track. To my surprise, ChatGPT first fed me the wrong formula for an orbital frequency. Because I thought it could not be wrong on such simple matters, I asked it to check and double-check. It came up with rather convincing geometrical explanations, but I finally found an error in its reasoning, and the old formula from an online engineering textbook turned out to be correct.

In any case, I now have a sparring partner – ChatGPT o1 – to further develop the model that we finally settled on. That is a major breakthrough in this realistic interpretation of quantum theory and particle models that I have been trying to develop: the electron model is fine, and so now all that is left is this proton model. And then, of course, a model for a neutron or the deuteron nucleus. That will probably be a retirement project, or something for my next life. 🙂

Post scriptum: I followed up. "A theory's value lies in its utility and ability to explain phenomena, regardless of whether it's mainstream or not." That's ChatGPT's conclusion after various explorations and chats with it over the past few weeks: https://lnkd.in/ekAAbvwc. I think I tried to push its limits when discussing problems in physics, leading it to make a rather remarkable distinction between "its" perspective and mine (see point 6 of Annex I of https://lnkd.in/eFVAyHn8), but – frankly – it may have no limits. As far as I can see, ChatGPT-o1 is truly amazing: sheer logic. 🙂 #AI #ChatGPT #theoryofreality

Using AI to find the equations of motion for my Zitterbewegung model of a proton?

Pre-scriptum (the day after, 9/11): I woke up this morning and thought: all I need to do is to prove the angular velocity is a constant for my model to work. So I did that, and it works (see my Bamboo notes below and https://www.desmos.com/3d/k1vargdjcc). This trajectory is a nice yin-yang trajectory (I am thinking about someone in Taiwan here, who contacted me recently with a model involving yin-yang symbolism, so I like that). I also packed it into yet another ResearchGate paper (link here: An Equation of Motion for the Zitterbewegung proton), which is much more precise and – hopefully – more convincing than the video.

For this kind of thinking or problem solving, I guess I am still better than ChatGPT-4. 🙂 ChatGPT-4 did pass 'my' Turing test on intelligence, however. And that is quite something. I write about the ouch! factor in my recent LinkedIn article on AI. Yesterday, I definitely had such an ouch! feeling (gentle – just in the sense that I thought ChatGPT-4 was smarter than me on this too), but so it's OK now. 🙂 I win. A nice thing about this simple calculation is that this SQRT(2) factor pops up: it must be the same SQRT(2) factor that is related to the magnetic moment of a proton in my proton radius derivation, although I am not yet sure why and how exactly.

I did a quick video on it on YouTube: https://www.youtube.com/watch?v=-HnaLG8bY6o. Unfortunately, I cannot extract a transcript from it for ChatGPT-4. However, I had a quick conversation with it on what remains to be done, and that is to calculate the magnetic moment of this 3D Lissajous loop (it will depend on the orientation, but we just need it in one direction: where it is strongest, because that is how it will line up in a Penning trap or whatever device that is/was used to determine its value). I asked ChatGPT how it could possibly help with these calculations, mentioning the Biot-Savart law and the 3D integrals this would imply. This was ChatGPT's answer:

"Regarding the Biot-Savart law and calculating the magnetic moment for such a complex 3D current loop, you're correct that this involves intricate 3D integrals. I can certainly help set up or guide through parts of the calculation, especially in formulating the integral and explaining how to approach the problem. If you share the specifics of your current loop's geometry and the required variables, we can work through how to structure the integrals needed for the magnetic moment calculation. Feel free to provide details if you want to give this a try!"

AI is, clearly, not what it was even just a year ago: it is much better now. As for me, I will let the problem rest now. 🙂
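Before letting it rest completely, though, let me note for the record what the structure of that calculation looks like in practice. The parametrization below is a deliberate placeholder (it is not the trajectory of my model, and the radius and frequency are arbitrary numbers); the only point is the bookkeeping m = (q/2T) ∮ r × v dt.

```python
import numpy as np

# Time-averaged magnetic moment of a closed trajectory of a single point charge:
#   m = (q / 2T) * integral over one period of (r x v) dt
q = 1.602176634e-19        # elementary charge [C]
R = 2.0e-16                # loop radius [m] -- placeholder, roughly proton scale
w = 1.0                    # fundamental angular frequency [rad/s] -- arbitrary placeholder
T = 2 * np.pi / w

t = np.linspace(0.0, T, 20001)
theta = np.pi / 2 + (np.pi / 3) * np.sin(2 * w * t)   # polar angle oscillating at 2w (placeholder)
phi = w * t                                           # azimuth advancing at w
r = R * np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=1)
v = np.gradient(r, t, axis=0)                         # numerical velocity along the closed curve

integrand = np.cross(r, v)                            # r x v at each time step
dt = t[1] - t[0]
m = (q / (2 * T)) * (0.5 * (integrand[:-1] + integrand[1:]) * dt).sum(axis=0)  # trapezoid rule
print("magnetic moment vector [A*m^2]:", m)
print("strongest component (what a Penning-trap-style measurement lines up with):", np.abs(m).max())
```

Swapping in the actual solution of the equations of motion – and the actual frequency – is, of course, exactly the part that still has to be done.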

Pre-scriptum 2 (18 September): Please do check out the comment from the 'Physics Detective' to this post (and the Physics Detective site itself, of course): a Möbius ring for the electron and – who knows – a 3D trefoil loop for the proton might do the trick in 2D/3D as well. It would be nice if it did, because it would bring helical/toroidal/solenoid-like spin of the charge itself into the model, possibly unifying the math behind these models. Thank you for noting this, John! 🙂

Original post (9/10):

End of last year, I started to play with ChatGPT-4. Only a few times, really, because, for ordinary questions or web searches, it is not much better than your Google AI assistant or Microsoft's CoPilot: it just comes with a very pleasant style of conversation (yes). I counted and, so far, I have had only five conversations with it. However, I do admit I have a habit of continuing an old conversation (ChatGPT now uses your old conversations anyway). Also, these five conversations were good and long. It helped me greatly, for example, to get a quick overview and understanding of IT product offerings in the cloud: it makes great comparisons between the offerings of Google Cloud, Azure and AWS, not only for infrastructure but also in the area of modern AI applications. I also asked questions on other technical things, like object-oriented programming, and in this field, too, it really excels at giving you very precise and relevant answers. In fact, I now understand why many programmers turn to it to write code. 🙂

However, I was mainly interested in ChatGPT-4 because it knows how to parse (read: it can read) documents now. So it does a lot more than just scraping things on products and services from websites. To be precise, it does not just parse text: it actually 'understands' complex mathematical formulas and advanced symbols (think of differential operators here), and so that's what I wanted to use it for. Indeed, I asked it to read my papers on ResearchGate and, because I do think I should rewrite and restructure them (too many of them cover more or less the same topic), I asked it to rewrite some of them. However, I was very dissatisfied with the result, and so the versions on RG are still the versions that I wrote: no change by AI whatsoever. Just in case you wonder. 🙂

The point is this: I am not ashamed (a) to admit I did that and (b) to share the link of the conversation here, which shows that I got a bit impatient, and why and how I left that conversation last year. I simply thought ChatGPT-4 did not have a clue about what I was writing about. So... It did not pass my Turing test on this particular topic, and that was that. Again: this was about a year ago. So what happened now?

I have a bit of time on my hands currently, and so I revisited some of my research in this very weird field. In fact, I was thinking about one problem with my Zitterbewegung proton model that I can't solve. It bothers me. It is this: I am happy with my proton model – an exceedingly simple 3D elementary particle model – but I want the equations of motion for it. Yes. It is simple. It is what Dirac said: if you don't have the equations of motion, you have nothing. That's physics, and the problem with modern or mainstream quantum mechanics (the Bohr-Heisenberg interpretation, basically: the idea that probabilities cannot be further explained) is that it forgets about that. It dissatisfies not only me but anyone with common sense, I think. 😉 So I want these equations of motion. I have them for an electron (simple ring current), and now I hope to see them – one day, at least – for the proton also. [I am actually not too worried about it, because others have developed such equations of motion already. However, such models (e.g., Vassallo and Kovacs, 2023) are, usually, toroidal and, therefore, involve two frequencies rather than just one. They are also not what I'd refer to as pure mass-without-mass models. Hence, they do not look as nice – geometrically speaking – to me as my own spherical model.]

But so I do not have equations of motion for my model. This very particular problem should be rather straightforward, but it is not: 3D motion is far more complex than 2D motion. Calculating the magnetic moment of a simple ring current and calculating it for a very complex motion of charge in three dimensions are two very different things. The first is easy. The second is incredibly complicated. So, I am happy that my paper on my primitive efforts to find something better (I call it the "proton yarnball puzzle") attracted almost no readers, because it is an awful paper, indeed! It rambles about me trying this or that, and it is full of quick-and-dirty screenshots from the free online Desmos 3D graphing calculator – which I find great to quickly get a visual on something that moves around in two or in three dimensions. But so whatever I try, it explains, basically, nothing: my only real result is nothing more than a Lissajous curve in three dimensions (you can look at it on this shared Desmos link). So, yes: poor result. Bad. That is all that I have despite spending many sleepless nights and long weekends trying to come up with something better.

It is already something, of course: it confirms my intuition that trajectories involving only one frequency (unlike toroidal models) are easy to model. But it is a very far cry from doing what I should be doing, and that is to calculate how this single frequency and/or angular and tangential velocity (the zbw charge goes at the speed of light, but the direction of its travel changes, so we effectively need to think of c as a vector quantity here) translates into frequencies for the polar and azimuthal angles we would associate with a pointlike charge zipping around on a spherical surface.
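For reference, the kinematic constraint that makes this translation non-trivial is easy enough to write down (this is just standard spherical kinematics, not the equation of motion of my model):

```latex
\mathbf{r}(t) = R\,\bigl(\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta\bigr)
\quad\Rightarrow\quad
|\dot{\mathbf{r}}|^2 = R^2\left(\dot{\theta}^{\,2} + \sin^2\!\theta\,\dot{\varphi}^{\,2}\right) = c^2 .
```

The polar and azimuthal rates must therefore continuously trade off against each other to keep the zbw charge at lightspeed, which is exactly why one fundamental frequency does not split into two simple, independent angular frequencies.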

Needless to say, the necessary formulas are there: you can google them. For example, I like the presentation of dynamics by Matthew West of Illinois: clear and straightforward. But so how should I apply these to my problem? Working with those formulas is not all that easy. Something inside of me says I must incorporate the math of those Lissajous curves, but have a look at it: that's not the easiest math, either! To make a long story short, I thought that, one year later, I might try to have a chat with ChatGPT-4 again. This time around, I was very focused on this only, and I took my time to very clearly write out what I wanted it to solve for me. Have a look at the latter part of the conversation in the link to the chat. So... What was the result of this new chat with GPT-4?

It did not give me any immediate and obvious analytical solution to my question. No. I also did not expect that. There are modeling choices to be made and all that. As I mention above, simple things may not be easy. Think of modeling a three-body problem, for example: this too has no closed-form solution, and that is strange. However, while – I repeat – it was not able to generate any easy orbitals for a pointlike charge whizzing around on a surface, I was very happy with the conversation, because I noted two things that are very different from last year's conversation:

  1. ChatGPT-4 now perfectly understands what I am talking about. In fact, I accidentally pressed enter even before I finished writing something, and it perfectly anticipated what I wanted to tell it so as to make sure it would 'understand' what I was asking. So that is amazing. It is still ChatGPT-4, just like last year, but I just felt it had become much smarter. [Of course, it is also possible that I was just too impatient and too harsh with it last year, but I do not think so: ChatGPT learns, obviously, so it does get better and better at what it does.]
  2. In terms of a way forward, it did not come up with an immediate solution. I had not expected it to. But it gently explained the options (which, of course, all amount to the same thing: I need to use these dynamical equations and make some assumptions to simplify here and there, and then see what comes out of it) and, from that explanation, I again had the feeling it 'knew' what it was talking about.

So, no solution. Or, I should say: no solution yet. But I think I probably can come up with some contour of a solution, and I have a feeling ChatGPT-4 might be able to fill in the nitty-gritty of the math behind it. So I should think of presenting some options to it. One thing is sure: ChatGPT-4 has come a long way in terms of understanding abstruse or abstract theories, such as this non-mainstream interpretation of quantum mechanics: the Zitterbewegung interpretation of quantum mechanics (see the Zitter Institute for more resources). So, as far as I am concerned, it is not "non-mainstream" anymore. Moreover, it is, of course, the only right interpretation of quantum mechanics. [...] Now that I think of it, I should tell that to ChatGPT-4 too next time. 🙂

Post scriptum: For those who wonder, I shared the Desmos link with ChatGPT also, and it is not able to 'see' what is there. However, I copied the equation into the chat and, based on its knowledge of what Desmos does and does not do, it immediately 'knew' what I was trying to do. That is pretty impressive, if you ask me! I mean... How easy is it to talk to friends and acquaintances about topics like this? Pretty tough comparison, isn't it? 🙂

As for 'my' problem, I consider it solved. I invite anyone reading this to work out more detail (like the precessional motion that makes the trajectory go all over the sphere instead of just one quadrant of it). If I were a PhD student in physics, it's the topic I'd pick. But then I am not a PhD student, and I do plan to busy my mind with other things from now on, as I wrote so clearly in my other post scriptum. 🙂