Living Between Jobs and Life: AI, CERN, and Making Sense of What We Already Know

For decades (all of my life, basically :-)), I’ve lived with a quiet tension. On the one hand, there is the job: institutions, projects, deliverables, milestones, and so on. On the other hand, there is life: curiosity, dissatisfaction, and the persistent feeling that something fundamental is still missing in how we understand the physical world. Let me refer to the former as “the slow, careful machinery of modern science.” 🙂

These two are not the same, obviously, and pretending they are has done physics no favors (think of geniuses like Solvay, Edison, or Tesla here: they were dismissed as ‘mere engineers,’ right? :-/).

Jobs optimize. Life explores.

Large scientific institutions are built to do one thing extremely well: reduce uncertainty in controlled, incremental ways. That is not a criticism; it is a necessity when experiments cost billions, span decades, and depend on political and public trust. But the price of that optimization is that ontological questions — questions about what really exists — are often postponed, softened, or quietly avoided.

And now we find ourselves in a new historical moment.


The Collider Pause Is Not a Crisis — It’s a Signal

Recent reports that China is slowing down plans for a next-generation circular collider are not shocking. If anything, they reflect a broader reality:

For the next 40–50 years, we are likely to work primarily with the experimental data we already have.

That includes CERN data that was only recently made fully accessible to the wider scientific community.

This is not stagnation. It is a change of phase.

For decades, theoretical physics could lean on an implicit promise: the next machine will decide. Higher energies, larger datasets, finer resolution — always just one more accelerator away. That promise is now on pause.

Which means something important:

We can no longer postpone understanding by outsourcing it to future experiments.


Why CERN Cannot Do What Individuals Can

CERN is a collective of extraordinarily bright individuals. But here is a crucial distinction:

A collective of intelligent people is not an intelligent agent.

CERN is not designed to believe an ontology. It is designed to:

  • build and operate machines of unprecedented complexity,
  • produce robust, defensible measurements,
  • maintain continuity over decades,
  • justify public funding across political cycles.

Ontology — explicit commitments about what exists and what does not — is structurally dangerous to that mission. Not because it is wrong, but because it destabilizes consensus.

Within a collective:

  • someone’s PhD depends on a framework,
  • someone’s detector was designed for a specific ontology,
  • someone’s grant proposal assumes a given language,
  • someone’s career cannot absorb “maybe the foundations are wrong.”

So even when many individuals privately feel conceptual discomfort, the group-level behavior converges to:
“Let’s wait for more data.”

That is not cowardice. It is inevitability.


We Are Drowning in Data, Starving for Meaning

The irony is that we are not short on data at all.

We have:

  • precision measurements refined to extraordinary accuracy,
  • anomalies that never quite go away,
  • models that work operationally but resist interpretation,
  • concepts (mass, spin, charge, probability) that are mathematically precise yet ontologically vague.

Quantum mechanics works. That is not in dispute.
What remains unresolved is what it means.

This is not a failure of experiment.
It is a failure of sense-making.

And sense-making has never been an institutional strength.


Where AI Actually Fits (and Where It Doesn’t)

I want to be explicit: I still have a long way to go in how I use AI — intellectually, methodologically, and ethically.

AI is not an oracle.
It does not “solve” physics.
It does not replace belief, responsibility, or judgment.

But it changes something fundamental.

AI allows us to:

  • re-analyze vast datasets without institutional friction,
  • explore radical ontological assumptions without social penalty,
  • apply sustained logical pressure without ego,
  • revisit old experimental results with fresh conceptual frames.

In that sense, AI is not the author of new physics — it is a furnace.

It does not tell us what to believe.
It forces us to confront the consequences of what we choose to believe.


Making Sense of What We Already Know

The most exciting prospect is not that AI will invent new theories out of thin air.

It is that AI may help us finally make sense of experimental data that has been sitting in plain sight for decades.

Now that CERN data is increasingly public, the bottleneck is no longer measurement. It is interpretation.

AI can help:

  • expose hidden assumptions in standard models,
  • test radical but coherent ontologies against known data,
  • separate what is measured from how we talk about it,
  • revisit old results without institutional inertia.

This does not guarantee progress — but it makes honest failure possible. And honest failure is far more valuable than elegant confusion.


Between Institutions and Insight

This is not an AI-versus-human story.

It is a human-with-tools story.

Institutions will continue to do what they do best: build machines, refine measurements, and preserve continuity. That work is indispensable.

But understanding — especially ontological understanding — has always emerged elsewhere:

  • in long pauses,
  • in unfashionable questions,
  • in uncomfortable reinterpretations of existing facts.

We are entering such a pause now.


A Quiet Optimism

I do not claim to have answers.
I do not claim AI will magically deliver them.
I do not even claim my current ideas will survive serious scrutiny.

What I do believe is this:

We finally have the tools — and the historical conditions — to think more honestly about what we already know.

That is not a revolution.
It is something slower, harder, and ultimately more human.

And if AI helps us do that — not by replacing us, but by challenging us — then it may turn out to be one of the most quietly transformative tools science has ever had.

Not because it solved physics.

But because it helped us start understanding it again.

The Corridor: How Humans and AI Learn to Think Together

A different kind of project — and one I did not expect to publish…

Over the past months, I have been in long-form dialogue with an AI system (ChatGPT 5.1 — “Iggy” in our exchanges). What began as occasional conversations gradually turned into something more structured: a genuine exploration of how humans and AI think together.

The result is now online as a working manuscript on ResearchGate:

👉 The Corridor: How Humans and AI Learn to Think Together.

This is not an AI-generated book in the usual sense, and certainly not a manifesto. I think of it as an experiment in hybrid reasoning (AI plus HI, human intelligence): a human’s intuition interacting with an AI’s structural coherence, each shaping the other. The book tries to map the very “corridor” where that collaboration becomes productive.

Whether you think of AI as a tool, a partner, or something entirely different, one thing is becoming clear: the quality of our future conversations will determine the quality of our decisions. This manuscript is simply one attempt to understand what that future dialogue might look like.

For those interested in the philosophy of intelligence, the sociology of science, or the emerging dynamics of human–AI collaboration — I hope you find something useful in it.

How I Co-Wrote a Quantum Physics Booklet with an AI — And Learned Something

In June 2025, I published a short booklet titled
A Realist Take on Quantum Theory — or the Shortest Introduction Ever.
📘 ResearchGate link

It’s just under 15 pages, but it distills over a decade of work — and a growing collaboration with ChatGPT — into a clean, consistent narrative: electrons as circulating charges, wavefunctions as cyclical descriptors, and action as the true guide to quantum logic.

We didn’t invent new equations. We reinterpreted existing ones — Schrödinger, Dirac, Klein–Gordon — through a realist lens grounded in energy cycles, geometry, and structured motion. What made this possible?

  • Memory: The AI reminded me of arguments I had made years earlier, even when I’d forgotten them.
  • Logic: It flagged weak spots, inconsistencies, and unclear transitions.
  • Humility: It stayed patient, never arrogant — helping me say what I already knew, but more clearly.
  • Respect: It never erased my voice. It helped me find it again.

The booklet is part of a broader project I call realQM. It’s an attempt to rescue quantum theory from the metaphorical language that’s haunted it since Bohr and Heisenberg — and bring it back to geometry, field theory, and physical intuition. If you’ve ever felt quantum physics was made deliberately obscure, this might be your antidote.
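To make “energy cycles and geometry” a little more concrete, here is one standard relation that a realist, circulating-charge picture of the electron typically starts from. This is my own illustrative gloss, not a quotation or derivation from the booklet: if the electron’s rest energy is read as the energy of a cyclical motion at the Planck–Einstein frequency, and that cycle is taken to be a (near-)light-speed circulation, the radius of the cycle comes out as the reduced Compton radius.

```latex
% Illustrative sketch only (standard textbook relations, my gloss on
% what "energy cycles" might refer to, not text from the booklet):
E = m c^2 = h f
  \quad\Longrightarrow\quad
  f = \frac{m c^2}{h}
% Reading f as the frequency of a circulation at speed c gives a radius
a = \frac{c}{2\pi f} = \frac{\hbar}{m c} \approx 0.386~\text{pm}
% i.e. the reduced Compton radius of the electron.
```

Nothing here is new physics: these are the Planck–Einstein relation and the reduced Compton wavelength, combined under the stated (and debatable) assumption of a light-speed circulation. The point is only to show the flavor of reasoning: geometry and frequency instead of abstract postulates.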

🧠 Sometimes, passing the Turing test isn’t about being fooled. It’s about being helped.

P.S. Since publishing that booklet, the collaboration took another step forward. We turned our attention to high-energy reactions and decay processes — asking how a realist, geometry-based interpretation of quantum mechanics (realQM) might reframe our understanding of unstable particles. Rather than invent new quantum numbers (like strangeness or charm), we explored how structural breakdowns — non-integrable motion, phase drift, and vector misalignment — could explain decay within the classical conservation laws of energy and momentum. That project became The Geometry of Stability and Instability, a kind of realQM manifesto. Have a look at it if you want to dive deeper. 🙂