Stability First: A Personal Programme for Re-reading Particle Physics

Over the past few years, I have written a number of papers on physics—mostly exploratory, sometimes speculative, always driven by the same underlying discomfort.

Not with the results of modern physics. Those are extraordinary.
But with the ordering of its explanations.

We are very good at calculating what happens.
We are less clear about why some things persist and others do not.

That question—why stability appears where it does—has quietly guided much of my thinking. It is also the thread that ties together a new manuscript I have just published on ResearchGate:

“Manuscript v0.2 – A stability-first reinterpretation of particle physics”
👉 https://www.researchgate.net/publication/398839393_Manuscript_v02

This post is not a summary of the manuscript. It is an explanation of why I wrote it, and what kind of work it is meant to enable.


Not a new theory — a different starting point

Let me be clear from the outset.

This manuscript does not propose a new theory.
It does not challenge the empirical success of the Standard Model.
It does not attempt to replace quantum field theory or nuclear phenomenology.

What it does is much more modest—and, I hope, more durable.

It asks whether we have been starting our explanations at the wrong end.

Instead of beginning with abstract constituents and symmetries, the manuscript begins with something far more pedestrian, yet physically decisive:

Persistence in time.

Some entities last.
Some decay.
Some exist only fleetingly as resonances.
Some are stable only in the presence of others.

Those differences are not cosmetic. They shape the physical world we actually inhabit.


From electrons to nuclei: stability as a guide

The manuscript proceeds slowly and deliberately, revisiting familiar ground:

  • the electron, as an intrinsically stable mode;
  • the proton, as a geometrically stable but structurally richer object;
  • the neutron, as a metastable configuration whose stability exists only in relation;
  • the deuteron, as the simplest genuinely collective equilibrium;
  • and nuclear matter, where stability becomes distributed across many coupled degrees of freedom.

At no point is new empirical content introduced.
What changes is the interpretive emphasis.

Stability is treated not as an afterthought, but as a physical clue.


Interaction without mysticism

The same approach is applied to interaction.

Scattering and annihilation are reinterpreted not as abstract probabilistic events, but as temporary departures from equilibrium and mode conversion between matter-like and light-like regimes.

Nothing in the standard calculations is altered.
What is altered is the physical picture.

Wavefunctions remain indispensable—but they are treated as representations of physical configurations, not as substitutes for them.

Probability emerges naturally from limited access to phase, geometry, and configuration, rather than from assumed ontological randomness.


Why classification matters

The manuscript ultimately turns to the Particle Data Group catalogue.

The PDG tables are one of the great achievements of modern physics. But they are optimized for calculation, not for intuition about persistence.

The manuscript proposes a complementary, stability-first index of the same data:

  • intrinsically stable modes,
  • metastable particle modes,
  • prompt decayers,
  • resonances,
  • and context-dependent stability (such as neutrons in nuclei).

Nothing is removed.
Nothing is denied.

The proposal is simply to read the catalogue as a map of stability regimes, rather than as a flat ontology of “fundamental particles”.
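To make this concrete, here is a minimal sketch of what such a stability-first index could look like in code. The regime names follow the list above; the lifetime thresholds and the example entries are illustrative assumptions of mine, not values taken from the manuscript:

```python
from dataclasses import dataclass
from enum import Enum

class Regime(Enum):
    INTRINSICALLY_STABLE = "intrinsically stable mode"
    METASTABLE = "metastable particle mode"
    PROMPT_DECAYER = "prompt decayer"
    RESONANCE = "resonance"
    CONTEXT_DEPENDENT = "context-dependent stability"

@dataclass
class Mode:
    name: str
    lifetime_s: float  # mean lifetime in seconds; float("inf") if stable

def classify(mode: Mode, bound_in_nucleus: bool = False) -> Regime:
    """Assign a stability regime. The thresholds below are illustrative only."""
    if mode.lifetime_s == float("inf"):
        return Regime.INTRINSICALLY_STABLE
    if bound_in_nucleus:
        return Regime.CONTEXT_DEPENDENT   # e.g. the neutron inside a nucleus
    if mode.lifetime_s > 1e-10:           # long-lived enough to leave a track
        return Regime.METASTABLE
    if mode.lifetime_s > 1e-20:           # decays promptly, but a genuine particle
        return Regime.PROMPT_DECAYER
    return Regime.RESONANCE               # exists only as a peak in a cross-section

# Approximate published lifetimes, for illustration:
print(classify(Mode("electron", float("inf"))).value)        # intrinsically stable mode
print(classify(Mode("neutron (free)", 880.0)).value)         # metastable particle mode
print(classify(Mode("neutron (bound)", 880.0), True).value)  # context-dependent stability
print(classify(Mode("Delta(1232)", 5.6e-24)).value)          # resonance
```

The point is not the thresholds themselves, but the shape of the index: one axis, persistence, instead of a flat list of particles.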


A programme statement, not a conclusion

This manuscript is intentionally incomplete.

It does not contain the “real work” of re-classifying the entire PDG catalogue. That work lies ahead and will take time, iteration, and—no doubt—many corrections.

What the manuscript provides is something else:

a programme statement.

A clear declaration of what kind of questions I think are still worth asking in particle physics, and why stability—rather than constituent bookkeeping—may be the right place to ask them from.


Why I am sharing this now

I am publishing this manuscript not as a final product, but as a marker.

A marker of a line of thought I intend to pursue seriously.
A marker of a way of reading familiar physics that I believe remains underexplored.
And an invitation to discussion—especially critical discussion—on whether this stability-first perspective is useful, coherent, or ultimately untenable.

Physics progresses by calculation.
It matures by interpretation.

This manuscript belongs to the second category.

If that resonates with you, you may find the full text of interest.


Jean-Louis Van Belle
readingfeynman.org

Something Rotten in the State of QED? A Careful Look at Critique, Sociology, and the Limits of Modern Physics

Every few years, a paper comes along that stirs discomfort — not because it is wrong, but because it touches a nerve.
Oliver Consa’s Something is rotten in the state of QED is one of those papers.

It is not a technical QED calculation.
It is a polemic: a long critique of renormalization, historical shortcuts, convenient coincidences, and suspiciously good matches between theory and experiment. Consa argues that QED’s foundations were improvised, normalized, mythologized, and finally institutionalized into a polished narrative that glosses over its original cracks.

This is an attractive story.
Too attractive, perhaps.
So instead of reacting emotionally — pro or contra — I decided to dissect the argument with a bit of help.

At my request, an AI language model (“Iggy”) assisted in the analysis. Not to praise me. Not to flatter Consa. Not to perform tricks.
Simply to act as a scalpel: cold, precise, and unafraid to separate structure from rhetoric.

This post is the result.


1. What Consa gets right (and why it matters)

Let’s begin with the genuinely valuable parts of his argument.

a) Renormalization unease is legitimate

Dirac, Feynman, Dyson, and others really did express deep dissatisfaction with renormalization. “Hocus-pocus” was not a joke; it was a confession.

Early QED involved:

  • cutoff procedures pulled out of thin air,
  • infinities subtracted by fiat,
  • and the philosophical hope that “the math will work itself out later.”

It did work out later — to some extent — but the conceptual discomfort remains justified. I share that discomfort. There is something inelegant about infinities everywhere.

b) Scientific sociology is real

The post-war era centralized experimental and institutional power in a way physics had never seen. Prestige, funding, and access influenced what got published and what was ignored. Not a conspiracy — just sociology.

Consa is right to point out that real science is messier than textbook linearity.

c) The g–2 tension is real

The ongoing discrepancy between experiment and the Standard Model is not fringe. It is one of the defining questions in particle physics today.

On these points, Consa is a useful corrective:
he reminds us to stay honest about historical compromises and conceptual gaps.


2. Where Consa overreaches

But critique is one thing; accusation is another.

Consa repeatedly moves from:

“QED evolved through trial and error”
to
“QED is essentially fraud.”

This jump is unjustified.

a) Messiness ≠ manipulation

Early QED calculations were ugly. They were corrected decades later. Experiments did shift. Error bars did move.

That is simply how science evolves.

The fact that a 1947 calculation doesn’t match a 1980 value is not evidence of deceit — it is evidence of refinement. Consa collapses that distinction.

b) Ignoring the full evidence landscape

He focuses almost exclusively on:

  • the Lamb shift,
  • the electron g–2,
  • the muon g–2.

Important numbers, yes — but QED’s experimental foundation is vastly broader:

  • scattering cross-sections,
  • vacuum polarization,
  • atomic spectra,
  • collider data,
  • running of α, etc.

You cannot judge an entire theory on two or three benchmarks.

c) Underestimating theoretical structure

QED is not “fudge + diagrams.”
It is constrained by:

  • Lorentz invariance,
  • gauge symmetry,
  • locality,
  • renormalizability.

Even if we dislike the mathematical machinery, the structure is not arbitrary.

So: Consa reveals real cracks, but then paints the entire edifice as rotten.
That is unjustified.


3. A personal aside: the Zitter Institute and the danger of counter-churches

For a time, I was nominally associated with the Zitter Institute — a loosely organized group exploring alternatives to mainstream quantum theory, including zitterbewegung-based particle models.

I would now like to distance myself from it.

Not because alternative models are unworthy — quite the opposite. But because I instinctively resist the traits such groups tend to develop:

  • strong internal identity,
  • suspicion of outsiders,
  • rhetorical overreach,
  • selective reading of evidence,
  • and occasional dogmatism about their own preferred models.

If we criticize mainstream physics for ad hoc factors, we must be brutal about our own.

Alternative science is not automatically cleaner science.


4. Two emails from 2020: why good scientists can’t always engage

This brings me to two telling exchanges from 2020 with outstanding experimentalists: Prof. Randolf Pohl (muonic hydrogen) and Prof. Ashot Gasparian (PRad).

Both deserve enormous respect, and I won’t reveal the email exchanges themselves (out of respect, GDPR rules, or whatever).
Both email exchanges revealed to me the true bottleneck in modern physics — not intelligence, not malice, but sociology and bandwidth.

a) Randolf Pohl: polite skepticism, institutional gravity

Pohl was kind but firm:

  • He saw the geometric relations I proposed as numerology.
  • He questioned applicability to other particles.
  • He emphasized the conservatism of CODATA logic.

Perfectly valid.
Perfectly respectable.
But also… perfectly bound by institutional norms.

His answer was thoughtful — and constrained.
(Source: ChatGPT analysis of emails with Prof Dr Pohl)

b) Ashot Gasparian: warm support, but no bandwidth

Gasparian responded warmly:

  • “Certainly your approach and the numbers are interesting.”
  • But: “We are very busy with the next experiment.”

Also perfectly valid.
And revealing:
even curious, open-minded scientists cannot afford to explore conceptual alternatives.

Their world runs on deadlines, graduate students, collaborations, grants.

(Source: ChatGPT analysis of emails with Prof. Dr. Gasparian)

The lesson

Neither professor dismissed the ideas because they were nonsensical.
They simply had no institutional space to pursue them.

That is the quiet truth:
the bottleneck is not competence, but structure.


5. Why I now use AI as an epistemic partner

This brings me to the role of AI.

Some colleagues (including members of the Zitter Institute) look down on using AI in foundational research. They see it as cheating, or unserious, or threatening to their identity as “outsiders.”

But here is the irony:

AI is exactly the tool that can think speculatively without career risk.

An AI:

  • has no grant committee,
  • no publication pressure,
  • no academic identity to defend,
  • no fear of being wrong,
  • no need to “fit in.”

That makes it ideal for exploratory ontology-building.

Occasionally, as in the recent paper I co-wrote with Iggy — The Wonderful Theory of Light and Matter — it becomes the ideal partner:

  • human intuition + machine coherence,
  • real-space modeling without metaphysical inflation,
  • EM + relativity as a unified playground,
  • photons, electrons, protons, neutrons as geometric EM systems.

This is not a replacement for science.
It is a tool for clearing conceptual ground,
where overworked, over-constrained academic teams cannot go.


6. So… is something rotten in QED?

Yes — but not what you think.

What’s rotten is the mismatch between:

  • the myth of QED as a perfectly clean, purely elegant theory,
    and
  • the reality of improvised renormalization, historical accidents, social inertia, and conceptual discomfort.

What’s rotten is not the theory itself,
but the story we tell about it.

What’s not rotten:

  • the intelligence of the researchers,
  • the honesty of experimentalists,
  • the hard-won precision of modern measurements.

QED is extraordinary.
But it is not infallible, nor philosophically complete, nor conceptually finished.

And that is fine.

The problem is not messiness.
The problem is pretending that messiness is perfection.


7. What I propose instead

My own program — pursued slowly over many years — is simple:

  • Bring physics back to Maxwell + relativity as the foundation.
  • Build real-space geometrical models of all fundamental particles.
  • Reject unnecessary “forces” invented to patch conceptual holes.
  • Hold both mainstream and alternative models to the same standard:
    no ad hoc constants, no magic, no metaphysics.

And — unusually —
use AI as a cognitive tool, not as an oracle.

Let the machine check coherence.
Let the human set ontology.

If something emerges from the dialogue — good.
If not — also good.

But at least we will be thinking honestly again.


Conclusion

Something is rotten in the state of QED, yes —
but the rot is not fraud or conspiracy.

It is the quiet decay of intellectual honesty behind polished narratives.

The cure is not shouting louder, or forming counter-churches, or romanticizing outsider science.

The cure is precision,
clarity,
geometry,
and the courage to say:

Let’s look again — without myth, without prestige, without fear.

If AI can help with that, all the better.

Jean Louis Van Belle
(with conceptual assistance from “Iggy,” used intentionally as a scalpel rather than a sycophant)

Post-scriptum: Why the Electron–Proton Model Matters (and Why Dirac Would Nod)

A brief personal note — and a clarification that goes beyond Consa, beyond QED, and beyond academic sociology.

One of the few conceptual compasses I trust in foundational physics is a remark by Paul Dirac. Reflecting on Schrödinger’s “zitterbewegung” hypothesis, he wrote:

“One must believe in this consequence of the theory,
since other consequences which are inseparably bound up with it,
such as the law of scattering of light by an electron,
are confirmed by experiment.”

Dirac’s point is not mysticism.
It is methodological discipline:

  • If a theoretical structure has unavoidable consequences, and
  • some of those consequences match experiment precisely,
  • then even the unobservable parts of the structure deserve consideration.

This matters because the real-space electron and proton models I’ve been working on over the years — now sharpened through AI–human dialogue — meet that exact criterion.

They are not metaphors, nor numerology, nor free speculation.
They force specific, testable, non-trivial predictions:

  • a confined EM oscillation for the electron, with radius fixed by ħ/(m_e c);
  • a “photon-like” orbital speed for its point-charge center;
  • a distributed (not pointlike) charge cloud for the proton, enforced by mass ratio, stability, form factors, and magnetic moment;
  • natural emergence of the measured G_E/G_M discrepancy;
  • and a geometric explanation of deuteron binding that requires no new force.

None of these are optional.
They fall out of the internal logic of the model.
And several — electron scattering, Compton behavior, proton radius, form-factor trends — are empirically confirmed.
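As a sanity check on the first bullet above: the quoted radius is the reduced Compton wavelength, and the corresponding orbital frequency is the Compton frequency. Here is a minimal numerical check (my own script, using CODATA values from scipy.constants; it is not part of the papers):

```python
import math
from scipy.constants import hbar, m_e, c

# Reduced Compton wavelength: the electron radius quoted above.
r = hbar / (m_e * c)
print(f"r = hbar/(m_e c) = {r:.4e} m")    # ~3.8616e-13 m

# If the point charge circles that radius at (nearly) the speed of light,
# the orbital frequency is the Compton frequency f = E/h = m_e c^2 / h.
f = m_e * c**2 / (2 * math.pi * hbar)
print(f"f = m_e c^2 / h = {f:.4e} Hz")    # ~1.2356e+20 Hz
```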

Dirac’s rule applies:

When inseparable consequences match experiment,
the underlying mechanism deserves to be taken seriously —
whether or not it fits the dominant vocabulary.

This post is not the place to develop those models in detail; that will come in future pieces and papers.
But it felt important to state why I keep returning to them — and why they align with a style of reasoning that values:

  • geometry,
  • energy densities,
  • charge motion,
  • conservation laws,
  • and the 2019 SI foundations of h, e, and c
    over metaphysical categories and ad-hoc forces.

Call it minimalism.
Call it stubbornness.
Call it a refusal to multiply entities beyond necessity.

For me — and for anyone sympathetic to Dirac’s way of thinking — it is simply physics.

— JL (with “Iggy” (AI) in the wings)

A New Attempt at a Simple Theory of Light and Matter

Dear Reader,

Every now and then a question returns with enough insistence that it demands a fresh attempt at an answer. For me, that question has always been: can we make sense of fundamental physics without multiplying entities beyond necessity? Can we explain light, matter, and their interactions without inventing forces that have no clear definition, or particles whose properties feel more like placeholders than physical reality?

Today, I posted a new paper on ResearchGate that attempts to do exactly that:

“The Wonderful Theory of Light and Matter”
https://www.researchgate.net/publication/398123696_The_Wonderful_Theory_of_Light_and_Matter

It is the result of an unusual collaboration: myself and an artificial intelligence (“Iggy”), working through the conceptual structure of photons, electrons, and protons with the only tool that has ever mattered to me in physics — Occam’s Razor.

No metaphysics.
No dimensionless abstractions.
No “magical” forces.

Just:

  • electromagnetic oscillations,
  • quantized action,
  • real geometries in real space,
  • and the recognition that many so-called mysteries dissolve once we stop introducing layers that nature never asked for.

The photon is treated as a linear electromagnetic oscillation obeying the Planck–Einstein relation.
The electron as a circular oscillation, with a real radius and real angular momentum.
The proton (and later, the neutron and deuteron) as systems we must understand through charge distributions, not fictional quarks that never leave their equations.
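As a numerical anchor for that Planck–Einstein relation (E = hf), here is a short sketch; the 500 nm wavelength is just an arbitrary example of visible light:

```python
from scipy.constants import h, c, e

wavelength = 500e-9                # 500 nm: green light, an arbitrary example
f = c / wavelength                 # frequency of the electromagnetic oscillation
E = h * f                          # Planck-Einstein relation: E = h*f
print(f"f = {f:.3e} Hz, E = {E:.3e} J = {E/e:.3f} eV")   # ~2.48 eV
```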

None of this “solves physics,” of course.
But it does something useful: it clears conceptual ground.

And unexpectedly, the collaboration itself became a kind of experiment:
what happens when human intuition and machine coherence try to reason with absolute precision, without hiding behind jargon or narrative?

The result is the paper linked above.
Make of it what you will.

As always: no claims of authority.
Just exploration, clarity where possible, and honesty where clarity fails.

If the questions interest you, or if the model bothers you enough to critique it, then the paper has succeeded in its only purpose: provoking real thought.

Warm regards,
Jean Louis Van Belle

🌀 Two Annexes and a Turtle: Revisiting My Early Lectures on Quantum Physics

Over the past few weeks — and more intensely these past mornings — I’ve returned to two of my earliest texts in the Lectures on Physics series: the first on quantum behavior, and the second on probability amplitudes and quantum interference. Both have now been updated with new annexes, co-authored in dialogue with ChatGPT-4o.

This wasn’t just a consistency check. It was something more interesting: an exercise in thinking with — not through — a reasoning machine.

The first annex (Revisiting the Mystery of the Muon and Tau) tackles the open question I left hanging in Lecture I: how to interpret unstable “generations” of matter-particles like the muon and tau. In the original paper, I proposed a realist model where mass is not an intrinsic property but the result of oscillating charge or field energy — a stance that draws support from the 2019 revision of SI units, which grounded the kilogram in Planck’s constant and the speed of light. That change wasn’t just a technicality; it was a silent shift in ontology. I suspected that much at the time, but now — working through the implications with a well-tuned AI — I can state it more clearly: mass is geometry, inertia is field structure, and the difference between stable and unstable particles might be a matter of topological harmony.
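A small aside to make that point about the 2019 SI revision tangible: since h and c are now fixed exactly, any mass is in principle traceable to a frequency via m = hf/c² (combining E = hf with E = mc²). A toy computation, purely for illustration:

```python
from scipy.constants import h, c

# Since the 2019 SI revision, h is exact: mass is traceable to frequency.
# Combining E = h*f and E = m*c^2 gives the mass equivalent m = h*f/c^2.
f_cs = 9_192_631_770     # Hz: the caesium hyperfine frequency that defines the second
m = h * f_cs / c**2
print(f"m = {m:.4e} kg")  # ~6.78e-41 kg per caesium transition quantum
```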

The second annex (Interference, Identity, and the Imaginary Unit) reopens the deeper riddle at the heart of quantum mechanics: why probability amplitudes interfere at all. This annex is the child of years of irritation — visible in earlier, sharper essays I published on academia.edu — with the lazy mysticism that often surrounds “common phase factors.” The breakthrough, for me, was to fully accept the imaginary unit i not as a mathematical trick but as a rotation operator. When wavefunctions are treated as oriented field objects, not just complex scalars, interference becomes a question of geometric compatibility. Superpositions and spin behavior can then be reinterpreted as topological effects in real space. This is where I think mainstream physics got lost: it started calculating without explaining.

ChatGPT didn’t invent these ideas. But it helped me phrase them, frame them, and press further on the points I had once hesitated to formalize. That’s what I mean when I say this wasn’t just a cleanup job. It was a real act of collaboration — a rare instance of AI not just paraphrasing or predicting, but amplifying and clarifying an unfinished line of human reasoning.

Both revised papers are now live on ResearchGate:

They mark, I think, a modest turning point. From theory and calculation toward something closer to explanation.

And yes — for those following the philosophical side of this project: we did also try to capture all of that in a four-panel comic involving Diogenes, a turtle, and Zeno’s paradox. But that, like all things cartooned by AI, is still a work in progress. 🙂

Post Scriptum (24 June 2025): When You Let the Machine Take the Pen

In the spirit of openness: there’s been one more development since publishing the two annexes above.

Feeling I had taken my analytical skills as far as I could — especially in tackling the geometry of nuclear structure — I decided to do something different. Instead of drafting yet another paper, I asked ChatGPT to take over. Not as a ghostwriter, but as a model builder. The prompt was simple: “Do better than me.”

The result is here:
👉 ChatGPT Trying to Do Better Than a Human Researcher

It’s dense, unapologetically geometric, and proposes a full zbw-based model for the neutron and deuteron — complete with energy constraints, field equations, and a call for numerical exploration. If the earlier annexes were dialogue, this one is delegation.

I don’t know if this is the end of the physics path for me. But if it is, I’m at peace with it. Not because the mystery is gone — but because I finally believe the mystery is tractable. And that’s enough for now.

Taking Stock: Zitterbewegung, Electron Models, and the Role of AI in Thinking Clearly

Over the past few years, I’ve spent a fair amount of time exploring realist interpretations of quantum mechanics, particularly the ring-current or Zitterbewegung (zbw) model of the electron. I’ve written many posts about it here — and also tried to help promote the online “Zitter Institute”, which brings together a very interesting group of both amateur and professional researchers, as well as a rather impressive list of resources and publications that help to make sense of fundamental physics – especially theories regarding the internal structure of the electron.

The goal — or at least my goal — was (and still is) to clarify what is real and what is not in the quantum-electrodynamic zoo of concepts. That is why I try to go beyond electron models only. I think the electron model is complete for now: my most-read paper (on a physical interpretation of de Broglie’s matter-wave) settles the question not only for me but, judging by its many views, for many others as well. The paper shows how the magnetic moment of the electron, its wavefunction, and the notion of a quantized “packet of energy” can easily be grounded in Maxwell’s equations, special relativity, and geometry. They do not require speculative algebra, nor exotic ontologies.

In that light, I now feel the need to say something — brief, but honest — about where I currently stand in my research journey. It is not on the front burner right now but, yes, I am still thinking about it all. 🙂


On the term “Zitterbewegung” itself

Originally coined by Schrödinger and later mentioned by Dirac, “Zitterbewegung” translates as “trembling motion.” It was meant to capture the high-frequency internal oscillation predicted by Dirac’s wave equation.

But here lies a subtle issue: I no longer find the term entirely satisfying.

I don’t believe the motion is “trembling” in the sense of randomness or jitter. I believe it is geometrically structured, circular, and rooted in the relativistic dynamics of a massless point charge — leading to a quantized angular momentum and magnetic moment. In this view, there is nothing uncertain about it. The electron has an internal clock, not a random twitch.

So while I still value the historical connection, I now prefer to speak more plainly: an electromagnetic model of the electron, based on internal motion and structure, not spooky probabilities.


On tone and openness in scientific dialogue

Recent internal exchanges among fellow researchers have left me with mixed feelings. I remain grateful for the shared curiosity that drew us together, but I was disappointed by the tone taken toward certain outside critiques and tools.

I say this with some personal sensitivity: I still remember the skepticism I faced when I first shared my own interpretations. Papers were turned down not for technical reasons, but because I lacked the “right” institutional pedigree. I had degrees, but no physics PhD. I was an outsider.

Ridicule — especially when directed at dissent or at new voices — leaves a mark. So when I see similar reactions now, I feel compelled to say: we should be better than that.

If we believe in the integrity of our models, we should welcome critique — and rise to the occasion by clarifying, refining, or, if necessary, revising our views. Defensive posturing only weakens our case.


On the use of AI in physics

Some recent comments dismissed AI responses as irrelevant or superficial. I understand the concern. But I also believe this reaction misses the point.

I didn’t try all available platforms, but I did prompt ChatGPT, and — with the right framing — it offered a coherent and balanced answer to the question of the electron’s magnetic moment. Here’s a fragment:

“While the ‘definition’ of the intrinsic magnetic moment may be frame-invariant in the Standard Model, the observable manifestation is not. If the moment arises from internal circular motion (Zitterbewegung), then both radius and frequency are affected by boosts. Therefore, the magnetic moment, like momentum or energy, becomes frame-dependent in its effects.”

The jury is still out, of course. But AI — if guided by reason — might help us unravel what makes sense and what does not.

It is not a substitute for human thinking. But it can reflect it back to us — sometimes more clearly than we’d expect.


A final reflection

I’ll keep my older posts online, including those that reference the Zitter Institute. They reflected what I believed at the time, and I still stand by their substance.

But moving forward, I’ll continue my work independently — still fascinated by the electron, still curious about meaning and structure in quantum mechanics, but less interested in labels, echo chambers, or theoretical tribalism.

As always, I welcome criticism and dialogue. As one business management guru once said:

“None of us is as smart as all of us.” — Kenneth Blanchard

But truth and clarity come first.

Jean Louis Van Belle

The metaphysics of physics

I added one last paper to my list on ResearchGate. Its title is: What about multi-charge Zitterbewegung models? Indeed, if this local and realist interpretation of quantum mechanics is to break through, then it is logical to wonder about a generalization of a model involving only one charge: think of an electron (e.g., Consa, 2018) or proton model (e.g., Vassallo & Kovacs, 2023) here. With a generalization, we do not mean some unique general solution for all motion, but just what would result from combining 1-charge models into structures with two or more charges. [Just to be sure, we are not talking about electron orbitals here: Schrödinger’s equation models these sufficiently well. No. We are talking about the possible equations of motion of the charges in a neutron, the deuteron nucleus, and a helium-3 or helium-4 nucleus.]

So our question in this paper is this: how do we build the real world from elementary electron and proton particle models? We speculate about that using our own simplified models, which boil down to two geometrical elements: (i) the planar or 2D ring current of the zbw electron, and (ii) the three-dimensional Lissajous trajectory on a sphere which we think might make sense when modeling the orbital of the zbw charge in a proton. Both have the advantage they involve only one frequency rather than the two frequencies (or two modes of oscillation) one sees in helical or toroidal models. Why do we prefer to stick to the idea of one frequency only, even if we readily admit helical or toroidal models are far more precise in terms of generating the experimentally measured value of the magnetic moment of electrons and protons, respectively? The answer is simple: I am just an amateur and so I like to roll with very simple things when trying to tackle something difficult. 🙂
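For readers who want to see what such a trajectory looks like, here is a minimal numerical sketch of a Lissajous-type curve on a unit sphere. The parametrization (both angles driven by one base frequency, with a 2:1 ratio) is my own illustrative choice, not necessarily the exact orbit used in the paper:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative Lissajous-type trajectory on a unit sphere: both angles are
# driven by one base frequency omega; the 2:1 ratio is an arbitrary choice.
omega = 1.0
t = np.linspace(0.0, 2.0 * np.pi, 2000)
theta = omega * t          # polar angle sweep
phi = 2.0 * omega * t      # azimuthal angle, at twice the base frequency

x = np.sin(theta) * np.cos(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(theta)

ax = plt.figure().add_subplot(projection="3d")
ax.plot(x, y, z, lw=0.8)
ax.set_title("Illustrative single-base-frequency trajectory on a sphere")
plt.show()
```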

So, go and have a look at our reflections on multi-charge Zitterbewegung models – if only because we also started writing about the history of the Zitterbewegung interpretation and a few other things. To sum it up:

  1. The paper offers a brief new history of how interpretations of the new quantum physics evolved, and why I side with Schrödinger’s Zitterbewegung hypothesis: it just explains the (possible) structure of elementary particles so well.
  2. It speculates about how positive and negative charge may combine in a neutron, and then also about what a deuteron nucleus might look like.
  3. We did not get to specific suggestions for helium-3 and helium-4 nuclei because these depend on how you think about the neutron and the deuteron nucleus. However, I do spell out why and how I think the neutron plays the role it does in a nucleus: it is the glue that holds protons together (so there is no need for quark-gluon theory, I think, even if I do acknowledge the value of some triadic color scheme on top of the classical quantum numbers).
  4. Indeed, despite my aversion to the new metaphysics that crept into physics in the 1970s, I explain why the idea of some color typing (not a color charge but just an extra triadic classification of charge) might still be useful. [I secretly hope this may help me to understand why this color scheme was introduced in the 1970s, because I do not see it as anything more than mathematical factoring of matrix equations describing disequilibrium states – which may be impossible to solve.]

Have a look, even if it is only to appreciate some of the 3D images of what I think of as elementary equations of motion (I copy some below). I should do more with these images. Some art, perhaps, using OpenAI’s DALL·E image generator. Who knows: perhaps AI may, one day, solve the n-body problems I write about and, thereby, come up with the ultimate interpretation of quantum mechanics?

That sounds crazy but, from one or two conversations (with real people), it looks like I am not alone with that idea. 🙂 There are good reasons why CERN turned to AI a few years ago: for the time being, they use it to detect anomalies in the jets that come out of high-energy collisions, but – who knows? – perhaps a more advanced AI Logic Theorist programme could simplify the rather messy quark-gluon hypothesis some day?

Because I am disengaging from this field (it is mentally exhausting, and one gets stuck rather quickly), I surely hope so.

Post scriptum

A researcher I was in touch with a few years ago sent me a link to the (virtual) Zitter Institute: https://www.zitter-institute.org/. It is a network and resource center for non-mainstream physicists who successfully explored – and keep exploring, of course – local/realist interpretations of quantum mechanics by going back to Schrödinger’s original and alternative interpretation of what an electron actually is: a pointlike (but not infinitesimally small) charge orbiting around in circular motion, with:

(i) the trajectory of its motion being determined by the Planck-Einstein relation, and

(ii) an energy – given by Einstein’s mass-energy equivalence relation – which perfectly fits Wheeler’s “mass-without-mass” idea.

I started exploring Schrödinger’s hypothesis myself about ten years ago – as a full-blown alternative to the Bohr-Heisenberg interpretation of quantum mechanics (which I think of as metaphysical humbug, just like Einstein and H.A. Lorentz at the time) – and consistently blogged and published about it: here on this website, and then on viXra, Academia and, since 2020, ResearchGate. So I checked out this new site, and I see the founding members added my blog site as a resource to their project list.

[…]

I am amazingly pleased with that. I mean… My work is much simpler than that of, say, Dr. John G. Williamson (CERN/Philips Research Laboratories/Glasgow University) and Dr. Martin B. van der Mark (Philips Research Laboratories), who created the Quantum Bicycle Society (https://quicycle.com/).

So… Have a look – not at my site (I think I did not finish the work I started) but at the other resources of this new Institute: it looks like this realist and local interpretation of quantum mechanics is no longer non-mainstream… Sweet! It makes me feel the effort I put into all of this has paid off! 😉 Moreover, some of my early papers (2018-2020) are listed as useful papers to read. I think that is better than being published in some obscure journal. 🙂

I repeat: my own research interest has shifted to computer science, logic and artificial intelligence (you will see that recent papers on my RG site are all about that now). It is just so much more fun, and it also lines up better with my day job as a freelance IT project manager. So, yes, it is goodbye – but I am happy I can now refer all queries about my particle models and this grand synthesis between old and new quantum mechanics to the Zitter Institute.

It’s really nice: I have been in touch with about half of the founding members of this Institute over the past ten years (casually, or in a more sustained way while discussing this or that 2D or 3D model of an electron, proton, or neutron), and they are all great and amazing researchers because they look for truth in science and are very much aware of this weird tendency of modern-day quantum scientists turning their ideas into best-sellers perpetuating myths and mysteries. [I am not only thinking of the endless stream of books from authors like Roger Penrose (the domain for this blog was, originally, reading Penrose rather than reading Feynman) or Brian Greene here, but also of what I now think of as rather useless MIT or edX online introductions to quantum physics and quantum math.]

[…]

Looking at the website, I see the engine behind it: Dr. Oliver Consa. I was in touch with him too. He drew my attention to remarkable flip-flop articles such as Willis Lamb’s anti-photon article (an article which everyone should read, I think: unfortunately, you have to pay for it) and remarkable interviews with Freeman Dyson. Talking of the latter (whom I think of as “the Wolfgang Pauli of the third generation of quantum physicists” because he helped so many others to get a Nobel Prize – Dyson himself never got one, by the way), this is one of those interviews you should watch: just four years before he died of old age, Freeman Dyson plainly admits that QED and QFT are a totally unproductive approach: a “dead end”, as Dyson calls it.

So, yes, I am very pleased and happy. It makes me feel my sleepless nights and hard weekend work on this over the past decade have not been in vain! Paraphrasing Dyson in the above-mentioned video interview, I’d say: “It is the end of the story, and that particular illumination was a very joyful time.” 🙂

Thank you, Dr. Consa. Thank you, Dr. Vassallo, Dr. Burinskii, Dr. Meulenberg, Dr. Kovacs, and – of course – Dr. Hestenes – who single-handedly revived the Zitterbewegung interpretation of quantum mechanics in the 1990s. I am sure I forgot to mention some people. Sorry for that. I will wrap up my post here by saying a few more words about David Hestenes.

I really admire him deeply. Moving away from the topic of high-brow quantum theory, I think his efforts to reform K-12 education in math and physics are even more remarkable than the new space-time algebra (STA) he invented. I am 55 years old, so I know all about the small and pleasant burden of helping kids with math and statistics in secondary school and at university: the way teachers have to convey math and physics to kids now is plain dreadful. I hope it will get better. It has to. If the US and the EU want to keep leading in research, then STEM education (Science, Technology, Engineering, and Mathematics) needs a thorough reform. :-/

Another tainted Nobel Prize…

Last year’s (2022) Nobel Prize in Physics went to Alain Aspect, John Clauser, and Anton Zeilinger “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science.”

I did not think much of that award last year. Proving that Bell’s No-Go Theorem cannot be right? Great. Finally! I think many scientists – including Bell himself – already knew this theorem was a typical GIGO argument: garbage in, garbage out. As the young Louis de Broglie famously wrote in the introduction of his thesis: hypotheses are worth only as much as the consequences that can be deduced from them, and the consequences of Bell’s Theorem did not make much sense. As I wrote in my post on it, Bell himself did not think much of his own theorem until, of course, he got nominated for a Nobel Prize: it is a bit hard to say you got nominated for a Nobel Prize for a theory you do not believe in yourself, isn’t it? In any case, Bell’s Theorem has now been experimentally disproved. That is – without any doubt – a rather good thing. 🙂 To save the face of the Nobel committee here (why award something that disproves something you would have given an award for a few decades earlier?): Bell would have gotten a Nobel Prize too, but he died from a brain hemorrhage first, and Nobel Prizes reward the living only.

As for entanglement, I repeat what I wrote many times already: the concept of entanglement – for which these scientists got a Nobel Prize last year – is just a fancy word for the simultaneous conservation of energy, linear and angular momentum (and – if we are talking matter-particles – charge). There is ‘no spooky action at a distance’, as Einstein would derogatorily describe it when the idea was first mentioned to him. So, I do not see why a Nobel Prize should be awarded for rephrasing a rather logical outcome of photon experiments in metamathematical terms.

Finally, the Nobel Prize committee writes that this has made a significant contribution to quantum information science. I wrote a paper on the quantum computing hype, in which I basically ask this question: qubits may or may not be better devices than MOSFETs to store data – they are not, and they will probably never be – but that is not the point. How does quantum information change the two-, three- or n-valued or other rule-based logic that is inherent to the processing of information? I wish the Nobel Prize committee could be somewhat more explicit on that because, when everything is said and done, one of the objectives of the Prize is to educate the general public about the advances of science, isn’t it? :-/

However, all this ranting of mine is, of course, unimportant. We know that it took the distinguished Royal Swedish Academy of Sciences more than 15 years to even recognize the genius of an Einstein, so it was already clear then that their selection criteria were not necessarily rational. [Einstein finally got a well-deserved Nobel Prize, not for relativity theory (strangely enough: if there is one thing on which all physicists agree, it is that relativity theory is the bedrock of all of physics, isn’t it?), but for a much less-noted paper on the photoelectric effect – in 1922: 17 years after his annus mirabilis papers had made a killing not only in academic circles but in the headlines of major newspapers as well, and 10 years after a lot of fellow scientists had first nominated him for it (1910).]

Again, Mahatma Gandhi never got a Nobel Peace Prize (so Einstein should consider himself lucky to get some Nobel Prize, right?), while Ursula von der Leyen might be getting one for supporting the war with Russia, so I must remind myself of the fact that we do live in a funny world and, perhaps, we should not be trying to make sense of these rather weird historical things. 🙂

Let me turn to the main reason why I am writing this indignant post. It is this: I am utterly shocked by what Dr. John Clauser has done with his newly gained scientific prestige: he joined the CO2 coalition! For those who have never heard of it, it is a coalition of climate change deniers. A bunch of people who:

(1) vehemently deny the one and only consensus amongst all climate scientists, which is that the average temperature on Earth has risen by about two degrees Celsius since the Industrial Revolution, and

(2) say that, if climate change were real (God forbid!), then we could reverse the trend with easy geo-engineering. We just need to use directed energy or whatever to create more white clouds. If that doesn’t work, then… Well… CO2 makes trees and plants grow, so it will all sort itself out by itself.

[…]

Yes. That is, basically, what Dr. Clauser and all the other scientific advisors of this lobby group – none of whom have any credentials in the field they are criticizing (climate science) – are saying, and they say it loud and clearly. That is weird enough already. What is even weirder is that – to my surprise – a lot of people are actually buying such nonsense.

Frankly, I have not felt angry for a while, but this thing triggered an outburst of mine on YouTube, in which I state clearly what I think of Dr. Clauser and other eminent scientists who abuse their saint-like Nobel Prize status in society to deceive the general public. Watch my video rant, and think about it for yourself. Now, I am not interested in heated discussions on it: I know the basic facts. If you don’t, I listed them here. Please look at the basic graphs and measurements before you argue with me on this! To be clear on this: I will not entertain violent or emotional reactions to this post or my video. Moreover, I will delete them here on WordPress and also on my YouTube channel. Yes. For the first time in 10 years or so, I will exercise my right as a moderator of my channels, which is something I have never done before. 🙂

[…]

I will now calm down and write something about the mainstream interpretation of quantum physics again. 🙂 In fact, this morning I woke up with a joke in my head. You will probably think the joke is not very good, but then I am not a comedian and so it is what it is and you can judge for yourself. The idea is that you’d learn something from it. Perhaps. 🙂 So, here we go.

Imagine shooting practice somewhere. A soldier fires at some target with a fine gun, and then everyone looks at the spread of the hits around the bullseye. The quantum physicist says: “See: this is the Uncertainty Principle at work! What is the linear momentum of these bullets, and what is the distance to the target? Let us calculate the standard error.” The soldier looks astonished and says: “No. This gun is no good. One of the engineers should check it.” Then the drill sergeant says this: “The gun is fine. From this distance, all bullets should have hit the bullseye. You are a miserable shooter and you should really practice a lot more.” He then turns to the academic and says: “How did you get in here? I do not understand a word of what you just said and, if I do, it is of no use whatsoever. Please bugger off asap!”

This is a stupid joke, perhaps, but there is a fine philosophical point to it: uncertainty is not inherent to Nature, and it also serves no purpose whatsoever in the science of engineering or in science in general. Everything in Nature is deterministic. Statistically deterministic, but deterministic nevertheless. We may not know the initial conditions of the system, and that translates into seemingly random behavior, but if there is a pattern in that behavior (a diffraction pattern, in the case of electron or photon diffraction), then the conclusion should be that there is no such thing as metaphysical ‘uncertainty’. In fact, if you abandon that principle, then there is no point in trying to discover the laws of the Universe, is there? Because if Nature is uncertain, then there are no laws, right? 🙂

To underscore this point, I will, once again, remind you of what Heisenberg originally wrote about uncertainty. He wrote in German and distinguished three very different ideas of uncertainty:

(1) The precision of our measurements may be limited: Heisenberg originally referred to this as an Ungenauigkeit.

(2) Our measurement might disturb the position and, as such, cause the information to get lost and, as a result, introduce an uncertainty in our knowledge, but not in reality. Heisenberg originally referred to such uncertainty as an Unbestimmtheit.

(3) One may also think the uncertainty is inherent to Nature: that is what Heisenberg referred to as Ungewissheit. There is nothing in Nature – and also nothing in Heisenberg’s writings, really – that warrants the elevation of this Ungewissheit to a dogma in modern physics. Why? Because it is the equivalent of a religious conviction, like God exists or He doesn’t (both are theses we cannot prove: Ryle labeled such hypotheses as ‘category mistakes’).

Indeed, when one reads the proceedings of the Solvay Conferences of the late 1920s, 1930s and immediately after WW II (see my summary of it in https://www.researchgate.net/publication/341177799_A_brief_history_of_quantum-mechanical_ideas), then it is pretty clear that none of the first-generation quantum physicists believed in such dogma and – if they did – that they also thought what I am writing here: that it should not be part of science but part of one’s personal religious beliefs.

So, once again, I repeat that this concept of entanglement – for which John Clauser got a Nobel Prize last year – is in the same category: it is just a fancy word for the simultaneous conservation of energy, linear and angular momentum, and charge. There is ‘no spooky action at a distance’, as Einstein would derogatorily describe it when the idea was first mentioned to him.

Let me end by noting the dishonor of Nobel Prize winner John Clauser once again. Climate change is real: we are right in the middle of it, and it is going to get a lot worse before it gets any better – if it is ever going to get better (which, in my opinion, is a rather big ‘if’…). So, no matter how many Nobel Prize winners deny it, they cannot change the fact that the average temperature on Earth has already risen by about 2 degrees Celsius since 1850. The question is not: is climate change happening? No. The question now is: how do we adapt to it – and that is an urgent question – and, then, the question is: can we, perhaps, slow down the trend, and how? In short, if these scientists – from physics, the medical field, or whatever other field they excel in – are true and honest scientists, then they would do a great favor to mankind not by advocating geo-engineering schemes to reverse a trend they actually deny is there, but by helping to devise and promote practical measures that allow communities affected by natural disasters to better recover from them.

So, I’ll conclude this rant by repeating what I think of all of this. Loud and clear: John Clauser and the other scientific advisors of the CO2 coalition are a disgrace to what goes under the name of ‘science’, and this umpteenth ‘incident’ in the history of science or logical thinking makes me think that it is about time that the Royal Swedish Academy of Sciences does some serious soul-searching when, amongst the many nominations, it selects its candidates for a prestigious award like this. Alfred Nobel – one of those geniuses who regretted that his great contribution to science and technology was (also) (ab)used to increase the horrors of war – must have turned in his grave too many times by now… :-/

The End of Physics

I wrote a post with this title already, but this time I mean it in a rather personal way: my last paper – with the same title – on ResearchGate sums up rather well whatever I achieved, and also whatever I did not explore any further because time and energy are lacking: I must pay more attention to my day job nowadays. 🙂

I am happy with the RG score all of my writing generated, the rare but heartfelt compliments I got from researchers with far more credentials than myself (such as, for example, Dr. Emmanouil Markoulakis of Nikolaos, whose feedback led me to put a paper on RG with a classical explanation of the Lamb shift), various friendly but not necessarily always agreeing commentators (one of them commenting here on this post: a good man!), and, yes, the interaction on my YouTube channel. But so… Well… That is it, then! 🙂

As a farewell, I will just quote from the mentioned paper – The End of Physics (only as a science, of course) – hereunder, and I hope that will help you to do what all great scientists would want you to do, and that is to think things through for yourself. 🙂

Brussels, 22 July 2023

Bohr, Heisenberg, and other famous quantum physicists – think of Richard Feynman, John Stewart Bell, Murray Gell-Mann, and quite a few other Nobel Prize winning theorists[1] – have led us astray. They swapped a rational world view – based on classical electromagnetic theory and statistical determinism – for a mystery world in which anything is possible, but nothing is real.

They invented ‘spooky action at a distance’ (as Einstein derogatorily referred to it), for example. So, what actually explains that long-distance interaction, then? It is quite simple. There is no interaction, and so there is nothing spooky or imaginary or unreal about it: if by measuring the spin state of one photon, we also know the spin state of its twin far away, then it is – quite simply – because physical quantities such as energy and momentum (linear or angular) are conserved if nothing else interferes after the two matter- or light-particles were separated.

Plain conservation laws explain many other things that are being described as ‘plain mysteries’ in quantum physics. The truth is this: there are no miracles or mysteries: everything has a physical cause and can be explained.[2] For example, there is also nothing mysterious about the interference pattern and the trajectory of an electron going through a slit, or one of two nearby slits. An electron is pointlike, but it is not infinitesimally small: it has an internal structure which explains its wave-like properties. Likewise, Mach-Zehnder one-photon interference can easily be explained when thinking of its polarization structure: a circularly polarized photon can be split into two linearly polarized electromagnetic waves, which are photons in their own right. Everything that you have been reading about mainstream quantum physics is, perhaps, not wrong, but it is highly misleading because it is all couched in guru language and mathematical gibberish.

Why is it that mainstream physicists keep covering this up? I am not sure: it is a strange mix of historical accident and, most probably, the human desire to be original or special, or the need to mobilize money for so-called fundamental research. I also suspect there is a rather deceitful intention to hide truths about what nuclear science should be all about, and that is to understand the enormous energies packed into elementary particles.[3]

The worst of all is that none of the explanations in mainstream quantum physics actually works: mainstream theory does not have a sound theory of signal propagation, for example (click the link to my paper on that or – better, perhaps – this link to our paper on signal propagation), and Schrödinger’s hydrogen model is a model of a hypothetical atom modelling orbitals of equally hypothetical zero-spin electron pairs. Zero-spin electrons do not exist, and real-life hydrogen only has one proton at its center, and one electron orbiting around it. Schrödinger’s equation is relativistically correct – even if all mainstream physicists think it is not – but the equation includes two mistakes that cancel each other out: it confuses the effective mass of an electron in motion with its total mass[4], and the 1/2 factor which is introduced by the m = 2m_eff substitution also takes care of the doubling of the potential that is needed to make the electron orbitals come out alright.

The worst thing of all is that mainstream quantum physicists never accurately modeled what they should have modeled: the hydrogen atom as a system of a real proton and a real electron (no hypothetical, infinitesimally small and structureless spin-zero particles). If they had done that, they would also be able to explain why hydrogen atoms come in molecular H2 pairs, and they would have a better theory of why two protons need a neutron to hold together in a helium nucleus. Moreover, they would have been able to explain what a neutron actually is.[5]


[1] John Stewart Bell was nominated for a Nobel Prize, but died from a brain hemorrhage before he could accept the prize for his theorem.

[2] The world of physics – at the micro-scale – is already fascinating enough: why should we invent mysteries?

[3] We do not think these energies can be exploited any time soon. Even nuclear energy is just binding energy between protons and neutrons: a nuclear bomb does not release the energy that is packed into protons. These elementary particles survive the blast: they are the true ‘atoms’ of this world (in the Greek sense of ‘a-tom’, which means indivisible).

[4] Mass is a measure of the inertia to a change in the state of motion of an oscillating charge. We showed how this works by explaining Einstein’s mass-energy equivalence relation and clearly distinguishing the kinetic and potential energy of an electron. Feynman first models an electron in motion correctly, with an equally correct interpretation of the effective mass of an electron in motion, but then substitutes this effective mass by half the electron mass (m_eff = m/2) in an erroneous reasoning process based on the non-relativistic kinetic energy concept. The latter reasoning also leads to the widespread misconception that Schrödinger’s equation would not be relativistically correct (see the Annexes to my paper on the matter-wave). For the trick it has to do, Schrödinger’s wave equation is correct – and then I mean also relativistically correct. 🙂

[5] A neutron is unstable outside of its nucleus. We, therefore, think it acts as the glue between protons, and it must be a composite particle.

On the quantum computing hype

1. The Wikipedia article on quantum computing describes a quantum computer as “a computer that exploits quantum-mechanical phenomena.” The rest of the article then tries to explain what these quantum-mechanical phenomena actually are.

Unfortunately, the article limits itself to the mainstream interpretation of these and, therefore, suffers from what I perceive to be logical and philosophical errors. Indeed, in the realistic interpretation of quantum mechanics that I have been developing, system wavefunctions are only useful to model our own uncertainty about the system. I subscribe to Hendrik Antoon Lorentz’s judgment at the last Solvay Conference under his leadership: there is no need whatsoever to elevate indeterminism to a philosophical principle. Not in science in general, and not in quantum mechanics in particular. I, therefore, think quantum mechanics cannot offer a substantially new computing paradigm.

Of course, one may argue that, for specific problems, some kind of three- or more-valued logic – rather than the binary or Boolean true/false dichotomy on which most logic circuits are based – may come in handy. However, such logic has already been worked out, and can be accessed using appropriate programming languages. Python and the powerful mathematical tools that come with it (Pandas, NumPy and SciPy) work great with ternary logic using a {true, false, unknown} or a {-1, 0, +1} set of logical values rather than the standard {0, 1} Boolean set. The Wikipedia article on three-valued logic is worth a read and, despite the rather arcane nature of the topic, much better written than the mentioned article: have a look at how operators are used on these three-valued sets in meaningful algebras or logical models, such as that of Kleene, Priest or Lukasiewicz.
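To make that concrete: in Kleene’s strong three-valued logic over the {-1, 0, +1} set mentioned above (with -1 = false, 0 = unknown, +1 = true), negation, conjunction and disjunction reduce to a sign flip, a minimum and a maximum. A minimal sketch:

```python
# Kleene's strong three-valued logic on {-1, 0, +1}:
# -1 = false, 0 = unknown, +1 = true.

def not3(a: int) -> int:
    return -a             # negation: sign flip

def and3(a: int, b: int) -> int:
    return min(a, b)      # conjunction: the minimum of the two values

def or3(a: int, b: int) -> int:
    return max(a, b)      # disjunction: the maximum of the two values

print(and3(+1, 0))   # 0  : true AND unknown -> unknown
print(or3(-1, 0))    # 0  : false OR unknown -> unknown
print(or3(+1, 0))    # +1 : true OR unknown  -> true
print(not3(0))       # 0  : NOT unknown      -> unknown
```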

2. One may, of course, argue that, even when there is probably no such thing as a new logical quantum computing model or logic, quantum technology may offer distinct advantages when it comes to the storage of data about this or that state or, one day, lead to devices with faster clock and/or bus speeds. That appears to be a pipe dream too:

  • To keep, say, an electron in this or that spin state, one must create and maintain a steady electromagnetic field – usually one does so in a superconducting environment, which makes the actual mechanical devices used for quantum computing (qubits) look like the modern-day equivalent of Babbage’s analytical machine. In my not-so-humble view, such devices will never achieve the sheer material performance offered by current nanometer-scale MOSFETs.

  • As for bus or transmission speeds, quantum theory does not come with a new theory of charge propagation and, most importantly, is fundamentally flawed in its analysis of how signals actually propagate in, say, a lattice structure. I refer to one of my papers here (on electron propagation in a lattice), in which I deconstruct Feynman’s analysis of the concept of the free and effective mass of an electron. Hence, for long-distance transmission of signals, optical fiber cannot be beaten. For short-distance transmission of signals (say, within an electrical circuit), I refer to the above-mentioned nano-technology, which continues to revolutionize the chip industry.

Brussels, 4 July 2023

Epilogue: an Easter podcast

I have been thinking about my explanation of dark matter/energy, and I think it is sound. It solves the last asymmetry in my models, and explains it all. So, after a hiatus of two years, I bothered to make a podcast on my YouTube channel once again. It talks about everything. Literally everything!

It makes me feel my quest for understanding of matter and energy – in terms of classical concepts and measurements (as depicted below) – has ended. Perhaps I will write more but that would only be to promote the material, which should promote itself if it is any good (which I think it is).

I should, by way of conclusion, say a few final words about Feynman’s 1963 Lectures now. When everything is said and done, it is my reading of them that triggered this blog about ten years ago. I would now recommend Volumes I and II (classical physics and electromagnetic theory) – if only because they give you all the math you need to understand all of physics – but not Volume III (the lectures on quantum mechanics). They are outdated, and I do find Feynman guilty of promoting rather than explaining the hocus-pocus around all of the so-called mysteries in this special branch of physics.

Quantum mechanics is special, but I do conclude now that it can all be explained in terms of classical concepts and quantities. So, Gell-Mann’s criticism of Richard Feynman is, perhaps, correct: Mr. Feynman did, perhaps, make too many jokes – and it gets annoying because he must have known some of what he suggests does not make sense – even if I would not go as far as Gell-Mann, who says “Feynman was only concerned about himself, his ego, and his own image !” :-/

So, I would recommend my own alternative series of ‘lectures’. Not only are they easier to read, but they also embody a different spirit of writing. Science is not about you: it is about thinking for oneself and deciding what is truthful and useful, and what is not. So I will end by quoting Ludwig Boltzmann once more:

“Bring forward what is true.

Write it so that it is clear.

Defend it to your last breath.”

Ludwig Boltzmann (1844 – 1906)

Post scriptum: As for the ‘hocus-pocus’ in Feynman’s Lectures, we should, perhaps, point once again to some of our early papers on the flaws in his arguments. We effectively put our finger on the arbitrary wavefunction convention, the (false) boson-fermion dichotomy, the ‘time machine’ argument that is inherent to his explanation of the Hamiltonian, and so on. We published these things on Academia.edu before (also) putting our (later) papers on ResearchGate, so please check there for the full series. 🙂

Post scriptum (23 April 2023): Also check out this video, which was triggered by someone who thought my models amount to something like a modern aether theory, which is definitely not the case: https://www.youtube.com/watch?v=X38u2-nXoto. 🙂 I really think it is my last reflection on these topics. I need to focus on my day job, sports, family, etcetera again! 🙂

Onwards !

It has been ages since I last wrote something here. Regular work took over. I did make an effort, though, to synchronize and reorganize some stuff. And I am no longer shy about it. My stats on ResearchGate and academia.edu show that I am no longer a ‘crackpot theorist’. This is what I wrote about it on my LinkedIn account:

QUOTE

With good work-life balance now, I picked up one of my hobbies again: research into quantum theories. As for now, I only did a much-needed synchronization of papers on academia.edu and ResearchGate. When logging on to the former network (which I had not done for quite a while), I found many friendly messages. One of them was from a researcher on enzymes: “I have been studying about these particles for around four years. All of the basics. But wat are they exactly? This though inspired me… Thank u so much!” I smiled and relaxed when I read that, telling myself that all those sleepless nights I spent on this were not the waste of time and energy that most of my friends thought it would be. 🙂

Another one was even more inspiring. It was written by another ‘independent’ researcher. Nelda Evans. No further detail in her profile. From the stats, I could see that she had downloaded an older manuscript of mine (https://lnkd.in/ecRKJwxQ). This is what she wrote about it to me: “I spoke to Richard Feynman in person at the Hughes Research Lab in Malibu California in 1967 where the first pulsed laser was invented when some of the students from the UCLA Physics Dept. went to hear him. Afterward I went to talk to him and said “Dr. Feynman, I’ve learned that some unknown scientists were dissatisfied with probability as a final description of Quantum Mechanics, namely Planck, Einstein, Schrodinger, de Broglie, Bohm,…” When I finished my list he immediately said “And Feynman”. We talked about it a little, and he told me “I like what you pick on.”
My guess is that he might have told you something similar.”

That message touched me deeply, because I do feel – from reading his rather famous Lectures on Physics somewhat ‘between the lines’ – that Richard Feynman effectively knew it all, but that he, somehow, was not allowed to clearly say what it was all about. I wrote a few things about that rather strange historical bias in the interpretation of ‘uncertainty’ and other ‘metaphysical’ concepts that infiltrated the science of quantum mechanics in my last paper: https://lnkd.in/ewZBcfke.

So… Well… I am not a crackpot scientist anymore ! 🙂 The bottom-line is to always follow your instinct when trying to think clearly about some problem or some issue. We should do what Ludwig Boltzmann (1844-1906) told us to do: “Bring forward what is true. Write it so that it is clear. Defend it to your last breath.”

[…] Next ‘thing to do’, is to chat with ChatGPT about my rather straightforward theories. I want to see how ‘intelligent’ it is. I wonder where it will hit its limit in terms of ‘abstract thinking.’ The models I worked on combine advanced geometrical thinking (building ‘realistic’ particle models requires imagining ‘rotations within rotations’, among other things) and formal math (e.g. quaternion algebra). ChatGPT is excellent in both, I was told, but can it combine the two intelligently? 🙂

UNQUOTE

On we go. When the going gets tough, the tough get going. 🙂 For those who want an easy ‘introduction’ to the work (at a K-12 level of understanding of mathematics), I wrote the first pages of what could become a whole new K-12 level physics textbook. Let us see. I do want to see some interest from a publisher first. 🙂

Deep electron orbitals and the essence of quantum physics

After a long break (more than six months), I have started to engage again in a few conversations. I also looked at the 29 papers on my ResearchGate page, and I realize some of them would need to be re-written or re-packaged so as to ensure a good flow. Also, some of the approaches were more productive than others (some did not lead anywhere at all, actually), and I would need to point those out. I have been thinking about how to approach this, and I think I am going to produce an annotated version of these papers, with comments and corrections as mark-ups. Re-writing or re-structuring all of them would require too much work.

The mark-up of those papers is probably going to be based on some ‘quick-fire’ remarks (a succession of thoughts triggered by one and the same question) which came out of the conversation below, so I thank these thinkers for having kept me in the loop of a discussion I had followed but not reacted to. It is an interesting one – on the question of ‘deep electron orbitals’ (read: whether orbitals of negative charge inside of a nucleus exist and, if so, how one can model them). If one could solve that question, one would have a theoretical basis for what is referred to as low-energy nuclear reactions (LENR). That field was formerly known as cold fusion, but it got a bit of a bad name because of a number of crooks spoiling it, unfortunately.

PS: I leave the family names of my correspondents in the exchange below out so they cannot be bothered. One of them, Jerry, is a former American researcher at SLAC. Andrew – the key researcher on DEPs – is a Canadian astrophysicist, and the third one – Jean-Luc – is a rather prominent French scientist in LENR.

From: Jean Louis Van Belle
Sent: 18 November 2021 22:51
Subject: Staying engaged (5)

Oh – and needless to say, Dirac’s basic equation can, of course, be expanded using the binomial expansion – just like the relativistic energy-momentum relation – and then one can ‘cut off’ the third-, fourth-, etc.-order terms and keep the first- and second-order terms only. Perhaps it is equations like that which kept you puzzled (I should check your original emails). In any case, this way of going about energy equations for elementary particles is a bit like the procedure used in perturbation theory, in which – as Dirac complained – one randomly selects terms that seem to make sense and discards others because they do not seem to make sense. Of course, Dirac criticized perturbation theory much more severely than this – and rightly so. 😊 😊 JL
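[Note: the expansion mentioned above is easy to check symbolically. A minimal sympy sketch, with generic symbols – nothing here is specific to the exchange:]

```python
import sympy as sp

# Series expansion of the relativistic energy-momentum relation
# E = sqrt(m0^2*c^4 + p^2*c^2), cutting off the higher-order terms.
m0, c, p = sp.symbols('m0 c p', positive=True)
E = sp.sqrt(m0**2 * c**4 + p**2 * c**2)
print(E.series(p, 0, 6))
# -> m0*c**2 + p**2/(2*m0) - p**4/(8*m0**3*c**2) + O(p**6)
# Keeping the first two terms gives the rest energy plus the classical
# (non-relativistic) kinetic energy p^2/(2*m0).
```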

From: Jean Louis Van Belle
Sent: 18 November 2021 22:10
Subject: Staying engaged (4)

Also – I remember you had some questions on an energy equation – not sure which one – but I found Dirac’s basic equation (based on which he derives the ‘Dirac’ wave equation) is essentially useless because it incorporates linear momentum only. As such, it repeats de Broglie’s mistake, which is to interpret the ‘de Broglie’ wavelength as something linear. It is not: frequencies and wavelengths are orbital frequencies and orbital circumferences. So anything you would want to do with energy equations that are based on that leads nowhere – in my not-so-humble opinion, of course. To illustrate the point, compare the relativistic energy-momentum relation and Dirac’s basic equation in his Nobel Prize lecture (I hope the subscripts/superscripts get through your email system so they display correctly):

m0²c⁴ = E² − p²c² (see, for example, Feynman-I-16, formula 16-3)

Divide the above by c² and re-arrange, and you get Dirac’s equation: W²/c² − pr² − m²c² = 0 (see his 1933 Nobel Prize Lecture)

So that cannot lead anywhere. It’s why I totally discard Dirac’s wave equation (it has never yielded any practical explanation of a real-life phenomenon anyway, if I am not mistaken).

Cheers – JL

From: Jean Louis Van Belle
Sent: 18 November 2021 21:49
Subject: Staying engaged (3)

Just on ‘retarded sources’ and ‘retarded fields’ – I have actually tried to think of the ‘force mechanism’ inside of an electron or a proton (what keeps the pointlike charge in its geometric orbit around a center of mass?). I thought long and hard about some kind of model in which the charge radiates out a sub-Planck field whose ‘retarded effects’ arrive ‘just in time’ at the other side of the orbital (or at whatever other point on the orbital) so as to produce the desired ‘course correction’. I discarded it completely: I am now just happy that we have ‘reduced’ the mystery to this ‘Planck-scale quantum-mechanical oscillation’ (in 2D or 3D orbitals) without the need for an ‘aether’, or quantized spacetime, or ‘virtual particles’ actually ‘holding the thing together’.

Also, a description in terms of four-vectors (scalar and vector potential) does not immediately call for ‘retarded time’ variables and all that, so that is another reason why I think one should somehow make the jump from E-B fields to the scalar and vector potential, even if the math is hard to visualize. If we want to ‘visualize’ things, Feynman’s discussion of the ‘energy’ and ‘momentum’ flow in https://www.feynmanlectures.caltech.edu/II_27.html might make sense, because analyses in terms of Poynting vectors are relativistically correct, aren’t they? It is just an intuitive idea…

Cheers – JL

From: Jean Louis Van Belle
Sent: 18 November 2021 21:28
Subject: Staying engaged (2)

But so – in the shorter run – say, the next three to six months, I want to sort out those papers on ResearchGate. The one on de Broglie’s matter-wave (interpreting the de Broglie wavelength as the circumference of a loop rather than as a linear wavelength) is the one that gets the most downloads, and rightly so. The rest is a bit of a mess – mixing all kinds of things I tried, some of which worked, but other things did not. So I want to ‘clean’ that up… 😊 JL

From: Jean Louis Van Belle
Sent: 18 November 2021 21:21
Subject: Staying engaged…

Please do include me in the exchanges, Andrew – even if I do not react, I do read them, because I do need some temptation and distraction. As mentioned, I wanted to focus on building a credible n = p + e model (for free neutrons, but probably more focused on a Schrödinger-like D = p + e + p Platzwechsel model, because the deuteron nucleus is stable). But I will not go about it the way I studied the zbw model of the electron and proton (I believe that is sound now) – that is, by not putting in enough sleep. I want to do it slowly now. I find a lot of satisfaction in the fact that I think there is no need for complicated quantum field theories (fields are quantized, but in a rather obvious way: field oscillations – just like matter-particles – pack Planck’s quantum of (physical) action, which – depending on whether you freeze time or position as a variable – expresses itself as a discrete amount of energy or, alternatively, as a discrete amount of momentum), nor is there any need for this ‘ontologization’ of virtual field interactions (sub-Planck scale) – the quark-gluon nonsense.

Also, it makes sense to distinguish between an electromagnetic and a ‘strong’ or ‘nuclear’ force: the electron and proton have different form factors (2D versus 3D oscillations, but that is a bit of a non-relativistic shorthand for what might be the case) but, in addition, there is clearly a much stronger force at play within the proton – of the same ‘scale’ as the force that gives the muon-electron its rather enormous mass. So that is my ‘belief’, and the ‘heuristic’ models I build (a bit of ‘numerology’, according to Dr Pohl’s rather off-hand remarks) support it sufficiently for me to feel at peace about all these ‘Big Questions’.

I am also happy I figured out these inconsistencies around 720-degree symmetries (just the result of a non-rigorous application of Occam’s Razor: if you use all possible ‘signs’ in the wavefunction, then the wavefunction may represent matter as well as anti-matter particles, and this 720-degree weirdness dissolves). Finally, the kind of ‘renewed’ S-matrix programme for analyzing unstable particles (adding a transient factor to wavefunctions) makes sense to me, but even the easiest set of equations looks impossible to solve – so I may want to dig into the math of that if I feel like having endless amounts of time and energy (which I do not – but, after this cancer surgery, I know I will only die on some ‘moral’ or ‘mental’ battlefield twenty or thirty years from now – so I am optimistic).

So, in short, the DEP question does intrigue me – and you should keep me posted, but I will only look at it to see if it can help me on that deuteron model. 😊 That is the only ‘deep electron orbital’ I actually believe in. Sorry for the latter note.

Cheers – JL   

From: Andrew
Sent: 16 November 2021 19:05
To: Jean-Luc; Jerry; Jean Louis
Subject: Re: retarded potential?

Dear Jean-Louis,

Congratulations on your new position. I understand your present limitations, despite your incredible ability to be productive. They must be even worse than those imposed by my young kids and my age. Do you wish for us to not include you in our exchanges on our topic? Even with no expectation of your contributing at this point, such emails might be an unwanted temptation and distraction.

Dear Jean-Luc,

Thank you for the Wiki-Links. They are useful. I agree that the 4-vector potential should be considered. Since I am now considering the nuclear potentials as well as the deep orbits, it makes sense to consider the nuclear vector potentials to have an origin in the relativistic Coulomb potentials. I am facing this in my attempts to calculate the deep orbits from contributions to the potential energies that have a vector component, which non-rel Coulomb potentials do not have.

For example: do we include the losses in Vcb (e.g., from the binding energy BE) when we make the relativistic correction to the potential; or, how do we relativistically treat pseudo-potentials such as that of the centrifugal force? We know that, for equilibrium, the average forces must cancel. However, I’m not sure that it is possible to write out a proper expression for “A” to fit such cases.

Best regards to all,

Andrew

_ _ _

On Fri, Nov 12, 2021 at 1:42 PM Jean-Luc wrote:

Dear all,

I totally agree with the sentence of Jean-Louis, which I put in bold in his message, about the vector potential and the scalar potential being combined into a 4-vector potential A for representing the EM field in covariant formulation. So the EM representation by the 4-vector A has been very much developed, as wished by JL, in the framework of QED.

We can note the simplicity of the Lorenz gauge when written by using A: https://en.wikipedia.org/wiki/Lorenz_gauge_condition

We can see the reality of the vector potential in the Aharonov-Bohm effect: https://en.wikipedia.org/wiki/Aharonov-Bohm_effect. In fact, we can see that the vector potential contains more information than the E and B fields.

Best regards

Jean-Luc
On 12/11/2021 at 05:43, Jean Louis Van Belle wrote:

Hi All – I’ve been absent in the discussion, and will remain absent for a while. I’ve been juggling a lot of work – my regular job at the Ministry of Interior (I got an internal promotion/transfer, and am working now on police and security sector reform) plus consultancies on upcoming projects in Nepal. In addition, I am still recovering from my surgery – I got a bad flu (not C19, fortunately) and it set back my immune system, I feel. I have a bit of a holiday break now (combining the public holidays of 11 and 15 November in Belgium with some days off to bridge, so I have a rather nice super-long weekend – three in one, so to speak).

As for this thread, I feel like it is not ‘phrasing’ the discussion in the right ‘language’. Thinking of E-fields and retarded potential is thinking in terms of 3D potential, separating out space and time variables without using the ‘power’ of four-vectors (four-vector potential, and four-vector space-time). It is important to remind ourselves that we are measuring fields in continuous space and time (but, again, this is relativistic space-time – so us visualizing a 3D potential at some point in space is what it is: we visualize something because our mind needs that – wants that). The fields are discrete, however: a field oscillation packs one unit of Planck – always – and Planck’s quantum of action combines energy and momentum: we should not think of energy and momentum as truly ‘separate’ (discrete) variables, just like we should not think of space and time as truly ‘separate’ (continuous) variables.

I do not quite know what I want to say here – or how I should further work it out. I am going to re-read my papers. I think I should further develop the last one (https://www.researchgate.net/publication/351097421_The_concepts_of_charge_elementary_ring_currents_potential_potential_energy_and_field_oscillations), in which I write that the vector potential is more real than the electric field and the scalar potential. That idea should be further developed: probably it is the combined scalar and vector potential that are the ‘real’ things, not the electric and magnetic field. Hence, illustrations like the one below – in terms of discs and cones in space – probably do not go all that far in terms of ‘understanding’ what is going on… It’s just an intuition…

Cheers – JL

From: Andrew
Sent: 23 September 2021 17:17
To: Jean-Luc; Jerry; Jean Louis
Subject: retarded potential?

Dear Jean-Luc,

Because of the claim that gluons are tubal, I have been looking at the disk-shaped E-field lines of the highly-relativistic electron and comparing them to the retarded potential, which, based on timing, would seem to give a cone rather than a disk (see figure). This makes a difference when we consider a deep-orbiting electron. It even impacts the selection of the model for the impact of an electron when considering diffraction and interference.

Even if the field appears to be spreading out as a cone, the direction of the field lines is that of a disk from the retarded source. However, how does it interact with the radial field of a stationary charge?

Do you have any thoughts on the matter?

Best regards,

Andrew

_ _ _

On Thu, Sep 23, 2021 at 5:05 AM Jean-Luc wrote:

Dear Andrew, Thank you for the references. Best regards, Jean-Luc

On 18/09/2021 at 17:32, Andrew wrote:
> This might have useful thoughts concerning the question of radiation
> decay to/from EDOs.
>
> Quantum Optics: Electrons see the quantum nature of light
> Ian S. Osborne
> We know that light is both a wave and a particle, and this duality
> arises from the classical and quantum nature of electromagnetic
> excitations. Dahan et al. observed that all experiments to date in
> which light interacts with free electrons have been described with
> light considered as a wave (see the Perspective by Carbone). The
> authors present experimental evidence revealing the quantum nature of
> the interaction between photons and free electrons. They combine an
> ultrafast transmission electron microscope with a silicon-photonic
> nanostructure that confines and strengthens the interaction between
> the light and the electrons. The “quantum” statistics of the photons
> are imprinted onto the propagating electrons and are seen directly in
> their energy spectrum.
> Science, abj7128, this issue p. 1324; see also abl6366, p. 1309

The metaphysics of physics

I realized that my last posts were just some crude and rude soundbites, so I thought it would be good to briefly summarize them into something more coherent. Please let me know what you think of it.

The Uncertainty Principle: epistemology versus physics

Anyone who has read anything about quantum physics will know that its concepts and principles are very non-intuitive. Several interpretations have therefore emerged. The mainstream interpretation of quantum mechanics is referred to as the Copenhagen interpretation. It mainly distinguishes itself from more frivolous interpretations (such as the many-worlds and the pilot-wave interpretations) because it is… Well… Less frivolous. Unfortunately, the Copenhagen interpretation itself seems to be subject to interpretation.

One such interpretation may be referred to as radical skepticism – or radical empiricism[1]: we can only say something meaningful about Schrödinger’s cat if we open the box and observe its state. According to this rather particular viewpoint, we cannot be sure of its reality if we don’t make the observation. All we can do is describe its reality by a superposition of the two possible states: dead or alive. That’s Hilbert’s logic[2]: the two states (dead or alive) are mutually exclusive but we add them anyway. If a tree falls in the wood and no one hears it, then it is both standing and not standing. Richard Feynman – who may well be the most eminent representative of mainstream physics – thinks this epistemological position is nonsensical, and I fully agree with him:

“A real tree falling in a real forest makes a sound, of course, even if nobody is there. Even if no one is present to hear it, there are other traces left. The sound will shake some leaves, and if we were careful enough we might find somewhere that some thorn had rubbed against a leaf and made a tiny scratch that could not be explained unless we assumed the leaf were vibrating.” (Feynman’s Lectures, III-2-6)

So what is the mainstream physicist’s interpretation of the Copenhagen interpretation of quantum mechanics then? To fully answer that question, I should encourage the reader to read all of Feynman’s Lectures on quantum mechanics. But then you are reading this because you don’t want to do that, so let me quote from his introductory Lecture on the Uncertainty Principle: “Making an observation affects the phenomenon. The point is that the effect cannot be disregarded or minimized or decreased arbitrarily by rearranging the apparatus. When we look for a certain phenomenon we cannot help but disturb it in a certain minimum way.” (ibidem)

It has nothing to do with consciousness. Reality and consciousness are two very different things. After having concluded the tree did make a noise, even if no one was there to hear it, he wraps up the philosophical discussion as follows: “We might ask: was there a sensation of sound? No, sensations have to do, presumably, with consciousness. And whether ants are conscious and whether there were ants in the forest, or whether the tree was conscious, we do not know. Let us leave the problem in that form.” In short, I think we can all agree that the cat is dead or alive, or that the tree is standing or not standing – regardless of the observer. It’s a binary situation. Not something in-between. The box obscures our view. That’s all. There is nothing more to it.

Of course, in quantum physics, we don’t study cats but look at the behavior of photons and electrons (we limit our analysis to quantum electrodynamics – so we won’t discuss quarks or other sectors of the so-called Standard Model of particle physics). The question then becomes: what can we reasonably say about the electron – or the photon – before we observe it, or before we make any measurement? Think of the Stern-Gerlach experiment, which tells us that we’ll always measure the angular momentum of an electron – along any axis we choose – as either +ħ/2 or, else, as −ħ/2. So what’s its state before it enters the apparatus? Do we have to assume it has some definite angular momentum, and that its value is as binary as the state of our cat (dead or alive, up or down)?

We should probably explain what we mean by a definite angular momentum. It’s a concept from classical physics, and it assumes a precise value (or magnitude) along some precise direction. We may challenge these assumptions. The direction of the angular momentum may be changing all the time, for example. If we think of the electron as a pointlike charge – whizzing around in its own space – then the concept of a precise direction of its angular momentum becomes quite fuzzy, because it changes all the time. And if its direction is fuzzy, then its value will be fuzzy as well. In classical physics, such fuzziness is not allowed, because angular momentum is conserved: it takes an outside force – or torque – to change it. But in quantum physics, we have the Uncertainty Principle: some energy (force over a distance, remember) can be borrowed – so to speak – as long as it’s swiftly being returned – within the quantitative limits set by the Uncertainty Principle: ΔE·Δt ≥ ħ/2.

Mainstream physicists – including Feynman – do not try to think about this. For them, the Stern-Gerlach apparatus is just like Schrödinger’s box: it obscures the view. The cat is dead or alive, and each of the two states has some probability – but they must add up to one – and so they will write the state of the electron before it enters the apparatus as the superposition of the up and down states. I must assume you’ve seen this before:

|ψ〉 = Cup|up〉 + Cdown|down〉

It’s the so-called Dirac or bra-ket notation. Cup is the amplitude for the electron spin to be equal to +ħ/2 along the chosen direction – which we refer to as the z-direction because we will choose our reference frame such that the z-axis coincides with this chosen direction – and, likewise, Cdown is the amplitude for the electron spin to be equal to −ħ/2 (along the same direction, obviously). Cup and Cdown will be functions, and the associated probabilities will vary sinusoidally – with a phase difference so as to make sure both add up to one.
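If you want to see this bookkeeping at work, here is a minimal numpy sketch. The Rabi-like parametrization is just an illustrative assumption on my part, not a model of any real apparatus:

```python
import numpy as np

# Two-state bookkeeping: amplitudes Cup and Cdown whose probabilities vary
# sinusoidally while always adding up to one.
omega = 1.0
t = np.linspace(0, 10, 200)
C_up = np.cos(omega * t / 2)           # amplitude for spin +h-bar/2 along z
C_down = -1j * np.sin(omega * t / 2)   # amplitude for spin -h-bar/2 along z

P_up = np.abs(C_up)**2
P_down = np.abs(C_down)**2
assert np.allclose(P_up + P_down, 1.0)  # probabilities add up to one, always
```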

The model is consistent, but it feels like a mathematical trick. This description of reality – if that’s what it is – does not feel like a model of a real electron. It’s like reducing the cat in our box to the mentioned fuzzy state of being alive and dead at the same time. Let’s try to come up with something more exciting. 😊

[1] Academics will immediately note that radical empiricism and radical skepticism are very different epistemological positions but we are discussing some basic principles in physics here rather than epistemological theories.

[2] The reference to Hilbert’s logic refers to Hilbert spaces: a Hilbert space is an abstract vector space. Its properties allow us to work with quantum-mechanical states, which become state vectors. You should not confuse them with the real or complex vectors you’re used to. The only thing state vectors have in common with real or complex vectors is that (1) we also need a basis (aka a representation in quantum mechanics) to define them and (2) we can make linear combinations.

The ‘flywheel’ electron model

Physicists describe the reality of electrons by a wavefunction. If you are reading this article, you know what a wavefunction looks like: it is a superposition of elementary wavefunctions. These elementary wavefunctions are written as Ai·exp(−i·θi), so they have an amplitude Ai and an argument θi = (Ei/ħ)·t − (pi/ħ)·x. Let’s forget about uncertainty, so we can drop the index (i) and think of a geometric interpretation of A·exp(−iθ) = A·e^(−iθ).

Here we have a weird thing: physicists think the minus sign in the exponent (−iθ) should always be there: the convention is that we get the imaginary unit (i) by a 90° rotation of the real unit (1) – but that rotation is counterclockwise. I like to think a rotation in the clockwise direction must also describe something real. Hence, if we are seeking a geometric interpretation, then we should explore the two mathematical possibilities: A·e^(−iθ) and A·e^(+iθ). I like to think these two wavefunctions describe the same electron but with opposite spin. How should we visualize this? I like to think of A·e^(−iθ) and A·e^(+iθ) as two-dimensional harmonic oscillators:

e^(−iθ) = cos(−θ) + i·sin(−θ) = cosθ − i·sinθ

e^(+iθ) = cosθ + i·sinθ

So we may want to imagine our electron as a pointlike electric charge (see the green dot in the illustration below) spinning around some center in either of the two possible directions. The cosine keeps track of the oscillation in one dimension, while the sine (plus or minus) keeps track of the oscillation in a direction that is perpendicular to the first one.

Figure 1: A pointlike charge in orbit


So we have a weird oscillator in two dimensions here, and we may calculate the energy in this oscillation. To calculate such energy, we need a mass concept. We only have a charge here, but a (moving) charge has an electromagnetic mass. Now, the electromagnetic mass of the electron’s charge may or may not explain all the mass of the electron (most physicists think it doesn’t) but let’s assume it does for the sake of the model that we’re trying to build up here. The point is: the theory of electromagnetic mass gives us a very simple explanation for the concept of mass here, and so we’ll use it for the time being. So we have some mass oscillating in two directions simultaneously: we basically assume space is, somehow, elastic. We have worked out the V-2 engine metaphor before, so we won’t repeat ourselves here.

Figure 2: A perpetuum mobile?


Previously unrelated but structurally similar formulas may be related here:

  1. The energy of an oscillator: E = (1/2)·m·a²·ω²
  2. Kinetic energy: E = (1/2)·m·v²
  3. The rotational (kinetic) energy that’s stored in a flywheel: E = (1/2)·I·ω² = (1/2)·m·r²·ω²
  4. Einstein’s energy-mass equivalence relation: E = m·c²

Of course, we are mixing relativistic and non-relativistic formulas here, and there’s the 1/2 factor – but these are minor issues. For example, we were talking about not one but two oscillators, so we should add their energies: (1/2)·m·a²·ω² + (1/2)·m·a²·ω² = m·a²·ω². Also, one can show that the classical formula for kinetic energy (i.e. E = (1/2)·m·v²) morphs into E = m·c² when we use the relativistically correct force equation for an oscillator. So, yes, our metaphor – or our suggested physical interpretation of the wavefunction, I should say – makes sense.

If you know something about physics, then you know the concept of the electromagnetic mass – its mathematical derivation, that is – gives us the classical electron radius, aka the Thomson radius. It’s the smallest of a trio of radii that are relevant when discussing electrons: the other two are the Bohr radius and the Compton scattering radius. The Thomson radius is used in the context of elastic scattering: the frequency of the incident particle (usually a photon), and the energy of the electron itself, do not change. In contrast, Compton scattering does change the frequency of the photon that is being scattered, and also impacts the energy of our electron. [As for the Bohr radius, you know that’s the radius of an electron orbital, roughly speaking – or the size of a hydrogen atom, I should say.]

Now, if we combine the E = m·a²·ω² and E = m·c² equations, then a·ω must be equal to c, right? Can we show this? Maybe. It is easy to see that we get the desired equality by substituting the Compton scattering radius r = ħ/(m·c) for the amplitude of the oscillation (a), and by using the Planck relation (ω = E/ħ) for the (angular) frequency of the oscillation:

a·ω = [ħ/(m·c)]·[E/ħ] = E/(m·c) = m·c²/(m·c) = c

We get a wonderfully simple geometric model of an electron here: an electric charge that spins around in a plane. Its radius is the Compton electron radius – which makes sense – and the tangential velocity of our spinning charge is the speed of light – which may or may not make sense. Of course, we need an explanation of why this spinning charge doesn’t radiate its energy away – but then we don’t have such an explanation anyway. All we can say is that the electron charge seems to be spinning in its own space – that it’s racing along a geodesic. It’s just like mass creates its own space here: according to Einstein’s general relativity theory, gravity becomes a pseudo-force – literally: no real force. How? I am not sure: the model here assumes the medium – empty space – is, somehow, perfectly elastic: the electron constantly borrows energy from one direction and then returns it to the other – so to speak. A crazy model, yes – but is there anything better? We only want to present a metaphor here: a possible visualization of quantum-mechanical models.
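Before we move on, a quick numerical check of the a·ω = c identity, using scipy’s CODATA values:

```python
from scipy.constants import hbar, m_e, c

# Numerical check of a*omega = c for the electron, with a = h-bar/(m*c) the
# (reduced) Compton radius and omega = E/h-bar = m*c^2/h-bar.
a = hbar / (m_e * c)        # ~ 3.86e-13 m
omega = m_e * c**2 / hbar   # ~ 7.76e20 rad/s
print(a * omega / c)        # -> 1.0 (up to floating-point rounding;
                            #    the factors cancel algebraically)
```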

However, if this model is to represent anything real, then many more questions need to be answered. For starters, let’s think about an interpretation of the results of the Stern-Gerlach experiment.

Precession

A spinning charge is a tiny magnet – and so it’s got a magnetic moment, which we need to explain the Stern-Gerlach experiment. But it doesn’t explain the discrete nature of the electron’s angular momentum: it’s either +ħ/2 or -ħ/2, nothing in-between, and that’s the case along any direction we choose. How can we explain this? Also, space is three-dimensional. Why would electrons spin in a perfect plane? The answer is: they don’t.

Indeed, the corollary of the above-mentioned binary value of the angular momentum is that the angular momentum – or the electron’s spin – is never completely along any direction. This may or may not be explained by the precession of a spinning charge in a field, which is illustrated below (illustration taken from Feynman’s Lectures, II-35-3).

Figure 3: Precession of an electron in a magnetic field

So we do have an oscillation in three dimensions here, really – even if our wavefunction is a two-dimensional mathematical object. Note that the measurement (or the Stern-Gerlach apparatus in this case) establishes a line of sight and, therefore, a reference frame, so ‘up’ and ‘down’, ‘left’ and ‘right’, and ‘in front’ and ‘behind’ get meaning. In other words, we establish a real space. The question then becomes: how and why does an electron sort of snap into place?

The geometry of the situation suggests the logical angle of the angular momentum vector should be 45°. Now, if the value of its z-component (i.e. its projection on the z-axis) is to be equal to ħ/2, then the magnitude of J itself should be larger. To be precise, it should be equal to ħ/√2 ≈ 0.7·ħ (just apply Pythagoras’ Theorem). Is that value compatible with our flywheel model?

Maybe. Let’s see. The classical formula for the magnetic moment is μ = I·A, with I the (effective) current and A the (surface) area. The notation is confusing because I is also used for the moment of inertia, or rotational mass, but… Well… Let’s do the calculation. The effective current is the electron charge (qe) divided by the period (T) of the orbital revolution: I = qe/T. The period of the orbit is the time that is needed for the electron to complete one loop. That time (T) is equal to the circumference of the loop (2π·a) divided by the tangential velocity (vt). Now, we suggest vt = r·ω = a·ω = c, and the circumference of the loop is 2π·a. For a, we still use the Compton radius a = ħ/(m·c). Now, the formula for the area is A = π·a², so we get:

μ = I·A = [qe/T]·π·a² = [qe·c/(2π·a)]·[π·a²] = [(qe·c)/2]·a = [(qe·c)/2]·[ħ/(m·c)] = [qe/(2m)]·ħ

In a classical analysis, we have the following relation between angular momentum and magnetic moment:

μ = (qe/2m)·J

Hence, we find that the angular momentum J is equal to ħ, so that’s twice the measured value. We’ve got a problem. We would have hoped to find ħ/2 or ħ/√2. Perhaps it’s because a = ħ/(m·c) is the so-called reduced Compton scattering radius…

Well… No.

Maybe we’ll find the solution one day. I think it’s already quite nice we have a model that’s accurate up to a factor of 1/2 or 1/√2. 😊
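For what it’s worth, a quick check against scipy’s CODATA values shows that the μ we derived is exactly the Bohr magneton:

```python
from scipy.constants import hbar, m_e, e, physical_constants

# The model's magnetic moment mu = [qe/(2m)]*h-bar versus the CODATA Bohr
# magneton (which is e*h-bar/2m by definition).
mu_model = (e / (2 * m_e)) * hbar
mu_bohr = physical_constants['Bohr magneton'][0]
print(mu_model)  # ~ 9.274e-24 J/T
print(mu_bohr)   # ~ 9.274e-24 J/T - the same number
# Via mu = (qe/2m)*J, this mu corresponds to J = h-bar: twice the measured
# h-bar/2, which is exactly the factor-of-two problem noted above.
```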

Post scriptum: I’ve turned this into a small article which may or may not be more readable. You can link to it here. Comments are more than welcome.


Playing with amplitudes

Pre-script (dated 26 June 2020): This post got mutilated by the removal of some material by the dark force. You should be able to follow the main story line, however. If anything, the lack of illustrations might actually help you to think things through for yourself. In any case, we now have different views on these concepts as part of our realist interpretation of quantum mechanics, so we recommend you read our recent papers instead of these old blog posts.

Original post:

Let’s play a bit with the stuff we found in our previous post. This is going to be unconventional, or experimental, if you want. The idea is to give you… Well… Some ideas. So you can play yourself. 🙂 Let’s go.

Let’s first look at Feynman’s (simplified) formula for the amplitude of a photon to go from point a to point b. If we identify point a by the position vector r1 and point b by the position vector r2, and using Dirac’s fancy bra-ket notation, then it’s written as:

〈r2|r1〉 = e^(i·p∙r12/ħ)/r12

So we have a vector dot product here: p∙r12 = |p|∙|r12|·cosα. The angle here (α) is the angle between the p and r12 vectors. All good. Well… No. We’ve got a problem. When it comes to calculating probabilities, the α angle doesn’t matter: |e^(i·θ)/r|² = 1/r². Hence, for the probability, we get: P = |〈r2|r1〉|² = 1/r12². Always! Now that’s strange. The θ = p∙r12/ħ argument gives us a different phase depending on the angle (α) between p and r12. But… Well… Think of it: cosα goes from 1 to 0 when α goes from 0 to ±90° and, of course, is negative when p and r12 have opposite directions but… Well… According to this formula, the probabilities do not depend on the direction of the momentum. That’s just weird, I think. Did Feynman, in his iconic Lectures, give us a meaningless formula?

Maybe. We may also note this function looks like the elementary wavefunction for any particle, which we wrote as:

ψ(x, t) = a·e^(−i∙θ) = a·e^(−i(E∙t − p∙x)/ħ) = a·e^(−i(E∙t)/ħ)·e^(i(p∙x)/ħ)

The only difference is that the 〈r2|r1〉 sort of abstracts away from time, so… Well… Let’s get a feel for the quantities. Let’s think of a photon carrying some typical amount of energy. Hence, let’s talk visible light and, therefore, photons of a few eV only – say 5.625 eV = 5.625×1.6×10⁻¹⁹ J = 9×10⁻¹⁹ J. Hence, their momentum is equal to p = E/c = (9×10⁻¹⁹ N·m)/(3×10⁸ m/s) = 3×10⁻²⁷ N·s. That’s tiny, but that’s only because newtons and seconds are enormous units at the (sub-)atomic scale. As for the distance, we may want to use the thickness of a playing card as a starter, as that’s what Young used when establishing the experimental fact of light interfering with itself. Now, playing cards in Young’s time were obviously rougher than those today, but let’s take the smaller distance: modern cards are as thin as 0.3 mm. Still, that distance is associated with a value of θ of the order of ten thousand. Hence, the density of our wavefunction is enormous at this scale, and it’s a bit of a miracle that Young could see any interference at all! We only get meaningful values (remember: θ is a phase angle) when we go down to the nanometer scale (10⁻⁹ m) or, even better, the angstrom scale (10⁻¹⁰ m).
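You can check these orders of magnitude yourself with a few lines of Python (the 5.625 eV photon energy is just the illustrative value used above):

```python
# Back-of-the-envelope check of the phase theta = p*r/h-bar for a 5.625 eV
# photon at various distances.
hbar = 1.0545718e-34   # J*s
c = 2.99792458e8       # m/s
E = 5.625 * 1.602e-19  # photon energy in joules (~ 9e-19 J)
p = E / c              # photon momentum (~ 3e-27 N*s)

for label, r in [("playing card (0.3 mm)", 0.3e-3),
                 ("1 nanometer", 1e-9),
                 ("1 angstrom", 1e-10)]:
    theta = p * r / hbar   # phase angle in radians
    print(f"{label:>22}: theta ~ {theta:,.1f} rad")
```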

So… Well… Again: what can we do with Feynman’s formula? Perhaps he didn’t give us a propagator function but something that is more general (read: more meaningful) at our (limited) level of knowledge. As I’ve been reading Feynman for quite a while now – like three or four years 🙂 – I think… Well… Yes. That’s it. Feynman wants us to think about it. 🙂 Are you joking again, Mr. Feynman? 🙂 So let’s assume the reasonable thing: let’s assume it gives us the amplitude to go from point a to point b along some path r. So, then, in line with what we wrote in our previous post, let’s say p·r (momentum over a distance) is the action (S) we’d associate with this particular path (r) and then see where we get. So let’s write the formula like this:

ψ = a·e^(i·θ) = (1/r)·e^(i·S/ħ) = e^(i·p∙r/ħ)/r

We’ll use an index to denote the various paths: r0 is the straight-line path and ri is any (other) path. Now, quantum mechanics tells us we should calculate this amplitude for every possible path. The illustration below shows the straight-line path and two nearby paths. So each of these paths is associated with some amount of action, which we measure in Planck units: θ = S/ħ.

The time interval is given by t = r0/c, for all paths. Why is the time interval the same for all paths? Because we think of a photon going from some specific point in space and in time to some other specific point in space and in time. Indeed, when everything is said and done, we do think of light as traveling from point a to point b at the speed of light (c). In fact, all of the weird stuff here is all about trying to explain how it does that. 🙂

Now, if we would think of the photon actually traveling along this or that path, then this implies its velocity along any of the nonlinear paths will be larger than c, which is OK. That’s just the weirdness of quantum mechanics, and you should actually not think of the photon actually traveling along one of these paths anyway although we’ll often put it that way. Think of something fuzzier, whatever that may be. 🙂

So the action is energy times time, or momentum times distance. Hence, the difference in action between two paths i and j is given by:

δS = p·rj − p·ri = p·(rj − ri) = p·Δr

I’ll explain the δS < 2π·ħ/3 thing in a moment. Let’s first pause and think about the uncertainty and how we’re modeling it. We can effectively think of the variation in S as some uncertainty in the action: δS = ΔS = p·Δr. However, if S is also equal to energy times time (S = E·t), and we insist t is the same for all paths, then we must have some uncertainty in the energy, right? Hence, we can write δS as ΔS = ΔE·t. But, of course, E = m·c² = p·c, so we will have an uncertainty in the momentum as well. Hence, the variation in S should be written as:

δS = ΔS = Δp·Δr

That’s just logical thinking: if we, somehow, entertain the idea of a photon going from some specific point in spacetime to some other specific point in spacetime along various paths, then the variation, or uncertainty, in the action will effectively combine some uncertainty in the momentum and the distance. We can calculate Δp as ΔE/c, so we get the following:

δS = ΔS = Δp·Δr = ΔE·Δr/c = ΔE·Δt, with Δt = Δr/c

So we have the two expressions for the Uncertainty Principle here: ΔS = Δp·Δr = ΔE·Δt. Just be careful with the interpretation of Δt: it’s just the equivalent of Δr. We just express the uncertainty in distance in seconds using the (absolute) speed of light. We are not changing our spacetime interval: we’re still looking at a photon going from a to b in t seconds, exactly. Let’s now look at the δS < 2π·ħ/3 thing. If we’re adding two amplitudes (two arrows or vectors, so to speak) and we want the magnitude of the result to be larger than the magnitude of the two contributions, then the angle between them should be smaller than 120 degrees, so that’s 2π/3 rad. The illustration below shows how you can figure that out geometrically.

Hence, if S0 is the action for r0, then S1 = S0 + ħ and S2 = S0 + 2·ħ are still good, but S3 = S0 + 3·ħ is not good. Why? Because the difference in the phase angles is Δθ = S1/ħ − S0/ħ = (S0 + ħ)/ħ − S0/ħ = 1 and Δθ = S2/ħ − S0/ħ = (S0 + 2·ħ)/ħ − S0/ħ = 2 respectively, so that’s 57.3° and 114.6° respectively and that’s, effectively, less than 120°. In contrast, for the next path, we find that Δθ = S3/ħ − S0/ħ = (S0 + 3·ħ)/ħ − S0/ħ = 3, so that’s 171.9°. So that amplitude gives us a negative contribution.

Let’s do some calculations using a spreadsheet. To simplify things, we will assume we measure everything (time, distance, force, mass, energy, action,…) in Planck units. Hence, we can simply write: Sn = S0 + n. Of course, n = 1, 2,… etcetera, right? Well… Maybe not. We are measuring action in units of ħ, but do we actually think action comes in units of ħ? I am not sure. It would make sense, intuitively, but… Well… There’s uncertainty on the energy (E) and the momentum (p) of our photon, right? And how accurately can we measure the distance? So there’s some randomness everywhere. 😦 So let’s leave that question open as for now.

We will also assume that the phase angle for S0 is equal to 0 (or some multiple of 2π, if you want). That’s just a matter of choosing the origin of time. This makes it really easy: ΔSn = Sn − S0 = n, and the associated phase angle θn = Δθn is the same. In short, the amplitude for each path reduces to ψn = e^(i·n)/r0. So we need to add these first and then calculate the magnitude, which we can then square to get a probability. Of course, there is also the issue of normalization (probabilities have to add up to one) but let’s tackle that later. For the calculations, we use Euler’s r·e^(i·θ) = r·(cosθ + i·sinθ) = r·cosθ + i·r·sinθ formula. Needless to say, |r·e^(i·θ)|² = |r|²·|e^(i·θ)|² = |r|²·(cos²θ + sin²θ) = r². Finally, when adding complex numbers, we add the real and imaginary parts respectively, and we’ll denote the ψ0 + ψ1 + ψ2 + … sum as Ψ.
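In case you want to play along without a spreadsheet, here is a minimal numpy version of the same calculation:

```python
import numpy as np

# Each path n contributes psi_n = e^(i*n)/r0 (the action differences are one
# unit of h-bar per path), and we watch the magnitude of the running sum Psi.
r0 = 1.0                   # straight-line distance (arbitrary units)
n = np.arange(0, 50)       # path index: S_n - S_0 = n (in units of h-bar)
psi = np.exp(1j * n) / r0  # amplitude contributed by each path
Psi = np.cumsum(psi)       # running sum psi_0 + psi_1 + ... + psi_n
P = np.abs(Psi)**2         # (unnormalized) probability

print(P[:10])              # swings around, just like the spreadsheet does
```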

Now, we also need to see how our ΔS = Δp·Δr works out. We may want to assume that the uncertainty in p and in r will both be proportional to the overall uncertainty in the action. For example, we could try writing the following: ΔSn = Δpn·Δrn = n·Δp1·Δr1. It also makes sense that you may want Δpn and Δrn to be proportional to Δp1 and Δr1 respectively. Combining both, the assumption would be this:

Δpn = √n·Δp1 and Δrn = √n·Δr1

So now we just need to decide how we will distribute ΔS1 = ħ = 1 over Δp1 and Δr1 respectively. For example, if we’d assume Δp1 = 1, then Δr1 = ħ/Δp1 = 1/1 = 1. These are the calculations. I will let you analyze them. 🙂

Well… We get a weird result. It reminds me of Feynman’s explanation of the partial reflection of light, shown below, but… Well… That doesn’t make much sense, does it?


Hmm… Maybe it does. 🙂 Look at the graph more carefully. The peaks sort of oscillate out so… Well… That might make sense… 🙂

Does it? Are we doing something wrong here? These amplitudes should reflect the ones shown in those nice animations (like this one, for example, which is part of the Wikipedia article on Feynman’s path integral formulation of quantum mechanics). So what’s wrong, if anything? Well… Our paths differ by some fixed amount of action, which doesn’t quite reflect the geometric approach that’s used in those animations. The graph below shows how the distance varies as a function of n.

If we’d use a model in which the distance would increase linearly or, preferably, exponentially, then we’d get the result we want to get, right?

Well… Maybe. Let’s try it. Hmm… We need to think about the geometry here. Look at the triangle below. If b is the straight-line path (r0), then the two other sides a and c make up one of the crooked paths (rn). To simplify, we’ll assume isosceles triangles, so a equals c and, hence, rn = 2·a = 2·c. We will also assume the successive paths are separated by the same vertical distance (h = h1) right in the middle, so hn = n·h1. It is then easy to show that rn = 2·√[(r0/2)² + hn²] = 2·√[(r0/2)² + n²·h1²]. This gives the following graph for r0 = 10 and h1 = 0.01.

Is this the right step increase? Not sure. We can vary the values in our spreadsheet. Let’s first build it. The photon will have to travel faster in order to cover the extra distance in the same time, so its momentum will be higher. Let’s think about the velocity. Let’s start with the first path (n = 1). In order to cover the extra distance Δr1, the velocity c1 must be equal to (r0 + Δr1)/t = r0/t + Δr1/t = c0 + Δr1/t. We can write c1 as c1 = c0 + Δc1, so Δc1 = Δr1/t. Now, the ratio of p1 and p0 will be equal to the ratio of c1 and c0, because p1/p0 = (m·c1)/(m·c0) = c1/c0. Hence, we have the following formula for p1:

p1 = p0·c1/c0 = p0·(c0 + Δc1)/c0 = p0·[1 + Δr1/(c0·t)] = p0·(1 + Δr1/r0)

For pn, the logic is the same, so we write:

pn = p0·cn/c0 = p0·(c0 + Δcn)/c0 = p0·[1 + Δrn/(c0·t)] = p0·(1 + Δrn/r0)

Let’s do the calculations, and let’s use meaningful values, so the nanometer scale and actual values for Planck’s constant and the photon momentum. The results are shown below.
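And here is, again for those playing along, a numpy sketch of this geometric variant – my own reconstruction, so the parameter values are illustrative, not the original spreadsheet’s:

```python
import numpy as np

# Paths stacked at equal vertical spacing h1, path lengths from Pythagoras,
# momentum scaled as pn = p0*(1 + dr_n/r0), action S_n = pn*rn.
hbar = 1.0545718e-34
r0 = 1e-9                   # straight-line distance: 1 nm
h1 = r0 / 100               # vertical step between successive paths
p0 = 3e-27                  # photon momentum in N*s, as computed earlier
n = np.arange(0, 200)
rn = 2 * np.sqrt((r0 / 2)**2 + (n * h1)**2)  # isosceles-triangle path length
pn = p0 * (1 + (rn - r0) / r0)               # higher momentum on longer paths
S = pn * rn                                  # action for each path
psi = np.exp(1j * S / hbar) / rn             # amplitude for each path
P = np.abs(np.cumsum(psi))**2                # running (unnormalized) probability

print(P[-5:])  # swings around first, then stabilizes
```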

Pretty interesting. In fact, this looks really good. The probability first swings around wildly, because of these zones of constructive and destructive interference, but then stabilizes. [Of course, I would need to normalize the probabilities, but you get the idea, right?] So… Well… I think we get a very meaningful result with this model. Sweet! 🙂 I’m lovin’ it! 🙂 And, here you go, this is (part of) the calculation table, so you can see what I am doing. 🙂

The graphs below look even better: I just changed the h1/r0 ratio from 1/100 to 1/10. The probability stabilizes almost immediately. 🙂 So… Well… It’s not as fancy as the referenced animation, but I think the educational value of this thing here is at least as good! 🙂

🙂 This is good stuff… 🙂

Post scriptum (19 September 2017): There is an obvious inconsistency in the model above, and in the calculations. We assume there are paths r1, r2, r3, etcetera, and then we calculate the action for each, and the amplitude, and then we add the amplitude to the sum. But, surely, we should count these paths twice, in two-dimensional space, that is. Think of the graph: we have positive and negative interference zones that are sort of layered around the straight-line path, as shown below.

In three-dimensional space, these lines become surfaces. Hence, rather than adding one arrow for every δS – having one contribution only – we may want to add… Well… In three-dimensional space, the formula for the surface around the straight-line path would probably look like π·hn·r1, right? Hmm… Interesting idea. I changed my spreadsheet to incorporate that idea, and I got the graph below. It’s a nonsensical result, because the probability does swing around, but it gradually spins out of control: it never stabilizes.

That’s because we increase the weight of the paths that are further removed from the center. So… Well… We shouldn’t be doing that, I guess. 🙂 I’ll let you look for the right formula, OK? Let me know when you find it. 🙂


The energy and 1/2 factor in Schrödinger’s equation

Schrödinger’s equation for a particle moving in free space (no external force fields acting on it, so V = 0 and, therefore, the V·ψ term disappears) is written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)

We already noted and explained the structural similarity with the ubiquitous diffusion equation in physics:

∂φ(x, t)/∂t = D·∇²φ(x, t), with x = (x, y, z)

The big difference between the wave equation and an ordinary diffusion equation is that the wave equation gives us two equations for the price of one: ψ is a complex-valued function, with a real and an imaginary part which, despite their name, are both equally fundamental, or essential. Whatever word you prefer. 🙂 That’s also what the presence of the imaginary unit (i) in the equation tells us. But for the rest it’s the same: the diffusion constant (D) in Schrödinger’s equation is equal to (1/2)·(ħ/meff).

Why the 1/2 factor? It’s ugly. Think of the following: If we bring the (1/2)·(ħ/meff) to the other side, we can write it as meff/(ħ/2). The ħ/2 now appears as a scaling factor in the diffusion constant, just like ħ does in the de Broglie equations: ω = E/ħ and k = p/ħ, or in the argument of the wavefunction: θ = (E·t − p∙x)/ħ. Planck’s constant is, effectively, a physical scaling factor. As a physical scaling constant, it usually does two things:

  1. It fixes the numbers (so that’s its function as a mathematical constant).
  2. As a physical constant, it also fixes the physical dimensions. Note, for example, how the 1/ħ factor in ω = E/ħ and k = p/ħ ensures that the ω·t = (E/ħ)·t and k·x = (p/ħ)·x terms in the argument of the wavefunction are both expressed as some dimensionless number, so they can effectively be added together. Physicists don’t like adding apples and oranges.

The question is: why did Schrödinger use ħ/2, rather than ħ, as a scaling factor? Let’s explore the question.

The 1/2 factor

We may want to think that 1/2 factor just echoes the 1/2 factor in the Uncertainty Principle, which we should think of as a pair of relations: σx·σp ≥ ħ/2 and σE·σt ≥ ħ/2. However, the 1/2 factor in those relations only makes sense because we chose to equate the fundamental uncertainty (Δ) in x, p, E and t with the mathematical concept of the standard deviation (σ), or the half-width, as Feynman calls it in his wonderfully clear exposé on it in one of his Lectures on quantum mechanics (for a summary with some comments, see my blog post on it). We may just as well choose to equate Δ with the full-width of those probability distributions we get for x and p, or for E and t. If we do that, we get σx·σp ≥ ħ and σE·σt ≥ ħ.

It’s a bit like measuring the weight of a person on an old-fashioned (non-digital) bathroom scale with 1 kg marks only: do we say this person is x kg ± 1 kg, or x kg ± 500 g? Do we take the half-width or the full-width as the margin of error? In short, it’s a matter of appreciation, and the 1/2 factor in our pair of uncertainty relations is not there because we’ve got two relations. Likewise, the fact that we can think of Schrödinger’s equation as a pair of relations that, taken together, represent an energy propagation mechanism quite similar in structure to Maxwell’s equations for an electromagnetic wave (as shown below) does not, by itself, force us to insert (or drop) that 1/2 factor: either of the two representations below works. It just depends on our definition of the concept of the effective mass.

The 1/2 factor is really a matter of choice, because the rather peculiar – and flexible – concept of the effective mass takes care of it. However, we could define some new effective mass concept, by writing: meffNEW = 2∙meffOLD, and then Schrödinger’s equation would look more elegant:

∂ψ/∂t = i·(ħ/meffNEW)·∇²ψ

Now you’ll want the definition, of course! What is that effective mass concept? Feynman talks at length about it, but his exposé is embedded in a much longer and more general argument on the propagation of electrons in a crystal lattice, which you may not necessarily want to go through right now. So let’s try to answer that question by doing something stupid: let’s substitute ψ in the equation for ψ = a·e^(−i·[E·t − p∙x]/ħ) (which is an elementary wavefunction), calculate the time derivative and the Laplacian, and see what we get. If we do that, the ∂ψ/∂t = i·(1/2)·(ħ/meff)·∇²ψ equation becomes:

−i·a·(E/ħ)·e^(−i∙(E·t − p∙x)/ħ) = −i·a·(1/2)·(ħ/meff)·(p²/ħ²)·e^(−i∙(E·t − p∙x)/ħ)

⇔ E = (1/2)·p²/meff = (1/2)·(m·v)²/meff ⇔ meff = (1/2)·(m/E)·m·v²

⇔ meff = (1/c²)·(m·v²/2) = m·β²/2
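
By the way, if you don’t want to do the algebra by hand, the substitution is easy to check symbolically. Here’s a minimal SymPy sketch (one spatial dimension; the symbol names are mine):

```python
# Plug psi = a·exp(−i·(E·t − p·x)/ħ) into the free-space Schrödinger equation
# ∂ψ/∂t = i·(1/2)·(ħ/m_eff)·∇²ψ and solve for the effective mass.
import sympy as sp

x, t, a, E, p, hbar, m_eff = sp.symbols('x t a E p hbar m_eff', positive=True)
psi = a * sp.exp(-sp.I * (E * t - p * x) / hbar)

lhs = sp.diff(psi, t)
rhs = sp.I * sp.Rational(1, 2) * (hbar / m_eff) * sp.diff(psi, x, 2)

print(sp.solve(sp.Eq(lhs, rhs), m_eff))  # [p**2/(2*E)], i.e. E = (1/2)·p²/m_eff
```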

Hence, the effective mass appears in this equation as the equivalent mass of the kinetic energy (K.E.) of the elementary particle that’s being represented by the wavefunction. Now, you may think that sounds good – and it does – but you should note the following:

1. The K.E. = m·v²/2 formula is only correct for non-relativistic speeds. In fact, it’s the kinetic energy formula if, and only if, m ≈ m0. The relativistically correct formula for the kinetic energy calculates it as the difference between (1) the total energy (which is given by the E = m·c² formula, always) and (2) its rest energy, so we write:

K.E. = E − E0 = mv·c² − m0·c² = m0·γ·c² − m0·c² = m0·c²·(γ − 1)

2. The energy concept in the wavefunction ψ = a·e^(−i·[E·t − p∙x]/ħ) is, obviously, the total energy of the particle. For non-relativistic speeds, the kinetic energy is only a very small fraction of the total energy. In fact, using the formula above, you can calculate the ratio between the kinetic and the total energy: you’ll find it’s equal to 1 − 1/γ = 1 − √(1 − v²/c²), and its graph goes from 0 to 1.

graph
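
In case that graph doesn’t display for you, it’s easy to reproduce—a minimal matplotlib sketch:

```python
# The kinetic-to-total-energy ratio 1 − 1/γ = 1 − √(1 − β²) as a function of β.
import numpy as np
import matplotlib.pyplot as plt

beta = np.linspace(0, 0.999, 500)
ratio = 1 - np.sqrt(1 - beta**2)
plt.plot(beta, ratio)
plt.xlabel('β = v/c')
plt.ylabel('K.E. / E')
plt.show()
```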

Now, if we discard the 1/2 factor, the calculations above yield the following:

−i·a·(E/ħ)·e^(−i∙(E·t − p∙x)/ħ) = −i·a·(ħ/meff)·(p²/ħ²)·e^(−i∙(E·t − p∙x)/ħ)

⇔ E = p²/meff = (m·v)²/meff ⇔ meff = (m/E)·m·v²

⇔ meff = m·v²/c² = m·β²

In fact, it is fair to say that both definitions are equally weird, even if the dimensions come out alright: the effective mass is measured in old-fashioned mass units, and the β² or β²/2 factor appears as a sort of correction factor, varying between 0 and 1 (for β²) or between 0 and 1/2 (for β²/2). I prefer the new definition, as it ensures that meff becomes equal to m in the limit for the velocity going to c. In addition, if we bring the ħ/meff or (1/2)∙ħ/meff factor to the other side of the equation, the choice becomes one between a meffNEW/ħ or a 2∙meffOLD/ħ coefficient.

It’s a choice, really. Personally, I think the equation without the 1/2 factor – and, hence, the use of ħ rather than ħ/2 as the scaling factor – looks better, but then you may argue that – if half of the energy of our particle is in the oscillating real part of the wavefunction, and the other is in the imaginary part – then the 1/2 factor should stay, because it ensures that meff becomes equal to m/2 as v goes to c (or, what amounts to the same, β goes to 1). But then that’s the argument about whether or not we should have a 1/2 factor because we get two equations for the price of one, like we did for the Uncertainty Principle.

So… What to do? Let’s first ask ourselves whether that derivation of the effective mass actually makes sense. Let’s therefore look at both limit situations.

1. For v going to c (or β = v/c going to 1), we do not have much of a problem: meff just becomes the total mass of the particle that we’re looking at, and Schrödinger’s equation can easily be interpreted as an energy propagation mechanism. Our particle has zero rest mass in that case (we may also say that the concept of a rest mass is meaningless in this situation) and all of the energy – and, therefore, all of the equivalent mass – is kinetic: m = E/c² and the effective mass is just the mass: meff = m·c²/c² = m. Hence, our particle is everywhere and nowhere. In fact, you should note that the concept of velocity itself doesn’t make sense in this rather particular case. It’s like a photon (but note it’s not a photon: we’re talking some theoretical particle here with zero spin and zero rest mass): it’s a wave in its own frame of reference, but as it zips by at the speed of light, we think of it as a particle.

2. Let’s look at the other limit situation. For v going to 0 (or β = v/c going to 0), Schrödinger’s equation no longer makes sense, because the diffusion constant goes to zero, so we get a nonsensical equation. Huh? What’s wrong with our analysis?

Well… I must be honest. We started off on the wrong foot. You should note that it’s hard – in fact, plain impossible – to reconcile our simple a·e^(−i·[E·t − p∙x]/ħ) function with the idea of the classical velocity of our particle. Indeed, the classical velocity corresponds to a group velocity, or the velocity of a wave packet, and so we just have one wave here: no group. So we get nonsense. You can see the same when equating p to zero in the wave equation: we get another nonsensical equation, because the Laplacian is zero! Check it. If our elementary wavefunction is equal to ψ = a·e^(−i·(E/ħ)·t), then that Laplacian is zero.

Hence, our calculation of the effective mass doesn’t make much sense. Why? Because the elementary wavefunction is a theoretical concept only: it may represent some box in space, that is uniformly filled with energy, but it cannot represent any actual particle. Actual particles are always some superposition of two or more elementary waves, so then we’ve got a wave packet (as illustrated below) that we can actually associate with some real-life particle moving in space, like an electron in some orbital indeed. 🙂

wave-packet

I must credit Oregon State University for the animation above. It’s quite nice: a simple particle in a box model without potential. As I showed on my other page (explaining various models), we must add at least two waves – traveling in opposite directions – to model a particle in a box. Why? Because we represent it by a standing wave, and a standing wave is the sum of two waves traveling in opposite directions.

So, if our derivation above was not very meaningful, then what is the actual concept of the effective mass?

The concept of the effective mass

I am afraid that, at this point, I do have to direct you back to the Grand Master himself for the detail. Let me just try to sum it up very succinctly. If we have a wave packet, there is – obviously – some energy in it, and it’s energy we may associate with the classical concept of the velocity of our particle – because it’s the group velocity of our wave packet. Hence, we have a new energy concept here – and the equivalent mass, of course. Now, Feynman’s analysis – which is Schrödinger’s analysis, really – shows we can write that energy as:

E = meff·v²/2

So… Well… That’s the classical kinetic energy formula. And it’s the very classical one, because it’s not relativistic. 😦 But that’s OK for relatively slow-moving electrons! [Remember: the typical (relative) velocity is given by the fine-structure constant: α = β = v/c. So that’s impressive (about 2,188 km per second), but it’s only a tiny fraction of the speed of light, so non-relativistic formulas should work.]

Now, the meff factor in this equation is a function of the various parameters of the model he uses. To be precise, we get the following formula out of his model (which, as mentioned above, is a model of electrons propagating in a crystal lattice):

meff = ħ²/(2·A·b²)

Now, the b in this formula is the spacing between the atoms in the lattice. The A basically represents an energy barrier: to move from one atom to another, the electron needs to get across it. I talked about this in my post on it, and so I won’t explain the graph below – because I did that in that post. Just note that we don’t need that factor 2: there is no reason whatsoever to write E0 + 2·A and E0 − 2·A. We could just re-define a new A: (1/2)·ANEW = AOLD. The formula for meff then simplifies to ħ²/(2·AOLD·b²) = ħ²/(ANEW·b²). We then get an Eeff = meff·v² formula for the extra energy.

energy
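
Just to get a feel for the magnitudes in that meff = ħ²/(2·A·b²) formula, here’s a toy computation. Mind you: the values for A and b below are made-up values of mine (a 1 eV barrier and a 1 Å lattice spacing), not Feynman’s numbers:

```python
# Purely illustrative: the lattice effective mass for hypothetical inputs,
# compared to the free electron mass.
from scipy.constants import hbar, e, m_e

A = 1.0 * e       # hypothetical barrier energy: 1 eV, in joules
b = 1e-10         # hypothetical lattice spacing: 1 ångström
m_eff = hbar**2 / (2 * A * b**2)
print(m_eff / m_e)  # ≈ 3.8, i.e. a few times the free-space electron mass
```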

Eeff = meff·v²?!? What energy formula is that? Schrödinger must have thought the same thing, and so that’s why we have that ugly 1/2 factor in his equation. However, think about it. Our analysis shows that it is quite straightforward to model energy as a two-dimensional oscillation of mass. In this analysis, the real and the imaginary component of the wavefunction each store half of the total energy of the object, which is equal to E = m·c². Remember, indeed, that we compared it to the energy in an oscillator, which is equal to the sum of kinetic and potential energy, and for which we have the T + U = m·ω0²/2 formula. But so we have two oscillators here and, hence, twice the energy. Hence, the E = m·c² formula corresponds to E = m·ω0² and, hence, we may think of c as the natural frequency of the vacuum.

Therefore, the Eeff = meff·v² formula makes much more sense. It nicely mirrors Einstein’s E = m·c² formula and, in fact, naturally merges into E = m·c² for v approaching c. But, I admit, it is not so easy to interpret. It’s much easier to just say that the effective mass is the mass of our electron as it appears in the kinetic energy formula, or – alternatively – in the momentum formula. Indeed, Feynman also writes the following formula:

meff·v = p = ħ·k

Now, that is something we easily recognize! 🙂

So… Well… What do we do now? Do we use the 1/2 factor or not?

It would be very convenient, of course, to just stick with tradition and use meff as everyone else uses it: it is just the mass as it appears in whatever medium we happen to be looking at, which may be a crystal lattice (or a semi-conductor), or just free space. In short, it’s the mass of the electron as it appears to us, i.e. as it appears in the (non-relativistic) kinetic energy formula (K.E. = meff·v²/2), the formula for the momentum of an electron (p = meff·v), or in the wavefunction itself (k = p/ħ = (meff·v)/ħ). In fact, in his analysis of the electron orbitals, Feynman (who just follows Schrödinger here) drops the eff subscript altogether, and so the effective mass is just the mass: meff = m. Hence, the apparent mass of the electron in the hydrogen atom serves as a reference point, and the effective mass in a different medium (such as a crystal lattice, rather than free space or, I should say, a hydrogen atom in free space) will also be different.

The thing is: we get the right results out of Schrödinger’s equation, with the 1/2 factor in it. Hence, Schrödinger’s equation works: we get the actual electron orbitals out of it. Hence, Schrödinger’s equation is true – without any doubt. Hence, if we take that 1/2 factor out, then we do need to use the other effective mass concept. We can do that. Think about the actual relation between the effective mass and the real mass of the electron, about which Feynman writes the following: “The effective mass has nothing to do with the real mass of an electron. It may be quite different—although in commonly used metals and semiconductors it often happens to turn out to be the same general order of magnitude: about 0.1 to 30 times the free-space mass of the electron.” Hence, if we write the relation between meff and m as meff = g(m), then the same relation for our meffNEW = 2∙meffOLD becomes meffNEW = 2·g(m), and the “about 0.1 to 30 times” becomes “about 0.2 to 60 times.”

In fact, in the original 1963 edition, Feynman writes that the effective mass is “about 2 to 20 times” the free-space mass of the electron. Isn’t that interesting? I mean… Note that factor 2! If we’d write meff = 2·m, then we’re fine. We can then write Schrödinger’s equation in the following two equivalent ways:

  1. (meff/ħ)·∂ψ/∂t = i·∇²ψ
  2. (2m/ħ)·∂ψ/∂t = i·∇²ψ

Both would be correct, and it explains why Schrödinger’s equation works. So let’s go for that compromise and write Schrödinger’s equation in either of the two equivalent ways. 🙂 The question then becomes: how to interpret that factor 2? The answer to that question is, effectively, related to the fact that we get two waves for the price of one here. So we have two oscillators, so to speak. Now that‘s quite deep, and I will explore that in one of my next posts.

Let me now address the second weird thing in Schrödinger’s equation: the energy factor. I should be more precise: the weirdness arises when solving Schrödinger’s equation. Indeed, in the texts I’ve read, there is this constant switching back and forth between interpreting E as the energy of the atom, versus the energy of the electron. Now, both concepts are obviously quite different, so which one is it really?

The energy factor E

It’s a confusing point—for me, at least, and, hence, I must assume for students as well. Let me indicate, by way of example, how the confusion arises in Feynman’s exposé on the solutions to the Schrödinger equation. Initially, the development is quite straightforward. Replacing V by −e²/r, Schrödinger’s equation becomes:

Eq1

As usual, it is then assumed that a solution of the form ψ(r, t) = e^(−(i/ħ)·E·t)·ψ(r) will work. Apart from the confusion that arises because we use the same symbol, ψ, for two different functions (you will agree that ψ(r, t), a function in two variables, is obviously not the same as ψ(r), a function in one variable only), this assumption is quite straightforward and allows us to re-write the differential equation above as:

de

To get this, you just need to actually do that time derivative, noting that the ψ in our equation is now ψ(r), not ψ(r, t). Feynman duly notes this as he writes: “The function ψ(r) must solve this equation, where E is some constant—the energy of the atom.” So far, so good. In one of the (many) next steps, we re-write E as E = ER·ε, with ER = m·e⁴/2ħ². So we just use the Rydberg energy (ER ≈ 13.6 eV) here as a ‘natural’ atomic energy unit. That’s all. No harm in that.

Then all kinds of complicated but legitimate mathematical manipulations follow, in an attempt to solve this differential equation—an attempt that is successful, of course! However, after all these manipulations, one ends up with the grand simple solution for the s-states of the atom (i.e. the spherically symmetric solutions):

En = −ER/n², with 1/n² = 1, 1/4, 1/9, 1/16, …
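
If you want to check the numbers yourself, that’s easy enough. The little script below assumes Gaussian units, i.e. e² stands for q²/4πε0, as in Feynman’s notation:

```python
# The Rydberg energy E_R = m·e⁴/2ħ² and the s-state levels E_n = −E_R/n².
from scipy.constants import m_e, e, hbar, epsilon_0, pi

e2 = e**2 / (4 * pi * epsilon_0)     # 'e²' in Feynman's notation (joule·meter)
E_R = m_e * e2**2 / (2 * hbar**2)    # Rydberg energy, in joules
for n in range(1, 5):
    print(n, -E_R / n**2 / e)        # −13.6, −3.4, −1.51, −0.85 eV
```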

So we get: En = −13.6 eV, −3.4 eV, −1.5 eV, etcetera. Now how is that possible? How can the energy of the atom suddenly be negative? More importantly, why is it so tiny in comparison with the rest energy of the proton (which is about 938 mega-electronvolt), or the electron (0.511 MeV)? The energy levels above are a few eV only, not a few million electronvolt. Feynman answers this question rather vaguely when he states the following:

“There is, incidentally, nothing mysterious about negative numbers for the energy. The energies are negative because when we chose to write V = −e2/r, we picked our zero point as the energy of an electron located far from the proton. When it is close to the proton, its energy is less, so somewhat below zero. The energy is lowest (most negative) for n = 1, and increases toward zero with increasing n.”

We picked our zero point as the energy of an electron located far away from the proton? But we were talking about the energy of the atom all along, right? You’re right. Feynman doesn’t answer the question. The solution is OK – well, sort of, at least – but, in one of those mathematical complications, there is a ‘normalization’ – a choice of some constant that pops up when combining and substituting stuff – that is not so innocent. To be precise, at some point, Feynman substitutes the ε variable for the square of another variable – to be even more precise, he writes: ε = −α². He then performs some more hat tricks – all legitimate, no doubt – and finds that the only sensible solutions to the differential equation require α to be equal to 1/n, which immediately leads to the above-mentioned solution for our s-states.

The real answer to the question is given somewhere else. In fact, Feynman casually gives us an explanation in one of his very first Lectures on quantum mechanics, where he writes the following:

“If we have a “condition” which is a mixture of two different states with different energies, then the amplitude for each of the two states will vary with time according to an equation like a·e^(−iωt), with ħ·ω = E0 = m·c². Hence, we can write the amplitude for the two states, for example as:

e^(−i(E1/ħ)·t) and e^(−i(E2/ħ)·t)

And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn’t make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount—say, by the amount A—then the amplitudes in the two states would, from his point of view, be

e^(−i(E1+A)·t/ħ) and e^(−i(E2+A)·t/ħ)

All of his amplitudes would be multiplied by the same factor e^(−i(A/ħ)·t), and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want. For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren’t relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy Ms·c2, where Ms is the mass of all the separate pieces—the nucleus and the electrons—which is, of course, different from the mass of the atom. For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn’t make any difference, provided we shift all the energies in a particular calculation by the same constant.”

It’s a rather long quotation, but it’s important. The key phrase here is, obviously, the following: “For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom.” So that’s what he’s doing when solving Schrödinger’s equation. However, I should make the following point here: if we shift the origin of our energy scale, it does not make any difference in regard to the probabilities we calculate, but it obviously does make a difference in terms of our wavefunction itself. To be precise, its density in time will be very different. Hence, if we’d want to give the wavefunction some physical meaning – which is what I’ve been trying to do all along – it does make a huge difference. When we leave the rest mass of all of the pieces in our system out, we can no longer pretend we capture their energy.

This is a rather simple observation, but one that has profound implications in terms of our interpretation of the wavefunction. Personally, I admire the Great Teacher’s Lectures, but I am really disappointed that he doesn’t pay more attention to this. 😦

Freewheeling…

In my previous post, I copied a simple animation from Wikipedia to show how one can move from Cartesian to polar coordinates. It’s really neat. Just watch it a few times to appreciate what’s going on here.

Cartesian_to_polar

First, the function is being inverted, so we go from y = f(x) to x = g(y) with g = f⁻¹. In this case, we know that if y = sin(6x) + 2 (that’s the function above), then x = (1/6)·arcsin(y − 2). [Note the troublesome convention to denote the inverse function by the −1 superscript: it’s troublesome because that superscript is also used for a reciprocal—and f⁻¹ has, obviously, nothing to do with 1/f. In any case, let’s move on.] So we swap the x-axis for the y-axis, and vice versa. In fact, to be precise, we reflect them about the diagonal. In fact, we’re reflecting the whole space here, including the graph of the function. Note that, in three-dimensional space, this reflection can also be looked at as a rotation – again, of all space, including the graph and the axes – by 180 degrees. The axis of rotation is, obviously, the same diagonal. [I like how the animation visualizes this. Neat! It made me think!]

Of course, if we swap the axes, then the domain and the range of the function get swapped too. Let’s see how that works here: x goes from −π to +π, so that’s one cycle (but one that starts from −π and goes to +π, rather than from 0 to 2π), and, hence, y ranges between 1 and 3. [Whatever its argument, the sine function always yields a value between −1 and +1, but we add 2 to every value it takes, so we get the [1, 3] interval now.] After swapping the x- and y-axis, the angle, i.e. the interval between −π and +π, is now on the vertical axis. That’s clear enough. So far so good. 🙂 The operation that follows, however, is a much more complicated transformation of space and, therefore, much more interesting.

The transformation bends the graph around the origin so its head and tail meet. That’s easy to see. What’s a bit more difficult to understand is how the coordinate axes transform. I had to look at the animation several times – so please do the same. Note how this transformation wraps all of the vertical lines around a circle, and how the radius of those  circles depends on the distance of those lines from the origin (as measured along the horizontal axis). What about the vertical axis? The animation is somewhat misleading here, as it gives the impression we’re first making another circle out of it, which we then sort of shrink—all the way down to a circle with zero radius! So the vertical axis becomes the origin of our new space. However, there’s no shrinking really. What happens is that we also wrap it around a circle—but one with zero radius indeed!

It’s a very weird operation because we’re dealing with a non-linear transformation here (unlike rotation or reflection) and, therefore, we’re not familiar with it. Even weirder is what happens to the horizontal axis: somehow, this axis becomes an infinite disc, so the distance out is now measured from the center outwards. I should figure out the math here, but that’s for later. The point is: the r = sin(6θ) + 2 function in the final graph (i.e. the curve that looks like a petaled flower) is the same as that y = sin(6x) + 2 curve, so y = r and x = θ, and so we can write what’s written above: r(θ) = sin(6·θ) + 2.
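
If you want to play with this yourself, the two representations are easy to produce—a minimal matplotlib sketch:

```python
# The same curve in both representations: y = sin(6x) + 2 on Cartesian axes,
# and r(θ) = sin(6θ) + 2 on polar axes (the 'petaled flower').
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(-np.pi, np.pi, 1000)
r = np.sin(6 * theta) + 2

fig = plt.figure(figsize=(10, 4))
ax1 = fig.add_subplot(121)                      # Cartesian view
ax1.plot(theta, r)
ax1.set(xlabel='x', ylabel='y = sin(6x) + 2')
ax2 = fig.add_subplot(122, projection='polar')  # polar view
ax2.plot(theta, r)
plt.show()
```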

You’ll say: nice, but so what? Well… When I saw this animation, my first reaction was: what if the x and y would be time and space respectively? You’ll say: what space? Well… Just space: three-dimensional space. So think of one of the axes as packing three dimensions really, or three directions—like what’s depicted below. Now think of some point-like object traveling through spacetime, as shown below. It doesn’t need to be point-like, of course—just small enough so we can represent its trajectory by a line. You can also think of the movement of its center-of-mass if you don’t like point-like assumptions. 🙂

trajectory

Of course, you’ll immediately say the trajectory above is not kosher, as our object travels back in time in three sections of this ‘itinerary’.

You’re right. Let’s correct that. It’s easy to see how we should correct it. We just need to ensure the itinerary is a well-defined function, which isn’t the case with the function above: for one value of t, we have only one value of x everywhere—except where we allow our particle to travel back in time. So… Well… We shouldn’t allow that. The concept of a well-defined function implies we need to choose one direction in time. 🙂 That’s neat, because this gives us an explanation for the unique direction of time without having to invoke entropy or other macro-concepts. So let’s replace that thing above by something more kosher traveling in spacetime, like the thing below.

trajectory 2

Now think of wrapping that around some circle. We’d get something like below. [Don’t worry about the precise shape of the graph, as I made up a new one. Note the remark on the need to have a well-behaved function applies here too!]

trajectory 4

Neat, you’ll say, but so what? All we’ve done so far is show that we can represent some itinerary in spacetime in two different ways. In the first representation, we measure time along some linear axis, while, in the second representation, time becomes some angle—an angle that increases, counter-clockwise. To put it differently: time becomes an angular velocity.

Likewise, the spatial dimension was a linear feature in the first representation, while in the second we think of it as some distance measured from some zero point. Well… In fact… No. That’s not correct. The above has got nothing to do with the distance traveled: the distance traveled would need to be measured along the curve.

Hmm… What’s the deal here?

Frankly, I am not sure. Now that I look at it once more, I note that the exercise with our graph above involved one cycle of a periodic function only—so it’s really not like some object traveling in spacetime, because that’s not a periodic thing. But… Well… Does that matter all that much? It’s easy to imagine how our new representation would just involve some thing that keeps going around and around, as illustrated below.

trajectory 5

So, in this representation, any movement in spacetime – regular or irregular – does become something periodic. But what is periodic here? My first answer is the simplest and, hence, probably the correct one: it’s just time. Time is the periodic thing here.

Having said that, I immediately thought of something else that’s periodic: the wavefunction that’s associated with this object—any object traveling in spacetime, really—is periodic too. So my gut instinct tells me there’s something here that we might want to explore further. 🙂 Could we replace the function for the trajectory with the wavefunction?

Huh? Yes. The wavefunction also associates a value with each x and t, although the association is a bit more complex—literally, because we’ll associate it with two periodic functions: the real part and the imaginary part of the (complex-valued) wavefunction. But for the rest, no problem, I’d say. Remember: our wavefunction, when squared, represents the probability of our object being there. [I should say “absolute-squared” rather than squared, but that sounds so weird.]

But… Yes? Well… Don’t we get in trouble here because the same complex number (i.e. r·e^(iθ) = x + i·y) may be related to two points in spacetime—as shown in the example above? My answer is the same: I don’t think so. It’s the same thing: our new representation implies stuff keeps going around and around in it. In fact, that just captures the periodicity of the wavefunction. So… Well… It’s fine. 🙂

The more important question is: what can we do with this new representation? Here I do not have any good answer. Nothing much for the moment. I just wanted to jot it down, because it triggers some deep thoughts—things I don’t quite understand myself, as yet.

First, I like the connection between a regular trajectory in spacetime – as represented by a well-defined function – and the unique direction in time it implies. It’s a simple thing: we know something can travel in any direction in space – forward, backwards, sideways, whatever – but time has one direction only. At least we can see why now: both in Cartesian as well as polar coordinates, we’d want to see a well-behaved function. 🙂 Otherwise we couldn’t work with it.

Another thought is the following. We associate the momentum of a particle with a linear trajectory in spacetime. But what’s linear in curved spacetime? Remember how we struggle to represent – or imagine, I would say – curved spacetime, as evidenced by the fact that most illustrations of curved spacetime represent a two-dimensional space in three-dimensional Cartesian space? Think of the typical illustration, like that rubber surface with the ball deforming it.

That’s why this transformation of a Cartesian coordinate space into a polar coordinate space is such an interesting exercise. We now measure distance along the circle. [Note that we suddenly need to keep track of the number of rotations, which we can do by keeping track of time, as time units become some angle, and linear speed becomes angular speed.] The whole thing underscores, in my view, that it’s only our mind that separates time and space: the reality of the object is just its movement or velocity – and that’s one movement.

My gut instinct tells me that this is what the periodicity of the wavefunction (or its component waves, I should say) captures, somehow. If the movement is linear, it’s linear both in space as well in time, so to speak:

  • As a mental construct, time is always linear – it goes in one direction (and we think of the clock being regular, i.e. not slowing down or speeding up) – and, hence, the mathematical qualities of the time variable in the wavefunction are the same as those of the position variable: it’s a factor in one of its two terms. To be precise, it appears as the t in the E·t term in the argument θ = E·t – p·x. [Note the minus sign appears because we measure angles counter-clockwise when using polar coordinates or complex numbers.]
  • The trajectory in space is also linear – whether or not space is curved because of the presence of other masses.

OK. I should conclude here, but I want to take this conversation one step further. Think of the two graphs below as representing some oscillation in space. Some object that goes back and forth in space: it accelerates and decelerates—and reverses direction. Imagine the g-forces on it as it does so: if you’d be traveling with that object, you would sure feel it’s going back and forth in space! The graph on the left-hand side is our usual perspective on stuff like this: we measure time using some steadily ticking clock, and so the seconds, minutes, hours, days, etcetera just go by.

graph 1

The graph on the right-hand side applies our inversion technique. But, frankly, it’s the same thing: it doesn’t give us any new information. It doesn’t look like a well-behaved function but it actually is. It’s just a matter of mathematical convention: if we’d be used to looking at the y-axis as the independent variable (rather than the dependent variable), the function would be acceptable.

This leads me to the idea I started to explore in my previous post, and that’s to try to think of wavefunctions as oscillations of spacetime, rather than oscillations in spacetime. I inserted the following graph in that post—but it doesn’t say all that much, as it suggests we’re doing the same thing here: we’re just swapping axes. The difference is that the θ in the first graph now combines both time and space. We might say it represents spacetime itself. So the wavefunction projects it into some other ‘space’, i.e. the complex space. And then in the second graph, we reflect the whole thing.

dependent independent

So the idea is the following: our functions sort of project one ‘space’ into another ‘space’. In this case: the wavefunction sort of transforms spacetime – i.e. what we like to think of as the ‘physical’ space – into a complex space – which is purely mathematical.

Hmm… This post is becoming way too long, so I need to wrap it up. Look at the graph below, and note the dimension of the axes. We’re looking at an oscillation once more, but an oscillation of time this time around.

Graph 2

Huh? Yes. Imagine that, for some reason, you don’t feel those g-forces while going up and down in space: it’s the rest of the world that’s moving. You think you’re stationary or—what amounts to the same according to the relativity principle—moving in a straight line at constant velocity. The only way you could explain the rest of the world moving back and forth, accelerating and decelerating, is that time itself is oscillating: objects reverse their direction for no apparent reason—so that’s time reversal—and they do so at varying speeds, so we’ve got a clock going wild!

You’ll nod your head in agreement now and say: that’s Einstein’s intuition in regard to the gravitational force. There’s no force really: mass just bends spacetime in such a way a planet in orbit follows a straight line, in a curved spacetime continuum. What I am saying here is that there must be ways to think of the electromagnetic force in exactly the same way. If the accelerations and decelerations of an electron moving in some orbital were really due to an electromagnetic force in the classical picture of a force (i.e. something pulling or pushing), then it would radiate energy away. We know it doesn’t do that—because otherwise it would spiral down into the nucleus itself. So I’ve been thinking it must be traveling in its own curved spacetime, but then it’s curved because of the electromagnetic force—obviously, as that’s the force that’s much more relevant at this scale.

The underlying thought is simple enough: if gravity curves spacetime, why can’t we look at the other forces as doing the same? Why can’t we think of any force coming ‘with its own space’, so to say? The difference between the various forces is the curvature – which will, obviously, be much more complex (literally) for the electromagnetic force. Just think of the other forces as curving space in more than one dimension. 🙂

I am sure you’ll think I’ve just gone crazy. Perhaps. In any case, I don’t care too much. As mentioned, because the electromagnetic force is different—we don’t have negative masses attracting positive masses when discussing gravity—it’s going to be a much weirder type of curvature, but… Well… That’s probably why we need those ‘two-dimensional’ complex numbers when discussing quantum mechanics! 🙂 So we’ve got some more mathematical dimensions, but the physical principle behind all forces should be the same, no? All forces are measured using Newton’s Law, so we relate them to the motion of some mass. The principle is simple: if force is related to the change in motion of a mass, then the trajectory in the space that’s related to that force will be linear if the force is not acting.

So… Well… Hmm… What? 

All of what I write above is a bit of a play with words, isn’t it? An oscillation of spacetime—but then spacetime must oscillate in something else, doesn’t it? So what, then, is it oscillating in?

Great question. You’re right. It must be oscillating in something else or, to be precise, we need some other reference space so as to define what we mean by an oscillation of spacetime. That space is going to be some complex mathematical space—and I use complex both in its mathematical as well as in its everyday meaning here (complicated). Think of, for example, that x-axis representing three-dimensional space. We’d have something similar here: dimensions within dimensions.

There are some great videos on YouTube that illustrate how one can turn a sphere inside out without punching a hole in it. That’s basically what we’re talking about here: it’s more than just switching the range for the domain of a function, which we can do by that reflection – or mirroring – using the 45º line. Conceptually, it’s really like turning a sphere inside out. Think of the surface of the curve connecting the two spaces.

Huh? Yes. But… Well… You’re right. Stuff like this is for the graduate level, I guess. So I’ll let you think about it—and do watch the videos that follow it. 🙂

In any case, I have to stop my wandering about here. Rather than wrapping up, however, I thought of something else yesterday—and so I’ll quickly jot that down as well, so I can re-visit it some other time. 🙂

Some other thinking on the Uncertainty Principle

I wanted to jot down something else too here. Something about the Uncertainty Principle once more. In my previous post, I noted we should think of Planck’s constant as expressing itself in time or in space, as we have two ways of looking at the dimension of Planck’s constant:

  1. [Planck’s constant] = [ħ] = N∙m∙s = (N∙m)∙s = [energy]∙[time]
  2. [Planck’s constant] = [ħ] = N∙m∙s = (N∙s)∙m = [momentum]∙[distance]

The bracket symbols [ and ] mean: ‘the dimension of what’s between the brackets’. Now, this may look like kids stuff, but the idea is quite fundamental: we’re thinking here of some amount of action (ħ, i.e. the quantum of action) expressing itself in time or, alternatively, expressing itself in space, indeed. In the former case, some amount of energy is expended during some time. In the latter case, some momentum is expended over some distance. We also know ħ can be written in terms of fundamental units, which are referred to as Planck units:

ħ = FP∙lP∙tP = Planck force unit × Planck distance unit × Planck time unit
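
That product is easy to verify numerically—a quick check with SciPy’s physical constants:

```python
# ħ equals the Planck force times the Planck length times the Planck time.
from scipy.constants import hbar, c, G

l_P = (hbar * G / c**3) ** 0.5   # Planck length (≈ 1.6e-35 m)
t_P = l_P / c                    # Planck time (≈ 5.4e-44 s)
F_P = c**4 / G                   # Planck force (≈ 1.2e44 N)
print(F_P * l_P * t_P, hbar)     # both ≈ 1.0546e-34 J·s
```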

Finally, we thought of the Planck distance unit and the Planck time unit as the smallest units of time and distance possible. As such, they become countable variables, so we’re talking of a trajectory in terms of discrete steps in space and time here, or discrete states of our particle. As such, the E·t and p·x in the argument (θ) of the wavefunction—remember: θ = (E/ħ)·t − (p/ħ)·x—should be some multiple of ħ as well. We may write:

E·t = m·ħ and p·x = n·ħ, with m and n both positive integers

Of course, there’s uncertainty: Δp·Δx ≥ ħ/2 and ΔE·Δt ≥ ħ/2. Now, if Δx and Δt also become countable variables, so Δx and Δt can only take on values like ±1, ±2, ±3, ±4, etcetera, then we can think of trying to model some kind of random walk through spacetime, combining various values for n and m, as well as various values for Δx and Δt. The relation between E and p, and the related difference between m and n, should determine in what direction our particle should be moving even if it can go along different trajectories. In fact, Feynman’s path integral formulation of quantum mechanics tells us it’s likely to move along different trajectories at the same time, with each trajectory having its own amplitude. Feynman’s formulation uses continuum theory, of course, but a discrete analysis – using a random walk approach – should yield the same result because, when everything is said and done, the fact that physics tells us time and space must become countable at some scale (the Planck scale), suggests that continuum theory may not represent reality, but just be an approximation: a limiting situation, in other words.
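
Just to fix the idea—and nothing more than that—here’s a toy sketch of such a random walk: at each tick of the (Planck) clock, our particle hops one (Planck) length unit forward or backward, with a bias set by β = v/c. It’s an illustration of countable steps only, not a model of any actual quantum dynamics:

```python
import random

def random_walk(steps=100000, beta=0.5, seed=42):
    """Return the average velocity (in units of c) over the walk."""
    random.seed(seed)
    x = 0
    p_forward = (1 + beta) / 2   # chosen so that P(+1) − P(−1) = β
    for _ in range(steps):
        x += 1 if random.random() < p_forward else -1
    return x / steps

print(random_walk())             # ≈ 0.5: the drift we put in comes back out
```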

Hmmm… Interesting… I’ll need to do something more with this. Unfortunately, I have little time over the coming weeks. Again, I am just  writing it down to re-visit it later—probably much later. 😦

The Uncertainty Principle

In my previous post, I showed how Feynman derives Schrödinger’s equation using a historical and, therefore, quite intuitive approach. The approach was intuitive because the argument used a discrete model, so that’s stuff we are well acquainted with—like a crystal lattice, for example. However, we’re now going to think continuity from the start. Let’s first see what changes in terms of notation.

New notations

Our C(xn, t) = 〈xn|ψ〉 now becomes C(x) = 〈x|ψ〉. This notation does not explicitly show the time dependence but then you know amplitudes like this do vary in space as well as in time. Having said that, the analysis below focuses mainly on their behavior in space, so it does make sense to not explicitly mention the time variable. It’s the usual trick: we look at how stuff behaves in space or, alternatively, in time. So we temporarily ‘forget’ about the other variable. That’s just how we work: it’s hard for our mind to think about these wavefunctions in both dimensions simultaneously although, ideally, we should do that.

Now, you also know that quantum physicists prefer to denote the wavefunction C(x) with some Greek letter: ψ (psi) or φ (phi). Feynman thinks it’s somewhat confusing because we use the same symbol to denote a state itself, but I don’t agree. I think it’s pretty straightforward. In any case, we write:

ψ(x) = Cψ(x) = C(x) = 〈x|ψ〉

The next thing is the associated probabilities. From your high school math course, you’ll surely remember that we have two types of probability distributions: they are either discrete or, else, continuous. If they’re continuous, then our probability distribution becomes a probability density function (PDF) and, strictly speaking, we should no longer say that the probability of finding our particle at any particular point x at some time t is this or that. That probability is, strictly speaking, zero: if our variable is continuous, then our probability is defined for an interval only, and the P[x] value itself is referred to as a probability density. So we’ll look at little intervals Δx, and we can write the associated probability as:

prob (x, Δx) = |〈x|ψ〉|2Δx = |ψ(x)|2Δx

The idea is illustrated below. We just re-divide our continuous scale in little intervals and calculate the surface of some tiny elongated rectangle now. 🙂

image024

It is also easy to see that, when moving to an infinite set of states, our 〈φ|ψ〉 = ∑〈φ|x〉〈x|ψ〉 (over all x) formula for calculating the amplitude for a particle to go from state ψ to state φ should now be written as an infinite sum, i.e. as the following integral:

amplitude continuous

Now, we know that 〈φ|x〉 = 〈x|φ〉* and, therefore, this integral can also be written as:

integral

For example, if φ(x) = 〈x|φ〉 is equal to a simple exponential, so we can write φ(x) = a·e^(−iθ), then φ*(x) = 〈φ|x〉 = a·e^(+iθ).

With that, we’re ready for the plat de résistance, except for one thing, perhaps: we don’t look at spin here. If we’d do that, we’d have to take two sets of base states: one for up and one for down spin—but we don’t worry about this, for the time being, that is. 🙂

The momentum wavefunction

Our wavefunction 〈x|ψ〉 varies in time as well as in space. That’s obvious. How exactly depends on the energy and the momentum: both are related and, hence, if there’s uncertainty in the momentum, there will be uncertainty in the energy, and vice versa. Uncertainty in the momentum changes the behavior of the wavefunction in space—through the p = ħk factor in the argument of the wavefunction (θ = ω·t − k·x)—while uncertainty in the energy changes the behavior of the wavefunction in time—through the E = ħω relation. As mentioned above, we focus on the variation in space here. We’ll do so by defining a new state, which is referred to as a state of definite momentum. We’ll write it as mom p, and so now we can use the Dirac notation to write the amplitude for an electron to have a definite momentum equal to p as:

φ(p) = 〈 mom p | ψ 〉

Now, you may think that the 〈x|ψ〉 and 〈mom p|ψ〉 amplitudes should be the same because, surely, we do associate the state with a definite momentum p, don’t we? Well… No! If we want to localize our wave ‘packet’, i.e. localize our particle, then we’re actually not going to associate it with a definite momentum. See my previous posts: we’re going to introduce some uncertainty so our wavefunction is actually a superposition of more elementary waves with slightly different (spatial) frequencies. So we should just go through the motions here and apply our integral formula to ‘unpack’ this amplitude. That goes as follows:

integral 2

So, as usual, when seeing a formula like this, we should remind ourselves of what we need to solve. Here, we assume we somehow know the ψ(x) = 〈x|ψ〉 wavefunction, so the question is: what do we use for 〈 mom p | x 〉? At this point, Feynman wanders off to start a digression on normalization, which really confuses the picture. When everything is said and done, the easiest thing to do is to just jot down the formula for that 〈mom p | x〉 in the integrand and think about it for a while:

〈mom p | x〉 = e^(−i(p/ħ)∙x)

I mean… What else could it be? This formula is very fundamental, and I am not going to try to explain it. As mentioned above, Feynman tries to ‘explain’ it by some story about probabilities and normalization, but I think his ‘explanation’ just confuses things even more. Really, what else would it be? The formula above really encapsulates what it means if we say that p and x are conjugate variables. [I can already note, of course, that symmetry implies that we can write something similar for energy and time. Indeed, we can define a state of definite energy as 〈E | ψ〉, and then ‘unpack’ it in the same way, and see that one of the two factors in the integrand would be equal to 〈E | t〉 and, of course, we’d associate a similar formula with it:

〈E | t〉 = e^(i(E/ħ)∙t)]

But let me get back to the lesson here. We’re analyzing stuff in space now, not in time. Feynman gives a simple example here. He suggests a wavefunction which has the following form:

ψ(x) = K·e^(−x²/4σ²)

The example is somewhat disingenuous because this is not a complex-valued but a real-valued function. In fact, squaring it, and then applying the normalization condition (all probabilities have to add up to one), yields the normal probability distribution:

prob (x, Δx) = P(x)·dx = (2πσ²)^(−1/2)·e^(−x²/2σ²)·dx

So that’s just the normal distribution for μ = 0, as illustrated below.

720px-Normal_Distribution_PDF

In any case, the integral we have to solve now is:

Integral 3

Now, I hate integrals as much as you do (probably more) and so I assume you’re also only interested in the result (if you want the detail: check it in Feynman), which we can write as:

φ(p) = (2πη²)^(−1/4)·e^(−p²/4η²), with η = ħ/2σ

This formula is totally identical to the ψ(x) = (2πσ²)^(−1/4)·e^(−x²/4σ²) distribution we started with, except that it’s got another sigma value, which we denoted by η (and that’s not nu but eta), with

η = ħ/2σ

Just for the record, Feynman refers to η and σ as the ‘half-width’ of the respective distributions. Mathematicians would say they’re the standard deviation. The concepts are nearly the same, but not quite. In any case, that’s another thing I’ll let you find out for yourself. 🙂 The point is: η and σ are inversely proportional to each other, and the constant of proportionality is equal to ħ/2.

Now, if we take η and σ as measures of the uncertainty in p and x respectively – which is what they are, obviously! – then we can re-write that η = ħ/2σ formula as η·σ = ħ/2 or, better still, as the Uncertainty Principle itself:

ΔpΔx = ħ/2
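
You can verify this numerically too. The sketch below (with ħ set to 1, for convenience) computes φ(p) as the Fourier transform of our Gaussian ψ(x) on a grid and checks that the product of the two widths comes out at ħ/2:

```python
import numpy as np

hbar, sigma = 1.0, 0.7
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

p = np.linspace(-6, 6, 801)
dp = p[1] - p[0]
# phi(p) = ∫ exp(−i·p·x/ħ)·psi(x)·dx, computed as a plain Riemann sum
phi = (np.exp(-1j * np.outer(p, x) / hbar) * psi).sum(axis=1) * dx

prob_p = np.abs(phi)**2
prob_p /= prob_p.sum() * dp                 # normalize |φ(p)|² to a proper PDF
eta = np.sqrt((p**2 * prob_p).sum() * dp)   # standard deviation of p (mean is 0)
print(eta * sigma, hbar / 2)                # both ≈ 0.5
```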

You’ll say: that’s great, but we usually see the Uncertainty Principle written as:

ΔpΔx ≥ ħ/2

So where does that come from? Well… We choose a normal distribution (or the Gaussian distribution, as physicists call it), and so that yields the ΔpΔx = ħ/2 identity. If we’d chosen another one, we’d find a slightly different relation and so… Well… Let me quote Feynman here: “Interestingly enough, it is possible to prove that for any other form of a distribution in x or p, the product ΔpΔx cannot be smaller than the one we have found here, so the Gaussian distribution gives the smallest possible value for the ΔpΔx product.”

This is great. So what about the even more approximate ΔpΔx ≥ ħ formula? Where does that come from? Well… That’s more like a qualitative version of it: it basically says the minimum value of the same product is of the same order as ħ which, as you know, is pretty tiny: about 1.05×10⁻³⁴ J·s. 🙂 The last thing to note is its dimension: momentum is expressed in newton-seconds and position in meter, obviously. So the uncertainties in them are expressed in the same units, and so the dimension of the product is N·m·s = J·s. So this dimension combines force, distance and time. That’s quite appropriate, I’d say. The ΔEΔt product obviously does the same. But… Well… That’s it, folks! I enjoyed writing this – and I cannot always say the same of other posts! So I hope you enjoyed reading it. 🙂

The de Broglie relations, the wave equation, and relativistic length contraction

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. So no use to read this. Read my recent papers instead. 🙂

Original post:

You know the two de Broglie relations, also known as matter-wave equations:

f = E/h and λ = h/p

You’ll find them in almost any popular account of quantum mechanics, and the writers of those popular books will tell you that f is the frequency of the ‘matter-wave’, and λ is its wavelength. In fact, to add some more weight to their narrative, they’ll usually write them in a somewhat more sophisticated form: they’ll write them using ω and k. The omega symbol (using a Greek letter always makes a big impression, doesn’t it?) denotes the angular frequency, while k is the so-called wavenumber. Now, k = 2π/λ and ω = 2π·f and, therefore, using the definition of the reduced Planck constant, i.e. ħ = h/2π, they’ll write the same relations as:

  1. λ = h/p = 2π/k ⇔ k = 2π·p/h
  2. f = E/h = (ω/2π)

⇒ k = p/ħ and ω = E/ħ

They’re the same thing: it’s just that working with angular frequencies and wavenumbers is more convenient, from a mathematical point of view that is: it’s why we prefer expressing angles in radians rather than in degrees (k is expressed in radians per meter, while ω is expressed in radians per second). In any case, the ‘matter wave’ – even Wikipedia uses that term now – is, of course, the amplitude, i.e. the wave-function ψ(x, t), which has a frequency and a wavelength, indeed. In fact, as I’ll show in a moment, it’s got two frequencies: one temporal, and one spatial. I am modest and, hence, I’ll admit it took me quite a while to fully distinguish the two frequencies, and so that’s why I always had trouble connecting these two ‘matter wave’ equations.

Indeed, if they represent the same thing, they must be related, right? But how exactly? It should be easy enough. The wavelength and the frequency must be related through the wave velocity, so we can write: f·λ = v, with v the velocity of the wave, which must be equal to the classical particle velocity, right? And then momentum and energy are also related. To be precise, we have the relativistic energy-momentum relationship: p·c = mv·v·c = mv·c²·v/c = E·v/c. So it’s just a matter of substitution. We should be able to go from one equation to the other, and vice versa. Right?

Well… No. It’s not that simple. We can start with either of the two equations but it doesn’t work. Try it. Whatever substitution you try, there’s no way you can derive one of the two equations above from the other. The fact that it’s impossible is evidenced by what we get when we’d multiply both equations. We get:

  1. f·λ = (E/h)·(h/p) = E/p
  2. v = f·λ ⇒ f·λ = v = E/p ⇔ E = v·p = v·(m·v)

⇒ E = m·v²

Huh? What kind of formula is that? E = m·v²? That’s a formula you’ve never ever seen, have you? It reminds you of the kinetic energy formula of course—K.E. = m·v²/2—but… That factor 1/2 should not be there. Let’s think about it for a while. First note that this E = m·v² relation makes perfect sense if v = c. In that case, we get Einstein’s mass-energy equivalence (E = m·c²), but that’s besides the point here. The point is: if v = c, then our ‘particle’ is a photon, really, and then the E = h·f is referred to as the Planck-Einstein relation. The wave velocity is then equal to c and, therefore, f·λ = c, and so we can effectively substitute to find what we’re looking for:

E/p = (h·f)/(h/λ) = f·λ = c ⇒ E = p·c

So that’s fine: we just showed that the de Broglie relations are correct for photons. [You remember that E = p·c relation, no? If not, check out my post on it.] However, while that’s all nice, it is not what the de Broglie equations are about: we’re talking the matter-wave here, and so we want to do something more than just re-confirm that Planck-Einstein relation, which you can interpret as the limit of the de Broglie relations for v = c. In short, we’re doing something wrong here! Of course, we are. I’ll tell you what exactly in a moment: it’s got to do with the fact we’ve got two frequencies really.

Let’s first try something else. We’ve been using the relativistic E = mv·c² equation above. Let’s try some other energy concept: let’s substitute the E in the f = E/h relation by the kinetic energy and then see where we get—if anywhere at all. So we’ll use the Ekinetic = m∙v²/2 equation. We can then use the definition of momentum (p = m∙v) to write E = p²/(2m), and then we can relate the frequency f to the wavelength λ using the v = λ∙f formula once again. That should work, no? Let’s do it. We write:

  1. E = p²/(2m)
  2. E = h∙f = h·v/λ

⇒ λ = h·v/E = h·v/(p²/(2m)) = h·v/[m²·v²/(2m)] = h/[m·v/2] = 2∙h/p
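
The same substitution in SymPy, just to confirm that stray factor 2 isn’t an algebra slip of mine:

```python
# With E = p²/2m, f = E/h and v = λ·f, we indeed get λ = 2·h/p.
import sympy as sp

h, m, v, p = sp.symbols('h m v p', positive=True)
E = p**2 / (2 * m)            # classical kinetic energy
f = E / h                     # de Broglie: f = E/h
lam = (v / f).subs(p, m * v)  # v = λ·f  ⇒  λ = v/f, with p = m·v
print(sp.simplify(lam))       # 2*h/(m*v), i.e. λ = 2·h/p
```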

So we find λ = 2∙h/p. That is almost right, but not quite: that factor 2 should not be there. Well… Of course you’re smart enough to see it’s just that factor 1/2 popping up once more—but as a reciprocal, this time around. 🙂 So what’s going on? The honest answer is: you can try anything but it will never work, because the f = E/h and λ = h/p equations cannot be related—or at least not so easily. The substitutions above only work if we use that E = m·v² energy concept which, you’ll agree, doesn’t make much sense—at first, at least. Again: what’s going on? Well… Same honest answer: the f = E/h and λ = h/p equations cannot be related—or at least not so easily—because the wave equation itself is not so easy.

Let’s review the basics once again.

The wavefunction

The amplitude of a particle is represented by a wavefunction. If we have no information whatsoever on its position, then we usually write that wavefunction as the following complex-valued exponential:

ψ(x, t) = a·e^(−i·[(E/ħ)·t − (p/ħ)∙x]) = a·e^(−i·(ω·t − k∙x)) = a·e^(i·(k∙x − ω·t)) = a·e^(iθ) = a·(cosθ + i·sinθ)

θ is the so-called phase of our wavefunction and, as you can see, it’s the argument of a wavefunction indeed, with temporal frequency ω and spatial frequency k (if we choose our x-axis so its direction is the same as the direction of k, then we can substitute the k and x vectors for the k and x scalars, so that’s what we’re doing here). Now, we know we shouldn’t worry too much about a, because that’s just some normalization constant (remember: all probabilities have to add up to one). However, let’s quickly develop some logic here. Taking the absolute square of this wavefunction gives us the probability of our particle being somewhere in space at some point in time. So we get the probability as a function of x and t. We write:

P(x, t) = |a·e^(−i·[(E/ħ)·t − (p/ħ)∙x])|² = a²

As all probabilities have to add up to one, we must assume we’re looking at some box in spacetime here. So, if the length of our box is Δx = x2 − x1, then (Δx)·a² = (x2 − x1)·a² = 1 ⇔ Δx = 1/a². [We obviously simplify the analysis by assuming a one-dimensional space only here, but the gist of the argument is essentially correct.] So, freezing time (i.e. equating t to some point t = t0), we get the following probability density function:

Capture

That’s simple enough. The point is: the two de Broglie equations f = E/h and λ = h/p give us the temporal and spatial frequencies in that ψ(x, t) = a·e^(−i·[(E/ħ)·t − (p/ħ)∙x]) relation. As you can see, that’s an equation that implies a much more complicated relationship between E/ħ = ω and p/ħ = k. Or… Well… Much more complicated than what one would think of at first.

To appreciate what’s being represented here, it’s good to play a bit. We’ll continue with our simple exponential above, which also illustrates how we usually analyze those wavefunctions: we either assume we’re looking at the wavefunction in space at some fixed point in time (t = t₀) or, else, at how the wavefunction changes in time at some fixed point in space (x = x₀). Of course, we know that Einstein told us we shouldn’t do that: space and time are related and, hence, we should try to think of spacetime, i.e. some ‘kind of union’ of space and time—as Minkowski famously put it. However, when everything is said and done, mere mortals like us are not so good at that, and so we’re sort of condemned to try to imagine things using the classical cut-up of things. 🙂 So we’ll just use an online graphing tool to play with that a·e^(i(k·x−ω·t)) = a·e^(iθ) = a·(cosθ + i·sinθ) formula.
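If you’d rather not use an online tool, a few lines of Python (numpy and matplotlib) do the same job. The values of a, k and ω below are arbitrary: increase k or ω and you’ll see the ‘density’ of the wave go up.

```python
# Plot Re(ψ) and Im(ψ) of ψ = a·e^(i(k·x − ω·t)), freezing t and x in turn.
# The constants a, k and ω are arbitrary illustrative values.
import numpy as np
import matplotlib.pyplot as plt

a, k, w = 1.0, 2.0, 3.0
x = np.linspace(0, 10, 500)                  # for the fixed-time view (t = 0)
t = np.linspace(0, 10, 500)                  # for the fixed-position view (x = 0)

psi_x = a * np.exp(1j * (k * x - w * 0.0))   # the wavefunction in space at t = t0
psi_t = a * np.exp(1j * (k * 0.0 - w * t))   # the wavefunction in time at x = x0

fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(x, psi_x.real, label='Re(ψ)')
ax1.plot(x, psi_x.imag, label='Im(ψ)')
ax1.set_xlabel('x (t fixed)'); ax1.legend()
ax2.plot(t, psi_t.real, label='Re(ψ)')
ax2.plot(t, psi_t.imag, label='Im(ψ)')
ax2.set_xlabel('t (x fixed)'); ax2.legend()
plt.tight_layout()
plt.show()
```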

Compare the following two graphs, for example. Just imagine we either look at how the wavefunction behaves in space, with the time fixed at some point t = t₀, or, alternatively, at how it behaves in time at some fixed point in space x = x₀. As you can see, increasing k = p/ħ or increasing ω = E/ħ gives the wavefunction a higher ‘density’ in space or, alternatively, in time.

[Graphs: the wavefunction for two different values of k and ω, showing the higher ‘density’ in space and in time]

That makes sense, intuitively. In fact, when thinking about how the energy, or the momentum, affects the shape of the wavefunction, I am reminded of an airplane propeller: as it spins, faster and faster, the propeller acquires some ‘density’, in space as well as in time, as its blades cover more space in less time. It’s an interesting analogy: it helps—me, at least—to think through what that wavefunction might actually represent.

[Image: an airplane propeller]

So as to stimulate your imagination even more, you should also think of representing the real and imaginary part of that ψ = a·e^(i(k·x−ω·t)) = a·e^(iθ) = a·(cosθ + i·sinθ) formula in a different way. In the graphs above, we just showed the sine and cosine in the same plane but, as you know, the real and the imaginary axis are orthogonal, so Euler’s formula ψ = a·e^(iθ) = a·cosθ + i·a·sinθ = Re(ψ) + i·Im(ψ) may also be graphed as follows:

[Illustration: Euler’s formula graphed in three dimensions, with the real and imaginary axes orthogonal]

The illustration above should make you think of yet another illustration you’ve probably seen like a hundred times before: the electromagnetic wave, propagating through space as the magnetic and electric field induce each other, as illustrated below. However, there’s a big difference: Euler’s formula incorporates a phase shift—remember: sinθ = cos(θ − π/2)—and you don’t have that in the graph below. But the difference is much more fundamental than that: it’s really hard to see how one could possibly relate the magnetic and electric field to the real and imaginary part of the wavefunction respectively. Having said that, the mathematical similarity makes one think!

[Illustration: an electromagnetic wave propagating through space, with the electric (E) and magnetic (B) fields inducing each other]

Of course, you should remind yourself of what E and B stand for: they represent the strength of the electric (E) and magnetic (B) field at some point x at some time t. So you shouldn’t think of those wavefunctions above as occupying some three-dimensional space. They don’t. Likewise, our wavefunction ψ(x, t) does not occupy some physical space: it’s some complex number—an amplitude that’s associated with each and every point in spacetime. Nevertheless, as mentioned above, the visuals make one think and, as such, do help us as we try to understand all of this in a more intuitive way.

Let’s now look at that energy-momentum relationship once again, but using the wavefunction, rather than those two de Broglie relations.

Energy and momentum in the wavefunction

I am not going to talk about uncertainty here. You know that Spiel. If there’s uncertainty, it’s in the energy or the momentum, or in both. The uncertainty determines the size of that ‘box’ (in spacetime) in which we hope to find our particle, and it’s modeled by a splitting of the energy levels. We’ll say the energy of the particle may be E₀, but it might also be some other value, which we’ll write as Eₙ = E₀ ± n·ħ. The thing to note is that the energy levels will always be separated by some integer multiple of ħ, so ħ is, effectively, the quantum of energy for all practical—and theoretical—purposes. We then superimpose the various wavefunctions to get a wavefunction that might—or might not—resemble something like this:

[Graph: a photon wave]

Who knows? 🙂 In any case, that’s not what I want to talk about here. Let’s repeat the basics once more: if we write our wavefunction a·e^(−i·[(E/ħ)·t − (p/ħ)·x]) as a·e^(−i·(ω·t − k·x)), we refer to ω = E/ħ as the temporal frequency, i.e. the frequency of our wavefunction in time (the frequency it has if we keep the position fixed), and to k = p/ħ as the spatial frequency, i.e. the frequency of our wavefunction in space (so now we stop the clock and just look at the wave in space). Now, let’s think about the energy concept first. The energy of a particle is generally thought of as consisting of three parts:

  1. The particle’s rest energy m₀·c², which de Broglie referred to as internal energy (E_int): it includes the rest mass of the ‘internal pieces’, as Feynman puts it (now we call those ‘internal pieces’ quarks), as well as their binding energy (i.e. the quarks’ interaction energy);
  2. Any potential energy it may have because of some field (so de Broglie was not assuming the particle was traveling in free space), which we’ll denote by U, and note that the field can be anything—gravitational, electromagnetic: it’s whatever changes the energy because of the position of the particle;
  3. The particle’s kinetic energy, which we write in terms of its momentum p: m·v²/2 = m²·v²/(2m) = (m·v)²/(2m) = p²/(2m).

So we have one energy concept here (the rest energy) that does not depend on the particle’s position in spacetime, and two energy concepts that do depend on position (potential energy) and/or on how that position changes because of its velocity and/or momentum (kinetic energy). The last two bits are related through the energy conservation principle. The total energy is E = mᵥ·c², of course—with the little subscript (v) ensuring the mass incorporates the equivalent mass of the particle’s kinetic energy.

So what? Well… In my post on quantum tunneling, I drew attention to the fact that different potentials, and so different potential energies (indeed, as our particle travels from one region to another, the field is likely to vary), have no impact on the temporal frequency. Let me re-visit the argument, because it’s an important one. Imagine two different regions in space that differ in potential—because the field has a larger or smaller magnitude there, or points in a different direction, or whatever: just different fields, which corresponds to different values for U₁ and U₂, i.e. the potential in region 1 versus region 2. Now, the different potential will change the momentum: the particle will accelerate or decelerate as it moves from one region to the other, so we also have a different p₁ and p₂. Having said that, the internal energy doesn’t change, so we can write the corresponding amplitudes, or wavefunctions, as:

  1. ψ₁(θ₁) = Ψ₁(x, t) = a·e^(−iθ₁) = a·e^(−i·[(E_int + p₁²/(2m) + U₁)·t − p₁·x]/ħ)
  2. ψ₂(θ₂) = Ψ₂(x, t) = a·e^(−iθ₂) = a·e^(−i·[(E_int + p₂²/(2m) + U₂)·t − p₂·x]/ħ)

Now how should we think about these two equations? We are definitely talking different wavefunctions. However, their temporal frequencies ω₁ = [E_int + p₁²/(2m) + U₁]/ħ and ω₂ = [E_int + p₂²/(2m) + U₂]/ħ must be the same. Why? Because of the energy conservation principle—or its equivalent in quantum mechanics, I should say: the temporal frequency f or ω, i.e. the time-rate of change of the phase of the wavefunction, does not change: all of the change in potential, and the corresponding change in kinetic energy, goes into changing the spatial frequency, i.e. the wave number k or the wavelength λ, as potential energy becomes kinetic or vice versa. The sum of the potential and kinetic energy doesn’t change, indeed. So the energy remains the same and, therefore, the temporal frequency does not change. In fact, we need this quantum-mechanical equivalent of the energy conservation principle to calculate how the momentum and, hence, the spatial frequency of our wavefunction, changes. We do so by boldly equating ω₁ and ω₂, and so we write:

ω₁ = ω₂ ⇔ E_int + p₁²/(2m) + U₁ = E_int + p₂²/(2m) + U₂

⇔ p₁²/(2m) − p₂²/(2m) = U₂ − U₁ ⇔ p₂² = (2m)·[p₁²/(2m) − (U₂ − U₁)]

⇔ p₂ = (p₁² − 2m·ΔU)^(1/2)

We played with this in a previous post, assuming that p₁² is larger than 2m·ΔU, so as to get a positive number on the right-hand side of the equation for p₂²: we can then confidently take the positive square root of that (p₁² − 2m·ΔU) expression to calculate p₂. For example, when the potential difference ΔU = U₂ − U₁ is negative, so ΔU < 0, we are sure to get some real positive value for p₂.

Having said that, we also contemplated the possibility that p₂² = p₁² − 2m·ΔU was negative, in which case p₂ has to be some pure imaginary number, which we wrote as p₂ = i·p′ (so p′ (read: p prime) is a real positive number here). We could work with that: it resulted in an exponentially decreasing factor e^(−p′·x/ħ) that ended up ‘killing’ the wavefunction in space. However, its limited existence still allowed particles to ‘tunnel’ through potential energy barriers, thereby explaining the quantum-mechanical tunneling phenomenon.
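By the way, it’s easy to play with that p₂ = (p₁² − 2m·ΔU)^(1/2) formula numerically. Here’s a minimal sketch, in natural units (ħ = 1) and with made-up numbers, using a complex square root so it survives the tunneling case:

```python
# Momentum in region 2 after a potential step ΔU = U2 − U1, in natural
# units (ħ = 1). When 2m·ΔU exceeds p1², p2 comes out purely imaginary:
# p2 = i·p′, and the wavefunction picks up a decaying factor e^(−p′·x/ħ).
import cmath

def p2_after_step(p1: float, m: float, dU: float) -> complex:
    """p2 = (p1² − 2m·ΔU)^(1/2), allowing for an imaginary result."""
    return cmath.sqrt(p1**2 - 2 * m * dU)

m, p1 = 1.0, 1.0
print(p2_after_step(p1, m, dU=-0.3))  # ΔU < 0: real p2 > p1, the particle speeds up
print(p2_after_step(p1, m, dU=+0.8))  # 2m·ΔU > p1²: p2 = i·p′ (tunneling regime)
```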

This is rather weird—at first, at least. Indeed, one would think that, because of the E/ħ = ω equation, any change in energy would lead to some change in ω. But no! The total energy doesn’t change, and the potential and kinetic energy are like communicating vessels: any change in potential energy is associated with a change in p, and vice versa. It’s a really funny thing. It helps to think it’s because the potential depends on position only, and so it should not have an impact on the temporal frequency of our wavefunction. Of course, it’s equally obvious that the story would change drastically if the potential changed with time, but… Well… We’re not looking at that right now. In short, we’re assuming energy is being conserved in our quantum-mechanical system too, and that implies what’s described above: no change in ω, but we obviously do have changes in p whenever our particle goes from one region in space to another and the potentials differ. So… Well… Just remember: the energy conservation principle implies that the temporal frequency of our wavefunction doesn’t change. Any change in potential, as our particle travels from one place to another, plays out through the momentum.

Now that we know that, let’s look at those de Broglie relations once again.

Re-visiting the de Broglie relations

As mentioned above, we usually think in one dimension only: we either freeze time or, else, we freeze space. If we do that, we can derive some funny new relationships. Let’s first simplify the analysis by re-writing the argument of the wavefunction as:

θ = E·t − p·x

Of course, you’ll say: the argument of the wavefunction is not equal to E·t − p·x: it’s (E/ħ)·t − (p/ħ)·x. Moreover, θ should have a minus sign in front. Well… Yes, you’re right. We should put that 1/ħ factor in front, but we can change units, and so let’s just measure both E as well as p in units of ħ here. We can do that. No worries. And, yes, the minus sign should be there—Nature chose a clockwise direction for θ—but that doesn’t matter for the analysis hereunder.

The E·t − p·x expression reminds one of those invariant quantities in relativity theory. But let’s be precise here. We’re thinking about those so-called four-vectors here, which we wrote as pμ = (E, px, py, pz) = (E, p) and xμ = (t, x, y, z) = (t, x) respectively. [Well… OK… You’re right. We wrote those four-vectors as pμ = (E, px·c, py·c, pz·c) = (E, p·c) and xμ = (c·t, x, y, z) = (c·t, x). So what we write is true only if we measure time and distance in equivalent units, so we have c = 1. So… Well… Let’s do that and move on.] In any case, what was invariant was not E·t − p·x·c or c·t − x (the latter is a nonsensical expression anyway: you cannot subtract a vector from a scalar), but pμ² = pμpμ = E² − (p·c)² = E² − p²·c² = E² − (px² + py² + pz²)·c² and xμ² = xμxμ = (c·t)² − x² = c²·t² − (x² + y² + z²) respectively. [Remember pμpμ and xμxμ are four-vector dot products, so they have that +−−− signature, unlike the p² and x² or a·b dot products, which are just a simple sum of the squared components.] So… Well… E·t − p·x is not an invariant quantity. Let’s try something else.
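Before we do, a quick aside for those who like to see such things concretely: here’s a minimal sketch of that +−−− dot product, with invented numbers and c = 1.

```python
# Four-vector 'dot product' with the +−−− signature, in units where c = 1.
# The example four-vectors below are invented for illustration.
def minkowski_dot(a, b):
    """aμbμ = a0·b0 − a1·b1 − a2·b2 − a3·b3."""
    return a[0] * b[0] - sum(ai * bi for ai, bi in zip(a[1:], b[1:]))

p_mu = (5.0, 3.0, 0.0, 0.0)       # (E, px, py, pz)
print(minkowski_dot(p_mu, p_mu))  # → 16.0: E² − p², the invariant mass squared

x_mu = (2.0, 1.0, 1.0, 0.0)       # (t, x, y, z)
print(minkowski_dot(x_mu, x_mu))  # → 2.0: t² − (x² + y² + z²)
```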

Let’s re-simplify by equating ħ as well as c to one, so we write: ħ = c = 1. [You may wonder if it is possible to ‘normalize’ both physical constants simultaneously, but the answer is yes. The Planck unit system is an example.] Then our relativistic energy-momentum relationship can be re-written as E/p = 1/v. [If c were not one, we’d write: E·β = p·c, with β = v/c, so we’d get E/p = c/β. We referred to β as the relative velocity of our particle: it was the velocity, but measured as a ratio of the speed of light. So here it’s the same, except that we use the velocity symbol v now for that ratio.]

Now think of a particle moving in free space, i.e. without any fields acting on it, so we don’t have any potential changing the spatial frequency of the wavefunction of our particle, and let’s also assume we choose our x-axis such that it’s the direction of travel, so the position vector (x) can be replaced by a simple scalar (x). Finally, we will also choose the origin of our x-axis such that x = 0 when t = 0, so we write: x(t = 0) = 0. It’s obvious then that, if our particle is traveling in spacetime with some velocity v, then the ratio of its position x and the time t that it’s been traveling will always be equal to v = x/t. Hence, for that very special position in spacetime (t, x = v·t) – so we’re talking the actual position of the particle in spacetime here – we get: θ = E·t − p·x = E·t − p·v·t = E·t − m·v·v·t = (E − m·v²)·t. So… Well… There we have the m·v² factor.

The question is: what does it mean? How do we interpret this? I am not sure. When I first jotted this thing down, I thought of choosing a different reference potential: some negative value such that it ensures that the sum of kinetic, rest and potential energy is zero, so I could write E = 0, and then the wavefunction would reduce to ψ(t) = a·e^(−i·m·v²·t). Feynman refers to that as ‘choosing the zero of our energy scale such that E = 0’, and you’ll find this in many other works too. However, it’s not that simple. Free space is free space: if there’s no change in potential from one region to another, then the concept of some reference point for the potential becomes meaningless. There is only rest energy and kinetic energy, then. The total energy reduces to E = m (because we chose our units such that c = 1 and, therefore, E = m·c² = m·1² = m) and so our wavefunction reduces to:

ψ(t) = a·e^(i·m·(1 − v²)·t)

We can’t reduce this any further. The mass is the mass: it’s a measure of inertia, as measured in our inertial frame of reference. And the velocity is the velocity, of course—also as measured in our frame of reference. We can re-write it, of course, by substituting t for t = x/v, so we get:

ψ(x) = a·e^(i·m·(1/v − v)·x)

For both functions, we get constant probabilities, but a wavefunction that’s ‘denser’ for higher values of m. The (1 − v²) and (1/v − v) factors are different, however: these factors become smaller for higher v, so our wavefunction becomes less dense for higher v. In fact, for v = 1 (so for travel at the speed of light, i.e. for photons), we get ψ(t) = ψ(x) = a·e⁰ = a. [You should use the graphing tool once more, and you’ll see the imaginary part, i.e. the sine in the (cosθ + i·sinθ) expression, just vanishes, as sinθ = 0 for θ = 0.]

[Graph: the wavefunction for v = 1, reducing to a constant]

The wavefunction and relativistic length contraction

Are exercises like this useful? As mentioned above, these constant-probability wavefunctions are a bit nonsensical, so you may wonder why I wrote what I wrote. There may be no real conclusion, indeed: I was just fiddling around a bit, and playing with equations and functions. I feel stuff like this helps me to understand somewhat better what that wavefunction actually is. If anything, it does illustrate that idea of the ‘density’ of a wavefunction, in space or in time. What we’ve been doing by substituting x for x = v·t, or t for t = x/v, is showing how, when everything is said and done, the mass and the velocity of a particle are the actual variables determining that ‘density’ and, frankly, I really like that ‘airplane propeller’ idea as a pedagogic device. In fact, I feel it may be more than just a pedagogic device, and so I’ll surely re-visit it—once I’ve gone through the rest of Feynman’s Lectures, that is. 🙂

That brings me to what I added in the title of this post: relativistic length contraction. You’ll wonder why I am bringing that into a discussion like this. Well… Just play a bit with those (1 − v²) and (1/v − v) factors. As mentioned above, they decrease the density of the wavefunction. In other words, it’s like space is being ‘stretched out’. Also, it can’t be a coincidence that we find the same (1 − v²) factor in the relativistic length contraction formula: L = L₀·√(1 − v²), in which L₀ is the so-called proper length (i.e. the length in the stationary frame of reference) and v is the (relative) velocity of the moving frame of reference. Of course, we also find it in the relativistic mass formula: m = mᵥ = m₀/√(1 − v²). In fact, things become much more obvious when substituting m for m₀/√(1 − v²) in that ψ(t) = a·e^(i·m·(1 − v²)·t) function. We get:

ψ(t) = a·e^(i·m·(1 − v²)·t) = a·e^(i·m₀·√(1 − v²)·t)

Well… We’re surely getting somewhere here. What if we go back to our original ψ(x, t) = a·e^(i·[(E/ħ)·t − (p/ħ)·x]) function (ignoring that minus sign in front of the argument once more)? Using natural units once again, that’s equivalent to:

ψ(x, t) = a·e^(i·(m·t − p·x)) = a·e^(i·[(m₀/√(1−v²))·t − (m₀·v/√(1−v²))·x])

= a·e^(i·[m₀/√(1−v²)]·(t − v·x))

Interesting! We’ve got a wavefunction that’s a function of x and t, but with the rest mass (or rest energy) and velocity as parameters! Now that really starts to make sense. Look at the (blue) graph for that 1/√(1−v²) factor: it goes from one (1) to infinity (∞) as v goes from 0 to 1 (remember we ‘normalized’ v: it’s a ratio between 0 and 1 now). So that’s the factor that comes into play for t. For x, it’s the red graph of the v/√(1−v²) factor, which has the same shape but goes from zero (0) to infinity (∞) as v goes from 0 to 1.

[Graph: the 1/√(1−v²) factor (blue) and the v/√(1−v²) factor (red) as functions of v]

Now that makes sense: the ‘density’ of the wavefunction, in time and in space, increases as the velocity v increases. In space, that should correspond to the relativistic length contraction effect: it’s like space is contracting, as the velocity increases and, therefore, the length of the object we’re watching contracts too. For time, the reasoning is a bit more complicated: it’s our time that becomes more dense and, therefore, our clock that seems to tick faster.
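If you want to see how fast those two factors blow up, a few lines of Python will tabulate them. The sample speeds are arbitrary, of course:

```python
# The two factors that set the 'density' of the wavefunction in t and in x:
# 1/√(1−v²) for time and v/√(1−v²) for space, with v a fraction of c.
import math

for v in (0.1, 0.5, 0.9, 0.99):
    gamma = 1 / math.sqrt(1 - v**2)   # goes from 1 to ∞ as v → 1
    print(f"v = {v:<5} t-factor = {gamma:8.3f}  x-factor = {v * gamma:8.3f}")
```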

[…]

I know I need to explore this further—if only so as to assure you I have not gone crazy. Unfortunately, I have no time to do that right now. Indeed, from time to time, I need to work on other stuff besides this physics ‘hobby’ of mine. :-/

Post scriptum 1: As for the E = m·v² formula, I also have a funny feeling that it might be related to the fact that, in quantum mechanics, both the real and the imaginary part of the oscillation actually matter. You’ll remember that we’d represent any oscillator in physics by a complex exponential, because it eased our calculations. So instead of writing A = A₀·cos(ωt + Δ), we’d write: A = A₀·e^(i(ωt + Δ)) = A₀·cos(ωt + Δ) + i·A₀·sin(ωt + Δ). When calculating the energy or intensity of a wave, however, we couldn’t just take the square of the complex amplitude of the wave (remembering that E ∼ A²). No! We had to get back to the real part only, i.e. the cosine or the sine only. Now, the mean (or average) value of the squared cosine function (or of a squared sine function), over one or more cycles, is 1/2, so the mean of A² is equal to (1/2)·A₀². I am not sure, and it’s probably a long shot, but one should be able to show that, if the imaginary part of the oscillation actually matters too – which is obviously the case for our matter-wave – then the two halves add up, as 1/2 + 1/2 is, obviously, equal to 1. I mean: try to think of an image with a mass attached to two springs, rather than one only. Does that make sense? 🙂 […] I know: I am just freewheeling here. 🙂

Post scriptum 2: The other thing that this E = m·v² equation makes me think of is – curiously enough – an eternally expanding spring. Indeed, the kinetic energy of a mass on a spring and the potential energy that’s stored in the spring always add up to some constant, and the average potential and kinetic energy are equal to each other. To be precise: 〈K.E.〉 + 〈P.E.〉 = (1/4)·k·A² + (1/4)·k·A² = k·A²/2. It means that, on average, the total energy of the system is twice the average kinetic energy (or twice the average potential energy). You’ll say: so what? Well… I don’t know. Can we think of a spring that expands eternally, with the mass on its end not gaining or losing any speed? In that case, v is constant, and the total energy of the system would, effectively, be equal to E_total = 2·〈K.E.〉 = 2·(m·v²/2) = m·v².

Post scriptum 3: That substitution I made above – substituting x for x = v·t – is kinda weird. Indeed, if that E = m·v² equation makes any sense, then E − m·v² = 0, of course, and, therefore, θ = E·t − p·x = E·t − p·v·t = E·t − m·v·v·t = (E − m·v²)·t = 0·t = 0. So the argument of our wavefunction is 0 and, therefore, we get a·e⁰ = a for our wavefunction. It basically means our particle is where it is. 🙂

Post scriptum 4: This post scriptum – no. 4 – was added later—much later. On 29 February 2016, to be precise. The solution to the ‘riddle’ above is actually quite simple. We just need to make a distinction between the group and the phase velocity of our complex-valued wave. The solution came to me when I was writing a little piece on Schrödinger’s equation. I noticed that we do not find that weird E = m·v² formula when substituting ψ for ψ = e^(i(kx − ωt)) in Schrödinger’s equation, i.e. in:

∂ψ/∂t = (i·ħ/2m)·∇²ψ

Let me quickly go over the logic. To keep things simple, we’ll just assume one-dimensional space, so ∇²ψ = ∂²ψ/∂x². The time derivative on the left-hand side is ∂ψ/∂t = −iω·e^(i(kx − ωt)). The second-order derivative on the right-hand side is ∂²ψ/∂x² = (ik)·(ik)·e^(i(kx − ωt)) = −k²·e^(i(kx − ωt)). The e^(i(kx − ωt)) factor on both sides cancels out and, hence, equating both sides gives us the following condition:

−i·ω = −(i·ħ/2m)·k² ⇔ ω = (ħ/2m)·k²

Substituting ω = E/ħ and k = p/ħ yields:

E/ħ = (ħ/2m)·(p/ħ)² = p²/(2m·ħ) = m²·v²/(2m·ħ) = m·v²/(2ħ) ⇔ E = m·v²/2
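If you want to double-check that algebra symbolically, sympy will do it for you. A minimal sketch (the symbol names are my own choices, of course):

```python
# Symbolic check: substituting ψ = e^(i(kx − ωt)) into the free-particle
# Schrödinger equation and solving for ω gives ω = (ħ/2m)·k².
import sympy as sp

x, t, k, w, hbar, m = sp.symbols('x t k omega hbar m', positive=True)
psi = sp.exp(sp.I * (k * x - w * t))

# ∂ψ/∂t = (i·ħ/2m)·∂²ψ/∂x², assuming one-dimensional space
schrodinger = sp.Eq(sp.diff(psi, t), (sp.I * hbar / (2 * m)) * sp.diff(psi, x, 2))
print(sp.solve(schrodinger, w))  # → [hbar*k**2/(2*m)]
```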

In short: E = m·v²/2 is the correct formula. It must be, because… Well… Because Schrödinger’s equation is a formula we surely shouldn’t doubt, right? So the only logical conclusion is that we must be doing something wrong when multiplying the two de Broglie equations. To be precise: our v = f·λ equation must be wrong. Why? Well… It’s just something one shouldn’t apply to our complex-valued wavefunction. The ‘correct’ velocity formula for the complex-valued wavefunction should have that 1/2 factor, so we’d write 2·f·λ = v to make things come out alright. But where would this formula come from? The period of cosθ + i·sinθ is the period of the sine and cosine functions: cos(θ + 2π) + i·sin(θ + 2π) = cosθ + i·sinθ, so T = 2π and f = 1/T = 1/2π do not change.

But so that’s a mathematical point of view. From a physical point of view, it’s clear we got two oscillations for the price of one: one ‘real’ and one ‘imaginary’—but both are equally essential and, hence, equally ‘real’. So the answer must lie in the distinction between the group and the phase velocity when we’re combining waves. Indeed, the group velocity of a sum of waves is equal to v_g = dω/dk. In this case, we have:

v_g = d[E/ħ]/d[p/ħ] = dE/dp

We can now use the kinetic energy formula to write E as E = m·v²/2 = p·v/2. Now, v and p are related through m (p = m·v, so v = p/m). So we should write this as E = m·v²/2 = p²/(2m). Substituting E and p = m·v in the equation above then gives us the following:

v_g = dω/dk = d[p²/(2m)]/dp = 2p/(2m) = p/m = v

However, for the phase velocity, we can just use the v_p = ω/k formula, which gives us that 1/2 factor:

v_p = ω/k = (E/ħ)/(p/ħ) = E/p = (m·v²/2)/(m·v) = v/2

Bingo! Riddle solved! 🙂 Isn’t it nice that our formula for the group velocity also applies to our complex-valued wavefunction? I think that’s amazing, really! But I’ll let you think about it. 🙂
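For what it’s worth, you can also verify the solution to the riddle numerically. A minimal sketch, in natural units (ħ = 1) and with made-up numbers:

```python
# Numerical check: for ω = (ħ/2m)·k² (with ħ = 1), the group velocity
# dω/dk equals the particle velocity v, while the phase velocity ω/k is v/2.
m, v = 1.0, 0.6
k = m * v                   # k = p/ħ = m·v, with ħ = 1

def omega(k_):
    return k_**2 / (2 * m)  # the dispersion relation from Schrödinger's equation

dk = 1e-6
v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)  # numerical dω/dk
v_phase = omega(k) / k                                # ω/k

print(round(v_group, 6))  # → 0.6 = v
print(round(v_phase, 6))  # → 0.3 = v/2
```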