An acquaintance sent me a video with her New Year’s wishes, titled “Carl Sagan’s spiritual side.” I liked it, and so I googled a bit further and found that many videos and transcripts now circulate online under the same heading. The framing is very well-intentioned but, in my humble view, also slightly misleading. It suggests a hidden dimension, a concession to religion, or a quiet retreat from science into something softer.
In fact, Carl Sagan was doing almost the opposite. He was insisting that science, taken seriously enough, already carries all the depth, humility, and emotional gravity that people often seek elsewhere.
What he offered was not spirituality instead of science, but a way of inhabiting science without becoming either cynical or metaphysical.
I. Spirituality without the supernatural
Sagan used the word spiritual carefully and sparingly. When he did, he did not mean belief in gods, hidden purposes, or unseen realms. He meant the human response to scale, structure, and intelligibility — the quiet shock of realizing what kind of universe we actually inhabit.
For Sagan, the universe did not become meaningful because it broke its own laws. It became meaningful because it has laws — stable, discoverable, astonishingly productive ones. The miracle was not that anything supernatural intervened, but that matter organized itself into stars, chemistry, life, and eventually minds capable of asking how any of this came to be.
That is not mysticism. It is respect for reality.
II. Wonder as a disciplined response
One of Sagan’s most enduring insights was that wonder is not something science erodes. It is something science trains. Childlike amazement fades quickly; informed amazement deepens with every layer of understanding.
A star does not become less beautiful once you understand nuclear fusion. It becomes more demanding of your attention.
Sagan rejected the idea that seriousness requires emotional distance. He also rejected the opposite idea: that emotion should outrun evidence. His stance was subtler and harder to maintain. Feel deeply — but only about what you have taken the time to understand.
In that sense, wonder was not a mood. It was a discipline.
III. Einstein’s earlier echo
Long before Sagan, Albert Einstein struggled with similar language. When Einstein spoke of a “cosmic religious feeling,” he was not gesturing toward theology. He was pointing to an attitude: humility before order, gratitude for intelligibility, and suspicion of all claims to final certainty.
Einstein’s “mysterious” was not the supernatural. It was the fact that the universe is lawful at all — that abstract reasoning can reach into nature and come back with equations that work.
Sagan did not add much to this philosophically. What he added was clarity of expression, historical context, and a modern voice. If Einstein articulated the posture, Sagan taught generations how to stand in it.
IV. Meaning without guarantees
Neither Einstein nor Sagan believed the universe hands out meaning. The cosmos does not whisper instructions, assign destinies, or promise moral closure. That indifference is not bleak; it is simply honest.
Meaning, on this view, is not discovered like a buried artifact. It is constructed through attention, responsibility, and choice. We care not because the universe demands it, but because we can.
This is where both men quietly diverge from religion and from nihilism alike. There is no cosmic judge — but there is also no excuse to stop caring. The absence of guarantees does not empty life of significance. It places significance squarely in human hands.
That shift is not comforting in the usual sense. It is steadier.
V. Why this still matters
In an age saturated with noise, instant explanations, and synthetic forms of transcendence, Sagan’s voice still feels unusually calm. Not because he offered reassurance, but because he refused shortcuts.
Pay attention. Learn carefully. Stay curious. Accept uncertainty without romanticizing it.
That combination — wonder without illusion, humility without surrender — is rare. It asks more of us than belief systems do. But it also gives more back: a way to stand in the universe without pretending it owes us anything.
Sagan’s spirituality, like Einstein’s before him, was not about escape. It was about orientation. About learning how to look outward without losing intellectual honesty, and inward without inventing metaphysics.
If that still feels “spiritual,” it is only because reality, understood clearly enough, is already more than enough.
On Quantum Mechanics, Meaning, and the Limits of Metaphysical Inquiry
This post is a rewritten version of an essay I published on this blog in September 2020 under the title “The End of Physics.” The original text captured a conviction I still hold: that quantum mechanics is strange but not mysterious, and that much of what is presented as metaphysical depth in modern physics is better understood as interpretive excess. What has changed since then is not the substance of that conviction, but the way I think it should be expressed.
Over the past years, I have revisited several of my physics papers in dialogue with artificial intelligence — not as a replacement for human judgment, but as a tool for clarification, consistency checking, and tone correction. This post is an experiment of the same kind: returning to an older piece of writing with the help of AI, asking not “was I wrong?” but “can this be said more precisely, more calmly, and with fewer rhetorical shortcuts?”
The result is not a repudiation of the 2020 text (and similar ones here on this blog site, or on my ResearchGate page) but a refinement of it. If there is progress here, it lies not in new claims about physics, but in a clearer separation between what physics tells us about the world and what humans sometimes want it to tell us.
— Jean Louis Van Belle, 1 January 2026
After the Mysteries: Physics Without Consolations
For more than a century now, quantum mechanics has been presented as a realm of deep and irreducible mystery. We are told that nature is fundamentally unknowable, that particles do not exist until observed, that causality breaks down at the smallest scales, and that reality itself is somehow suspended in a fog of probabilities.
Yet this way of speaking says more about us than about physics.
Quantum mechanics is undeniably strange. But strange is not the same as mysterious. The equations work extraordinarily well, and — more importantly — we have perfectly adequate physical interpretations for what they describe. Wavefunctions are not metaphysical ghosts. They encode physical states, constraints, and statistical regularities in space and time. Particles such as photons, electrons, and protons are not abstract symbols floating in Hilbert space; they are real physical systems whose behavior can be described using familiar concepts: energy, momentum, charge, field structure, stability.
No additional metaphysics is required.
Over time, however, physics acquired something like a priesthood of interpretation. Mathematical formalisms were promoted from tools to truths. Provisional models hardened into ontologies. Concepts introduced for calculational convenience were treated as if they had to exist — quarks, virtual particles, many worlds — not because experiment demanded it, but because the formalism allowed it.
This is not fraud. It is human behavior.
The Comfort of Indeterminism
There is another, less discussed reason why quantum mechanics became mystified. Indeterminism offered something deeply attractive: a perceived escape hatch from a fully ordered universe.
For some, this meant intellectual freedom. For others, moral freedom. And for some — explicitly or implicitly — theological breathing room.
It is not an accident that indeterminism was welcomed in cultural environments shaped by religious traditions. Many prominent physicists of the twentieth century were embedded — socially, culturally, or personally — in Jewish, Catholic, or Protestant worlds. A universe governed strictly by deterministic laws had long been seen as hostile to divine action, prayer, or moral responsibility. Quantum “uncertainty” appeared to reopen a door that classical physics seemed to have closed.
The institutional embrace of this framing is telling. The Vatican showed early enthusiasm for modern cosmology and quantum theory, just as it did for the Big Bang model — notably developed by Georges Lemaître, a Catholic priest as well as a physicist. The Big Bang fit remarkably well with a creation narrative, and quantum indeterminism could be read as preserving divine freedom in a lawful universe.
None of this proves that physics was distorted intentionally. But it does show that interpretations do not emerge in a vacuum. They are shaped by psychological needs, cultural background, and inherited metaphysical anxieties.
Determinism, Statistics, and Freedom
Rejecting metaphysical indeterminism does not mean endorsing a cold, mechanical universe devoid of choice or responsibility.
Statistical determinism is not fatalism.
Complex systems — from molecules to brains to societies — exhibit emergent behavior that is fully lawful and yet unpredictable in detail. Free will does not require violations of physics; it arises from self-organizing structures capable of evaluation, anticipation, and choice. Moral responsibility is not rescued by randomness. In fact, randomness undermines responsibility far more than lawfulness ever did.
Consciousness, too, does not need mystery to be meaningful. It is one of the most remarkable phenomena we know precisely because it emerges from matter organizing itself into stable, recursive, adaptive patterns. The same principles operate at every scale: atoms in molecules, molecules in cells, cells in organisms, organisms in ecosystems — and, increasingly, artificial systems embedded in human-designed environments.
There is no voice speaking to us from outside the universe. But there is meaning, agency, and responsibility arising from within it.
Progress Without Revelation
It is sometimes said that physics is advancing at an unprecedented pace. In a technical sense, this is true. But conceptually, the situation is more sobering.
Most of the technologies we rely on today — semiconductors, lasers, superconductors, waveguides — were already conceptually understood by the mid-twentieth century and are clearly laid out in The Feynman Lectures on Physics. Later developments refined, scaled, and engineered these ideas, but they did not introduce fundamentally new physical principles.
Large experimental programs have confirmed existing theories with extraordinary precision. That achievement deserves respect. But confirmation is not revelation. Precision is not profundity.
Recognizing this is not pessimism. It is intellectual honesty.
After Physics Ends
If there is an “end of physics,” it is not the end of inquiry, technology, or wonder. It is the end of physics as a source of metaphysical consolation. The end of physics as theology by other means.
What remains is enough: a coherent picture of the material world, an understanding of how complexity and consciousness arise, and the responsibility that comes with knowing there is no external guarantor of meaning.
We are on our own — but not lost.
And that, perhaps, is the most mature scientific insight of all.
I have just uploaded a new working paper to ResearchGate: Ontology, Physics, and Math – Einstein’s Unfinished Revolution. I am not announcing it with any sense of urgency, nor with the expectation that it will “change” physics. If it contributes anything at all, it may simply offer a bit of clarity about what we can reasonably claim to see in physics — and what we merely calculate, fit, or postulate. That distinction has preoccupied me for years.
A space to think
One unexpected consequence of taking AI seriously over the past year or two is that it restored something I had quietly lost: a space to think.
Not a space to produce.
Not a space to publish.
Not a space to compete.
Just a space to think — slowly, carefully, without having to defend a position before it has fully formed. That kind of space has become rare. Academia is under pressure, industry is under pressure, and even independent thinkers often feel compelled to rush toward closure. The conversations I’ve had with AI — what I’ve come to call a corridor — were different. They were not about winning arguments, but about keeping the corridor open, advancing only where conceptual clarity survived.
In a strange way, this brought me back to something much older than AI. When I was young, I wanted to study philosophy. My father refused. I had failed my mathematics exam for engineering studies, and in his view philosophy without mathematics was a dead end. In retrospect, I can see that he was probably right — and also that he struggled with me as much as I struggled with him. He should perhaps have pushed me into mathematics earlier; I should perhaps have worked harder. But life does not run backward, and neither does understanding. What AI unexpectedly gave me, decades later, was the chance to reunite those two threads: conceptual questioning disciplined by mathematical restraint. Not philosophy as free-floating speculation, and not mathematics as pure formalism — but something closer to what physics once called natural philosophy.
Why I was always uncomfortable
For a long time, I could not quite place my discomfort. I was uneasy with mainstream Standard Model theorists — not because their work lacks brilliance or empirical success (it clearly does not), but because formal success increasingly seemed to substitute for ontological clarity. At the same time, I felt equally uneasy among outsiders and “fringe” thinkers, who were often too eager to replace one elaborate ontology with another, convinced that the establishment had simply missed the obvious.
I now think I understand why I could not belong comfortably to either camp. Both, in different ways, tend to underestimate what went into building the Standard Model in the first place.
The Standard Model is not just a theory. It is the result of enormous societal investment (yes, taxes matter), decades of engineering ingenuity, and entire academic ecosystems built around measurement, refinement, and internal consistency. One does not wave that away lightly. Criticizing it without acknowledging that effort is not radical — it is careless.
At the same time, acknowledging that effort does not oblige one to treat the resulting ontology as final. Formal closure is not the same thing as physical understanding.
That tension — respect without reverence — is where I found myself stuck.
Seeing versus calculating
The paper I just uploaded does not attempt to overthrow the Standard Model, nor to replace ΛCDM, nor to propose a new unification. It does something much more modest: it tries to separate what we can physically interpret from what we can formally manipulate.
That distinction was central to the worries of people like Albert Einstein, long before it became unfashionable to worry about such things. Einstein’s famous remark to Max Born — “God does not play dice” — was not a rejection of probability as a calculational tool. It was an expression of discomfort with mistaking a formalism for a description of reality. Something similar motivated Louis de Broglie, and later thinkers who never quite accepted that interpretation should be outsourced entirely to mathematics.
What my paper argues — cautiously, and without claiming finality — is that much of modern physics suffers from a kind of ontological drift: symmetries that began life as mathematical operations sometimes came to be treated as physical mandates.
When those symmetries fail, new quantum numbers, charges, or conservation laws are introduced to restore formal order. This works extraordinarily well — but it also risks confusing bookkeeping with explanation.
Matter, antimatter, and restraint
The most difficult part of the paper concerns matter–antimatter creation and annihilation. For a long time, I resisted interpretations that treated charge as something that could simply appear or disappear. That resistance did not lead me to invent hidden reservoirs or speculative intermediates — on the contrary, I explicitly rejected such moves as ontological inflation. Instead, I left the tension open.
Only later did I realize that insisting on charge as a substance may itself have been an unjustified metaphor. Letting go of that metaphor did not solve everything — but it did restore coherence without adding entities. That pattern — refusing both cheap dismissal and cheap solutions — now feels like the right one.
Ambition, patience, and time
We live in a period of extraordinary measurement and, paradoxically, diminished understanding. Data accumulates. Precision improves. Parameters are refined. But the underlying picture often becomes more fragmented rather than more unified.
New machines may or may not be built. China may or may not build the next CERN. That is largely beyond the control of individual thinkers. What is within reach is the slower task of making sense of what we already know. That task does not reward ambition. It rewards patience.
This is also where I part ways — gently, but firmly — with some bright younger thinkers and some older, semi-wise ones. Not because they are wrong in detail, but because they sometimes underestimate the weight of history, infrastructure, and collective effort behind the theories they critique or attempt to replace. Time will tell whether their alternatives mature. Time always tells :-). […] PS: I add a ‘smiley’ here because, perhaps, that is the most powerful phrase of all in this post.
A pause, not a conclusion
This paper may mark the end of my own physics quest — or at least a pause. Not because everything is resolved, but because I finally understand why I could neither fully accept nor fully reject what I was given. I don’t feel compelled anymore to choose sides. I can respect the Standard Model without canonizing it, and I can question it without trying to dethrone it. I can accept that some questions may remain open, not because we lack data, but because clarity sometimes requires restraint.
For now, that feels like enough. Time to get back on the bike. 🙂
PS: Looking back at earlier philosophical notes I wrote years ago — for instance on the relation between form, substance, and charge — I’m struck less by how “wrong” they were than by how unfinished they remained. The questions were already there; what was missing was discipline. Not more speculation, but sharper restraint.
One of the recurring temptations in physics is to mistake violence for depth.
When we push matter to extreme energy densities—whether in particle colliders or in thought experiments about the early universe—we tend to believe we are peeling away layers of reality, discovering ever more “fundamental” constituents beneath the familiar surface of stable matter. The shorter-lived and more exotic a state is, the more “real” it sometimes appears to us.
The starting point is almost embarrassingly simple: stable charged particles persist; unstable ones do not. That fact alone already carries a surprising amount of explanatory power—if we resist the urge to overinterpret it.
Stability as the exception, not the rule
If we imagine the early universe as a high-energy, high-density environment—a kind of primordial soup—then instability is not mysterious at all. Under such conditions, long-lived, self-consistent structures should be rare. Most configurations would be fleeting, short-lived, unable to maintain their identity.
From this perspective, stable particles are not “primitive building blocks” in a metaphysical sense. They are low-energy survivors: configurations that remain coherent once the universe cools and energetic chaos subsides.
Stability, then, is not something that needs to be explained away. It is the phenomenon that needs to be accounted for.
Colliders as stress tests, not ontological excavations
Modern facilities such as CERN allow us to recreate, for fleeting moments, energy densities that no longer exist naturally in the present universe. What we observe there—resonances, decay chains, short-lived states—is fascinating and deeply informative.
But there is a subtle conceptual shift that often goes unnoticed.
These experiments do not necessarily reveal deeper layers of being. They may instead be doing something more modest and more honest: testing how known structures fail under extreme conditions.
In that sense, unstable high-energy states are not more fundamental than stable ones. They are what stability looks like when it is pushed beyond its limits.
A simpler cosmological intuition
Seen this way, cosmogenesis does not require an ever-growing menagerie of proto-entities. A universe that begins hot and dense will naturally favor instability. As it cools, only a small number of configurations will remain phase-coherent and persistent.
Those are the particles we still see today.
No exotic metaphysics is required—only the recognition that persistence is meaningful.
Were the mega-projects worth it?
This perspective does not diminish the value of large-scale scientific projects. On the contrary.
The enormous investments behind colliders or fusion experiments—think of projects like ITER—have given us something invaluable: empirical certainty. They confirmed, with extraordinary precision, intuitions already sensed by the giants of the early twentieth century—figures like Albert Einstein, Paul Dirac, and Erwin Schrödinger.
Perhaps the deepest outcome of these projects is not that they uncovered a hidden zoo of ultimate constituents, but that they showed how remarkably robust the basic structure of physics already was.
That, too, is progress.
Knowing when not to add layers
Physics advances not only by adding entities and mechanisms, but also by learning when not to do so. Sometimes clarity comes from subtraction rather than accumulation.
If nothing else, the simple distinction between stable and unstable charged particles reminds us of this: reality does not owe us an ever-deeper ontology just because we can afford to build more powerful machines.
And perhaps that realization—quiet, unglamorous, but honest—is one of the most valuable lessons high-energy physics has taught us.
This reflection builds directly on an earlier blog post, “Stability First: A Personal Programme for Re-reading Particle Physics” (18 December 2025), in which I outlined a deliberate shift in emphasis: away from ontological layering and towards persistence as a physical criterion. That post introduced the motivation behind Lecture X1—not as a challenge to established data or formalisms, but as an invitation to reread them through a simpler lens. What follows can be read as a continuation of that programme: an attempt to see whether the basic distinction between stable and unstable charged particles already carries more explanatory weight than we usually grant it.
Post Scriptum — An empirical follow-up
When I wrote this piece, the emphasis was deliberately conceptual. The central idea was to treat stability versus instability as a primary organizing perspective, rather than starting from particle families, quark content, or other internal classifications. At the time, I explicitly presented this as an intuition — something that felt structurally right, but that still needed to be confronted with data in a disciplined way.
That confrontation has now been carried out.
Using the Particle Data Group listings as a source, I constructed a deliberately minimalist dataset containing only two observables: rest mass and lifetime. All a priori particle classifications were excluded. Stable or asymptotic states were removed, as were fractionally charged entities, leaving an unclassified ensemble of unstable particles. The resulting mass–lifetime landscape was examined in logarithmic coordinates and subjected to density-based clustering, with the full data table included to allow independent reanalysis.
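The procedure described above can be sketched in a few lines of code. This is only an illustration, not the analysis itself: the particle names and (mass, lifetime) values below are a small, approximate excerpt rather than the full Particle Data Group dataset, and the greedy single-linkage grouping is a deliberately simple stand-in for the density-based clustering used in the actual follow-up. The `eps` threshold is chosen purely for illustration.

```python
import math

# Hypothetical excerpt of a (rest mass [MeV], mean lifetime [s]) table
# for a few unstable particles -- approximate values, for illustration only.
particles = {
    "muon":    (105.66, 2.2e-6),
    "pi+":     (139.57, 2.6e-8),
    "K+":      (493.68, 1.2e-8),
    "neutron": (939.57, 8.8e2),
    "rho":     (775.26, 4.5e-24),
    "Delta":   (1232.0, 5.6e-24),
    "J/psi":   (3096.9, 7.1e-21),
}

# Work in logarithmic coordinates, as described in the text.
points = {name: (math.log10(m), math.log10(t))
          for name, (m, t) in particles.items()}

def cluster(points, eps=4.0):
    """Greedy single-linkage grouping: two particles end up in the same
    cluster if they are connected by steps of length <= eps in the
    (log10 m, log10 tau) plane. A toy stand-in for density-based
    clustering such as DBSCAN."""
    names = list(points)
    labels = {}
    current = 0
    for name in names:
        if name in labels:
            continue
        labels[name] = current
        stack = [name]
        while stack:
            a = stack.pop()
            xa, ya = points[a]
            for b in names:
                if b in labels:
                    continue
                xb, yb = points[b]
                if math.hypot(xa - xb, ya - yb) <= eps:
                    labels[b] = current
                    stack.append(b)
        current += 1
    return labels

labels = cluster(points)
```

Even on this toy excerpt, the expected coarse structure appears: the prompt (strong/electromagnetic) decays group together, the weak decays form a second group, and the neutron sits apart — consistent with the picture of a dominant prompt continuum plus weak structure at longer lifetimes.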
The outcome is modest, but instructive. A dominant continuum of prompt decays clearly emerges, accompanied by only weak additional structure at longer lifetimes. No rich taxonomy presents itself when decay behaviour alone is considered — but the clusters that do appear are real, reproducible, and consistent with the intuition developed here and in earlier work.
This empirical annex does not “prove” a new theory, nor does it challenge existing classifications. Its value lies elsewhere: it shows what survives when one strips the description down to observables alone, and it clarifies both the power and the limits of a stability-first perspective.
For readers interested in seeing how these ideas behave when confronted with actual data — and in re-using that data themselves — the empirical follow-up is available here:
Sometimes, the most useful result is not a spectacular confirmation, but a careful consistency check that tells us where intuition holds — and where it stops.
Over the past years, I’ve been working — quietly but persistently — on a set of papers that circle one simple, impossible question: What is the Universe really made of?
Not in the language of metaphors. Not in speculative fields. But in terms of geometry, charge, and the strange clarity of equations that actually work.
Here are the three pieces of that arc:
🌀 1. Radial Genesis: A Finite Universe with Emergent Spacetime Geometry. This is the cosmological capstone. It presents the idea that space is not a stage, but an outcome — generated radially by mass–energy events, limited by time and light. It’s an intuitive, equation-free narrative grounded in general relativity and Occam’s Razor.
⚛️ 2. Lectures on Physics: On General Relativity (2). This one is for the mathematically inclined. It builds from the ground up: tensors, geodesics, curvature. If Radial Genesis is the metaphor, this is the machinery. Co-written with AI, but line by line, and verified by hand.
🌑 3. The Vanishing Charge: What Happens in Matter–Antimatter Annihilation? This paper is where the mystery remains. It presents two possible views of annihilation: (1) as a collapse of field geometry into free radiation, or (2) as the erasure of charge — with geometry as the by-product. We didn’t choose between them. We just asked the question honestly.
Why This Arc Matters
These three papers don’t offer a Theory of Everything. But they do something that matters more right now: They strip away the fog — the inflation of terms, the myth of complexity for complexity’s sake — and try to draw what is already known in clearer, more beautiful lines.
This is not a simulation of thinking. This is thinking — with AI as a partner, not a prophet.
So if you’re tired of being told that the Universe is beyond your grasp… Start here. You might find that it isn’t.
What if space isn’t a container — but a consequence?
That’s the question I explore in my latest paper, Radial Genesis: A Finite Universe with Emergent Spacetime Geometry, now available on ResearchGate.
The core idea is surprisingly simple — and deeply rooted in general relativity: matter and energy don’t just move through space. They define it. Every object with mass–energy generates its own curved, local geometry. If we take that seriously, then maybe the Universe itself isn’t expanding into something. Maybe it’s unfolding from within — one energy event, one radial patch of space at a time.
This new paper builds on two earlier lecture-style essays on general relativity. But unlike those, this one has no equations — just plain language and geometric reasoning. It’s written for thinkers, not specialists. And yes, co-written with GPT-4 again — in what I call a “creative but critical spirit.”
We also explore:
Why the Universe might be finite and still expanding;
How a mirror version of electromagnetism could explain dark matter;
Why the so-called cosmological constant may be a placeholder for our conceptual gaps;
And whether our cosmos is just one region in a greater, radially unfolding whole — with no center, and no edge.
If you like cosmology grounded in Einstein, Dirac, and Feynman — but with fresh eyes and minimal metaphysics — this one’s for you.
I just published a new lecture — not on quantum physics this time, but on general relativity. It’s titled Lecture on General Relativity and, like my earlier papers, it’s written in collaboration with GPT-4 — who, as I’ve said before, might just be the best teacher I (n)ever had.
We start simple: imagine a little bug walking across the surface of a sphere. From there, we build up the full machinery of general relativity — metric tensors, covariant derivatives, Christoffel symbols, curvature, and ultimately Einstein’s beautiful but not-so-easy field equations.
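The bug-on-a-sphere example can be made concrete with a standard textbook computation, included here only to illustrate the machinery the lecture builds. On a sphere of radius R, in the usual angular coordinates, the metric, its nonzero Christoffel symbols, and the resulting curvature are:

```latex
ds^2 = R^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right),
\qquad
\Gamma^{\theta}_{\varphi\varphi} = -\sin\theta\cos\theta,
\qquad
\Gamma^{\varphi}_{\theta\varphi} = \Gamma^{\varphi}_{\varphi\theta} = \cot\theta,
\qquad
\mathcal{R} = \frac{2}{R^2}.
```

The Ricci scalar is constant: the sphere is uniformly curved, and the bug can detect this intrinsically — for instance, by noticing that the angles of a triangle drawn on the surface sum to more than π — without ever leaving the surface. That is the core idea the full apparatus of general relativity generalizes.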
What makes this lecture different?
No string theory.
No quantum gravity hype.
No metaphysical hand-waving about time being an illusion.
Just geometry — and the conviction that Einstein’s insight still deserves to be understood on its own terms before we bolt anything speculative onto it.
I used ChatGPT to push the math and logic of my ‘realist’ interpretation of (1) matter–antimatter annihilation and creation (the Dirac and Breit–Wheeler processes, respectively) and (2) dark matter and dark energy to its logical and philosophical limits. For those who do not like to read, I also made two short videos: the one on my “mirror force” idea is here, and from there you can go to the other video(s) in the playlist. 🙂 The implications for cosmogenesis models are rather profound: they call for another approach to explaining any “Big Bang” that may or may not have occurred when our Universe was born. That is something to explore in the future, perhaps.
As an amateur physicist, I get a regular stream of email updates from Science, Nature and Phys.org on new discoveries and new theories in quantum physics. I usually have no idea what to do with them. However, I want to single out two recent research updates reported through these channels. The first one is reflected in the title of this post. It concerns a very rare decay mode of kaons: see https://phys.org/news/2024-09-ultra-rare-particle-decay-uncover.html.
Something inside me says this may lead to a review of all these newly invented conservation laws – combined with new ideas on symmetry breaking – and of the new ‘quantum numbers’ associated with the quark hypothesis. I think everyone has already forgotten about ‘baryon conservation’, so further simplifications, based on simpler Zitterbewegung models of particles, may be possible.
The historical background to this is well described by Richard Feynman in his discussion of how these new quantum numbers – strangeness, specifically – were invented to deal with the observation that certain decay reactions were not being observed (see: Feynman’s Lectures, III-11-5, on the neutral K-meson). So now it turns out that some of those decay reactions are being observed! Shouldn’t that lead future scientists to revisit the quark/gluon hypothesis itself?
Of course, that would call into question several Nobel Prize awards, so I think it won’t happen any time soon. 🙂 This brings me to the second update from the field. Another Nobel Prize in Physics that should, perhaps, be questioned in light of newer measurements is the 2011 Prize for work on the cosmological constant. Why? Because new measurements of the rate of expansion of the Universe, as reported by Phys.org last month, call into question the measurements that led to that 2011 Prize. Is anyone bothered by that? No. Except me, perhaps, because I am old-fashioned and wonder what is going on.
I get asked about gravity, and some people push particle theories of gravity to me. I am, quite simply, not interested. This coming and going of the “cosmological constant hypothesis” over the past decades – or, should we say, the past 80 years or so – makes me stay away from GUTs and anything related to them. If scientists cannot even agree on these measurements, there is not much use in inventing new modified-gravity theories fitting into ever-expanding grand unification schemes based on mathematical frameworks that can only be understood by the cognoscenti, is there?
It is tough: I am not the only one (and definitely not the best placed one) to see a lot of researchers – both amateur as well as professional – “getting lost in math” (cf. the title of Hossenfelder’s best-seller). Will there be an end to this, one day?
I am optimistic, and so I think: yes. One of the recurring principles guiding the critical physicists I greatly admire is Occam’s Razor: keep it simple! Make sure the degrees of freedom in your mathematical scheme match those of the physics you are trying to describe. That requires a lot of rigor in the use of concepts: perhaps we should add concepts to those that, say, Schrödinger and Einstein used 100 years ago. However, my own pet theories and recycling of their ideas do not suggest we need to. And so I really just cannot get myself to read up on Clifford algebras and other mathematical constructs I am told to study – simply because this or that person tells me I should think in terms of spinors rather than in terms of currents (to give just one specific example here).
I can only hope that more and more academics will come to see this, and that the Nobel Prize committee may think some more about rewarding more conservative approaches rather than the next cargo cult science idea.
OK. I should stop rambling. The musings above do not answer the question we all have: what about gravity, then? My take is this: I am fine with Einstein’s idea of gravity being just a reflection of the distribution of energy/mass in the Universe. Whether or not the Universe expands at an ever-accelerating pace must first be firmly established by measurements – and, even then, there may be no need to invoke a cosmological constant or other elements of a new “aetherial” theory of space and time.
Indeed, Einstein thought that his hypothesis of a cosmological constant was “his biggest blunder ever.” While I know nothing of the nitty-gritty, I think it is important to listen to “good ol’ Einstein” – especially when he talked about what he did or did not ‘trust’ in terms of physical explanations. Einstein’s rejection of the idea of a cosmological constant – after first coming up with it himself and, therefore, probably having the best grasp of its implications – suggests the cosmological constant is just yet another non-justifiable metaphysical construct in physics and astronomy.
So, let us wrap up this post: is there, or is there not, a need for ‘modified gravity’ theories? I will let you think about that. I am fine with Einstein’s ‘geometric’ explanation of gravity.
Post scriptum: While I think quite a few of these new quantum numbers related to quarks – and, most probably, the quark hypothesis itself – will be forgotten in, say, 50 or 100 years from now, the idea of some ‘triadic’ structure to explain the three generations of particles and strange decay modes is – essentially – sound. Some kind of ‘color’ scheme – which I, rather jokingly, call an “RGB scheme”, referring to the color scheme used in video/image processing – should be very useful: an electron annihilates a positron, but an electron combines with a proton to form an atom, so there is something different about these two charges. Likewise, if we think of a neutron as a neutral neutronic current, the two charges “inside” must be very different… See pp. 7 ff. of my recent paper on multi-charge zbw models.
I was sceptical before – and I am still not a believer in the quark hypothesis – but I do think physicists – or, more likely, future generations of physicists – should get a better “grip” on these three different ‘types’ of electric charge as part of a more realist explanation of what second- or third-generation “versions” of elementary particles might actually be. Such an explanation will then probably also explain these “unstable states” (which do not quite respect the Planck-Einstein relation) or “exotic” particles. Indeed, I do not see much of a distinction between stable and unstable particle states in current physics. But that is a remark that is probably not essential to the discussion here… 🙂
One final remark, perhaps: my first instinct when looking at particle physics was actually very much inspired by the idea that the quantum-mechanical wavefunction might be something other than just an EM oscillation. When I first calculated the force fields in a Zitter electron, and then in the muon-electron and the proton, I was rather shocked (see pp. 16 ff. of one of my early papers) and thought: wow! Are we modelling tiny black holes here? But then I quickly came to terms with it. Small massive things must come with such huge field strengths, and all particle radius formulas have mass (or energy) in the denominator: more mass/energy means smaller scale, indeed! I also quickly calculated the Schwarzschild radius for these elementary particles, and that is A WHOLE LOT smaller than the radius I get from my simple electromagnetic equations and the Planck-Einstein relation. So I see absolutely no reason whatsoever to think gravitational effects might take over from plain EM fields at the smallest of scales.
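For the reader who wants to check the orders of magnitude involved, here is a quick back-of-the-envelope comparison. The constants are the standard ones; the assumption that the Compton radius ħ/(m·c) is the relevant ‘electromagnetic’ radius follows the Planck-Einstein reasoning above:

```python
# Compare the electron's Schwarzschild radius r_s = 2Gm/c^2 with its
# Compton radius a = hbar/(m*c), the scale the Planck-Einstein relation gives.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.0546e-34    # reduced Planck constant, J*s
m_e = 9.109e-31      # electron mass, kg

r_s = 2 * G * m_e / c**2   # Schwarzschild radius: ~1.35e-57 m
a = hbar / (m_e * c)       # Compton radius:       ~3.86e-13 m

print(f"Schwarzschild radius: {r_s:.3e} m")
print(f"Compton radius:       {a:.3e} m")
print(f"Ratio a/r_s:          {a / r_s:.1e}")  # ~44 orders of magnitude
```

So the gravitational scale is indeed some 44 orders of magnitude below the electromagnetic one, which is the point being made here.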
But, then, who am I? I like to think I am not inventing anything new. I just enjoy playing with old ideas to see if something new comes out of it. I think I am fortunate because I do see a lot of new things coming out of the old ideas, even if there is little or nothing we can add to them: the old Masters have already written it all out. So, now I should stop chewing on these old ideas as well and conclude: if you want to read something, don’t read me or anything contemporary. Just read the classics! Many modern minds – often great mathematicians – tried or try to be smarter than Einstein, Lorentz, de Broglie or Schrödinger (I am deliberately not mentioning other great names): I think the more recent discoveries in physics and cosmology show they are not. 🙂
I wrote my last post here two months ago and so, yes, I feel I have done a good job of ‘switching off’. I have to: I’ve started a new and pretty consuming job as ICT project manager. 🙂
Before starting work, I did take a relaxing break: I went to Barcelona and read quite a few books and, no, no books on quantum physics. Historical and other things are more fun and give you less of a headache.
However, having said that, the peace and quiet did lead to some kind of ‘final thoughts’ on the ‘metaphysics of physics’, and I also finally acted on an intuition I had never worked out before: that dark matter/energy might be explained by some kind of ‘mirror force’ – the electromagnetic force as it appears in a mirror image. Not much changes in the math, but the physical left- and right-hand rules for magnetic effects swap for each other.
I find that just working off some notes from my tablet and talking about them works better for me than writing elaborate papers. Boileau: “Ce que l’on conçoit bien s’énonce clairement, et les mots pour le dire arrivent aisément.” (“What is well conceived is stated clearly, and the words to say it come easily.”) I did five new lectures in just one week on my YouTube channel. Have a look at the last one: symmetries and asymmetries in Nature.
It takes an easy-to-understand look at CP- and CPT-symmetry (and the related processes that sometimes break these symmetries) by thinking about what particles actually are: not infinitesimally small, but charged oscillations with a 2D or 3D structure. We also revisit the inherent mass-generating mechanism, which explains all mass in terms of electromagnetic mass.
We talked about CP- and CPT-symmetries before – back in 2014, to be very precise – but then I did not know what I know now, and those older posts also suffered from the 2020 attack by the dark force. 🙂 Briefly, what you should take away from it is that the most fundamental asymmetry in Nature is this: the asymmetry in the electromagnetic force or field itself. It is that 90-degree phase difference (or ‘lag’) between the electric and magnetic field vectors. That explains why mirror images cannot be real, and it also explains why some processes go one way only. So… Another mystery solved! I call it “the fallacy of CPT arguments.” 🙂
Post scriptum: I also wrapped up my YouTube ‘Schrödinger’s cat is dead’ series. For those who do not like the theoretical aspects of all these things, have a look at the last one (on pair creation-annihilation and intermediate vector bosons), in which I discuss the two interpretations (mainstream versus my classical perspective) one can have when looking at this wonderful world. I wrote this comment on it, which is probably my farewell to this hobby of mine:
For those who struggle with this, the key to understanding it all is to understand that the superposition principle works for fields, but not for charges. That is also the key to understanding Bose-Einstein statistics, Fermi-Dirac statistics and – at larger scales – the ‘real world’ Maxwell-Boltzmann statistics (which combine both). See: https://readingfeynman.org/2015/07/21/maxwell-boltzmann-bose-einstein-and-fermi-dirac-statistics/. Always do a good dimensional analysis of the equations: distinguish real physical dimensions from purely mathematical ones, and do not add apples and oranges. Distinguish potentials or field strengths from real forces and actual energy (a force acting on a charge over some distance). That is why charges should not ‘vanish’ in the analysis, and it is also why e^(iπ) and e^(−iπ) are not ‘common phase factors’ which vanish against each other (both are equal to −1, right?) in equations involving wavefunctions. A positive charge zittering around in one direction is not the same as a negative charge zittering around in the other direction. Neutral particles are either real photons (which carry no charge whatsoever) or, else, neutral matter-particles. Applying the saying that what looks and quacks like a duck must be a duck, we might say most of these neutral particles will look like ordinary matter. Some, however, will be light-like, or photon-like, because they travel at or near the speed of light (the orbital motion of the two charges has vanished, so there is zero angular momentum). That does not mean they are photons. Also, do not worry about wave equations if you prefer to think in terms of wavefunctions: wavefunctions are the real thing, not wave equations (see: https://www.researchgate.net/publication/341269271_De_Broglie’s_matter-wave_concept_and_issues and https://www.researchgate.net/publication/342424980_Feynman’s_Time_Machine). If you think otherwise, that is fine.
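The phase-factor remark above can be made concrete numerically. Both e^(iπ) and e^(−iπ) evaluate to −1, but as rotations in the complex plane they are opposite – which is the arithmetic behind the point that the two factors do not simply ‘vanish against each other’ (the physical interpretation is, of course, the one argued above):

```python
import cmath

# e^{i*pi} and e^{-i*pi} both evaluate to -1 ...
up = cmath.exp(1j * cmath.pi)     # counter-clockwise half-turn
down = cmath.exp(-1j * cmath.pi)  # clockwise half-turn
print(up, down)  # both approximately (-1+0j)

# ... but the two rotations trace opposite paths: halfway through,
# one phase sits at +i and the other at -i.
half_up = cmath.exp(1j * cmath.pi / 2)     # ~ +i
half_down = cmath.exp(-1j * cmath.pi / 2)  # ~ -i
print(half_up, half_down)
```

Same endpoint, opposite sense of rotation: that is the distinction being drawn between a positive charge zittering one way and a negative charge zittering the other way.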
Everyone looks for the Holy Grail, and you may be amongst those who think they have found it. If it looks very different from the Holy Grail that I have finally found, that is OK. Jesus might have left more than one Holy Grail – fake or real ones – so just be happy with yours! I will end this short illustrated Guide to the Universe with the Looney Tunes sign-off: “That’s All, Folks!”
I just came back from a consultancy (an IT assessment – it is nice to be fully focused again on work rather than on obscure quantum-mechanical models) and, while flying back, I wrote a small paper on the implications of what I have tried to do – showing that, ultimately, we can understand Nature as being ‘statistically deterministic’, just like what A. Einstein and H.A. Lorentz always said – for epistemology, and for the inquiry that philosophers refer to as ‘metaphysics’ (interpreted as thoughts on the ‘essence’ of Nature).
I also detail why and how it does not do away with what is probably the single most important foundation of our society (laws, business, etcetera): the idea of free will. Here is the link to the paper, and below I copy the key conclusions:
What I write above [see the paper], and its explanation of the principle of uncertainty as used in modern physics, should not make you think that I have no room for a religious mindset: conscious thoughts, or some sense or feeling of wonder, that we would refer to as religious or – a better word, perhaps – mystical. On the contrary, in my journey towards understanding, I have often been amazed that our mind is able to understand all of this. Here again, I appreciate my courses in philosophy – especially Hegel’s idea of the human mind encompassing and understanding more and more as mankind continues its rather lonely journey on a very small planet in a Universe whose borders we cannot imagine.
Such a feeling of wonder – an old teacher of mine said the Greeks referred to this as thauma, and that it fuels our desire for knowledge, though I have not been able to find any bibliographic reference to this idea – is exactly what has been driving my own personal journey in search of truth. Whether you call that religious or not is not of much interest to me: I have no need to describe that experience in old or new words.
Likewise, statistical determinism does not do away with the concept of free will. Of course, we are the product of a rather amazing evolution, which I think of as rather random – but I do not attach a negative connotation to this randomness. On the contrary: while our human mind was originally concerned with making sense of life-or-death situations, it is now free to think about other things – one’s own personal condition, or the human condition at large. Such thinking may lead us to take rational decisions that actually change the path we are following: we stop drinking or smoking for health reasons, perhaps, or we engage in a non-profit project aimed at improving our neighborhood or society at large. And we all realize we should change our behavior in order to ensure the next generation lives in an even better world than we do.
All of this is evidence of a true free will. It is based on our mental ability to rationally analyze in what situation we happen to find ourselves, so as to then try to do what we think we should be doing.
My ‘last’ post talks about the end of physics as a science: nothing or nothing much is left to explain but – of course – a lot of engineering is left to be done! 😉
I have been thinking about my explanation of dark matter/energy, and I think it is sound. It solves the last asymmetry in my models, and explains all. So, after a hiatus of two years, I bothered to make a podcast on my YouTube channel once again. It talks about everything. Literally everything!
It makes me feel my quest for understanding of matter and energy – in terms of classical concepts and measurements (as depicted below) – has ended. Perhaps I will write more but that would only be to promote the material, which should promote itself if it is any good (which I think it is).
I should, by way of conclusion, say a few final words about Feynman’s 1963 Lectures now. When everything is said and done, it was my reading of them that triggered this blog about ten years ago. I would now recommend Volumes I and II (classical physics and electromagnetic theory) – if only because they give you all the math you need to understand all of physics – but not Volume III (the lectures on quantum mechanics). Those are outdated, and I do find Feynman guilty of promoting rather than explaining the hocus-pocus around all of the so-called mysteries in this special branch of physics.
Quantum mechanics is special, but I do conclude now that it can all be explained in terms of classical concepts and quantities. So Gell-Mann’s criticism of Richard Feynman is, perhaps, correct: Mr. Feynman did, perhaps, make too many jokes – and it gets annoying, because he must have known that some of what he suggests does not make sense – even if I would not go as far as Gell-Mann, who said: “Feynman was only concerned about himself, his ego, and his own image!”
So, I would recommend my own alternative series of ‘lectures’. Not only are they easier to read, but they also embody a different spirit of writing. Science is not about you: it is about thinking for yourself and deciding what is truthful and useful, and what is not. So, to conclude, I will end by quoting Ludwig Boltzmann once more:
“Bring forward what is true.
Write it so that it is clear.
Defend it to your last breath.”
Ludwig Boltzmann (1844 – 1906)
Post scriptum: As for the ‘hocus-pocus’ in Feynman’s Lectures, we should, perhaps, point once again to some of our early papers on the flaws in his arguments. We effectively put our finger on the arbitrary wavefunction convention, the (false) boson-fermion dichotomy, the ‘time machine’ argument that is inherent to his explanation of the Hamiltonian, and so on. We published these things on Academia.edu before (also) putting our (later) papers on ResearchGate, so please check there for the full series. 🙂
Post scriptum (23 April 2023): Also check out this video, which was triggered by someone who thought my models amount to something like a modern aether theory – which is definitely not the case: https://www.youtube.com/watch?v=X38u2-nXoto. 🙂 I really think it is my last reflection on these topics. I need to focus on my day job, sports, family, etcetera again! 🙂
If you are interested in physics and cosmological theories, then you will know all research has been shaken up by the discovery of dark matter and dark energy. The fact of the matter is this: in 2011, a Nobel Prize was awarded to different teams of astronomers who, independently, discovered the accelerating expansion of the Universe – now attributed to ‘dark energy’ which, together with ‘dark matter’, would make up most of the Universe – and mainstream physicists have no idea how to go about modeling its structure and true nature: it seems quantum field theory, with its confined quarks and gluons and color charges, is pretty useless in this regard.
The discovery goes back to 1998 (so it took the Nobel Prize committee more than ten years to verify it, or to see its enormous value as a discovery), and is duly reported in the Wikipedia article on the cosmological constant because of its implications – although I have issues with the contributor to that article talking about ‘a repulsive force’ that would counterbalance ‘the gravitational braking produced by the matter contained in the universe’: that sounds whacky to me. 🙂
The bottom line is this: according to research quoted by NASA, roughly 68% of the Universe would be dark energy, while dark matter makes up another 27%. Hence, all normal matter – including our Earth and all we observe as normal matter outside of it – would add up to less than 5% of the total. NASA rightly notes that we should, perhaps, not refer to ‘normal’ matter as ‘normal’ at all, since it is such a small fraction of the universe!
Now, as mentioned above: theoretical physicists have no clue about the nature of this dark matter. As our modeling of electrons and protons as two- and three-dimensional electromagnetic oscillations has provided easy answers to difficult questions, we thought we might, perhaps, explore one particularity of the electromagnetic force. Indeed, the electromagnetic force introduces this weird asymmetry in Nature: we know that, in our world, the magnetic field lags the electric field. The phase difference is 90 degrees, and you probably have a good mental image of that electric and magnetic field vector oscillating up and down and also moving together along a line in space. [If not, have a look at this GIF animation in the Wikipedia article on Maxwell’s equations. It shows a linearly polarized wave: both the electric and magnetic field vector oscillate along a straight line rather than rotating around (as they would do in a circularly or elliptically polarized wave).]
Of course, you may not think of this as a necessary asymmetry: if the magnetic field vector were to be 180 degrees out of phase with the electric field vector, then that would make no sense because the magnetic and electric field vectors would be working against each other. Also, we would have no propagation mechanism and all that. In fact, we would have no electromagnetic force theory and we would, quite simply, not be here to write this.
However, that is not what I mean by an asymmetry. What I am saying is that we can imagine an alternative: we can imagine the magnetic field vector leading instead of lagging the electric field vector. Hence, Occam’s Razor tells us we should seriously consider that such a force actually exists! The situation is not unlike how the positron was discovered: people started looking for it because, in the math of his wave equation, Dirac saw positrons could possibly exist. Once people started seriously considering it, they actually found it (Anderson, 1932).
Exceptional measurements require exceptional explanations and so, yes, we thought: why not apply Occam’s Razor once more? Our idea of an antiforce is or was the one degree of freedom in our mathematical representation of matter-particles that we had not exploited yet[1], so our intuition tells us it might be worth considering.
Have a look at it (click the link to our RG paper here). It is a very short and crisp paper, and we think of it as fun to read but that is, of course, for you to judge. 🙂
[1] Truth be told, we were not aware of, or intrigued by, the idea of dark matter or energy a year ago. We can, however, now see that we are actually closing our model by exploiting an aspect of our representation of the electromagnetic force which we had not seen before. The history of science shows Occam’s Razor is a good guide for getting at the right model, and so we feel our rather radical use of this principle – in the tradition of P.A.M. Dirac and others, indeed! – may yield interesting results once more.
I went to see the follow-up to Avatar (‘The Way of Water’). It took over 10 years to produce. Indeed, how time flies: the first ‘Avatar’ was released in 2009 and was, apparently, the highest-grossing film of all time (according to Wikipedia, at least). This installment is not doing badly either in terms of revenue and popularity but, frankly, I found it rather underwhelming. This may be because of the current international situation. Indeed, I wonder why American soldiers must always be the ‘true’ space explorers in such movies. Why not some friendly Chinese or Indian explorers? Fortunately, it will be a while before mankind is able to build spaceships that can travel at speeds that would allow us to visit, say, the Gliese 667 Cc planet, which may well be the nearest planet that is habitable (practically speaking), but which is about 22 lightyears away – tens or hundreds of thousands of years of travel with our current spacecraft. Mankind will have to find a way to keep our own planet habitable for some more time… Planets like Gliese 667 Cc and other exoplanets that may harbor life as we know it will be safe from us for quite a while. 🙂
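A rough travel-time calculation makes the point. The speeds below are illustrative only (a Voyager-class probe does about 17 km/s; the 1%-of-c figure is purely hypothetical):

```python
LY = 9.461e15   # one lightyear in meters
YEAR = 3.156e7  # one year in seconds
d = 22 * LY     # approximate distance to Gliese 667 Cc

def travel_years(speed_m_s):
    """Naive constant-speed travel time in years (no acceleration phases)."""
    return d / speed_m_s / YEAR

print(round(travel_years(17_000)))       # Voyager-class, ~17 km/s: ~390,000 years
print(round(travel_years(0.01 * 3e8)))   # hypothetical 1% of c:    ~2,200 years
```

So even a (for now) science-fiction speed of 1% of the speed of light would still mean a couple of millennia of travel.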
These are rather philosophical thoughts, but they came up as I was adding an annex to my one and only paper on cosmology, in which I argue there are no mysteries left: the question of ‘dark matter’ is solved when we think of it as antimatter, and even the accelerating rate of expansion of the Universe could probably be explained by assuming our Universe is just a blob in a larger cluster of universes. These other universes are, obviously, beyond our horizon: that horizon is set by the age of the Universe, which is currently estimated to be about 13.8 billion (13.8×10⁹) years and which determines the limits of the observable Universe. Hence, not only can we not see or know the outer edges of our Universe (because those outer parts have moved further out in the meanwhile – at the rather astonishing average speed of 2c/3 – so we must assume the end-to-end distance across the Universe is of the order of 46 billion lightyears), but we would also never see the other universes that are tearing our own Universe apart, so to speak.
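For what it is worth, the figures quoted here hang together under a crude average-speed picture – this is just arithmetic, not a proper general-relativistic calculation:

```python
# Crude consistency check: light travels for the age of the Universe,
# while the emitting regions recede at an *average* 2c/3 over that time.
age_gyr = 13.8                # age of the Universe, billions of years
light_travel_gly = age_gyr    # light-travel distance, billions of lightyears
extra_gly = (2 / 3) * age_gyr # extra recession at an average 2c/3: ~9.2

radius_gly = light_travel_gly + extra_gly  # ~23 billion lightyears
diameter_gly = 2 * radius_gly              # ~46 billion lightyears
print(radius_gly, diameter_gly)
```

That is how a 13.8-billion-year horizon and a 46-billion-lightyear end-to-end distance can both be true at the same time.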
By the way, this thought is quite consistent with an earlier thought I had – well before I even knew about this acceleration in the expansion of our Universe – when thinking about the Big Bang theory: I always wondered why the coming-into-being of our Universe should be such a simple, linear and unique process. Why not think of several Big Bangs at different places and times? So, if other universes exist and tear ours apart, so to speak, then there you have the explanation!
[…]
However, I am not writing this post to share some assumptions or observations. It is to share this thought: is it not strange to think we know all about how reality works (as mentioned, I think there are no real questions or mysteries left in the science of physics) but that, at the same time, we are quite alone with our science and technology here on Earth?
Indeed, other forms of intelligent life are likely (highly likely, in light of the rather incredible size of the Universe), but they are too far away to be relevant to us: probably hundreds or even thousands of lightyears away, rather than only 20 or 40 lightyears, which is the distance to the nearest terrestrial exoplanets, such as the mentioned Gliese 667 Cc. So we know it all, and we relish such knowledge, and then, one day, we just die?
I had been wanting to update my paper on matter-antimatter pair creation and annihilation for a long time, and I finally did: here is the new version. It was one of my early papers on ResearchGate and, somewhat surprisingly, it got quite a few downloads (all is relative: I am happy with a few thousand). I actually did not know why, but now I understand: it takes down the last defenses of QCD and QFT theorists. As such, I now think this paper is at least as groundbreaking as my paper on de Broglie’s matter-wave (which gets the most reads), or my paper on the proton radius (which gets the most recommendations).
My paper on de Broglie’s matter-wave is important because it explains why and how de Broglie’s bright insight (matter having some frequency and wavelength) was correct but got the wrong interpretation: the frequencies are orbital frequencies, and the wavelengths are not to be interpreted as linear distances (not like wavelengths of light) but as the quantum-mechanical equivalent of the circumferences of the orbital motion. The paper also shows why spin (in this or the opposite direction) should be incorporated into any analysis right from the start: you cannot just ignore spin and plug it back in later. The paper on the proton radius shows how that works to yield short and concise explanations of the measurable properties of elementary particles (the electron and the proton). The two combined provide the framework: an analysis of matter in terms of pointlike particles does not get us anywhere. We must think of matter as charge in motion, and we must analyze the two- or three-dimensional structure of these oscillations, and use it to also explain interactions between matter-particles (elementary or composite) and light-particles (photons and neutrinos, basically). I have explained these mass-without-mass models too many times now, so I will not dwell on them.
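To make the ‘circumference’ reading concrete – this is my numeric paraphrase of the argument, using the electron’s Compton wavelength as the example:

```python
import math

h = 6.626e-34     # Planck constant, J*s
hbar = 1.0546e-34 # reduced Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s

lambda_C = h / (m_e * c)      # Compton wavelength: ~2.43e-12 m
r = lambda_C / (2 * math.pi)  # read as a circumference -> orbital radius
r_check = hbar / (m_e * c)    # same quantity written directly: hbar/(m*c)

print(lambda_C, r, r_check)   # r and r_check agree: ~3.86e-13 m
```

Reading the wavelength as a circumference rather than a linear distance is, numerically, nothing more than dividing by 2π – which is exactly how the ħ/(m·c) radius relates to h/(m·c).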
So, how does that paper on matter-antimatter pair creation and annihilation fit in? The revision resulted in a rather long and verbose thing, so I will refer you to it and just summarize it very briefly. Let me start by copying the abstract: “The phenomenon of matter-antimatter pair creation and annihilation is usually taken as confirmation that, somehow, fields can condense into matter-particles or, conversely, that matter-particles can somehow turn into lightlike particles (photons and/or neutrinos, which are nothing but traveling fields: electromagnetic or, in the case of the neutrino, some strong field, perhaps). However, pair creation usually involves the presence of a nucleus or other charged particles (such as electrons in experiment #E144). We, therefore, wonder whether pair creation and annihilation cannot be analyzed as part of some nuclear process. To be precise, we argue that the usual nuclear reactions involving protons and neutrons can effectively account for the processes of pair creation and annihilation. We therefore argue that the need to invoke some quantum field theory (QFT) to explain these high-energy processes would need to be justified much better than it currently is.”
Needless to say, the last line above is a euphemism: we think our explanation is complete, and that QFT is plain useless. We wrote the following rather scathing appreciation of it in a footnote of the paper: “We think of Aitchison & Hey’s presentation of [matter-antimatter pair creation and annihilation] in their Gauge Theories in Particle Physics (2012) – or presentations (plural), we should say. It is considered to be an advanced but standard textbook on phenomena like this. However, one quickly finds oneself going through the index and scraping together various mathematical treatments – wondering what they explain, and also wondering how all of the unanswered questions or hypotheses (such as, for example, the particularities of flavor mixing, helicity, the Majorana hypothesis, etcetera) contribute to understanding the nature of the matter at hand. I consider it a typical example of how – paraphrasing Sabine Hossenfelder’s judgment on the state of advanced physics research – physicists do indeed tend to get lost in math.”
That says it all. Our thesis is that charge cannot just appear or disappear: it is not created out of nothing (or out of fields, we should say). The observations (think of pion production and decay from cosmic rays here) and the results of the experiments (the mentioned #E144 experiment and other high-energy experiments) cannot be disputed, but the mainstream interpretation of what actually happens, or might be happening, in those chain reactions suffers from what, in daily life, we would refer to as ‘very sloppy accounting’. Let me quote or paraphrase a few more lines from my paper to highlight the problem, and to also introduce my interpretation of things, which, as usual, is based on a more structural analysis of what matter actually is:
“Pair creation is most often observed in the presence of a nucleus. The role of the nucleus is usually reduced to that of a heavy mass only: it only appears in the explanation to absorb or provide some kinetic energy in the overall reaction. We instinctively feel the role of the nucleus must be far more important than what is usually suggested. To be specific, we suggest pair creation should (also) be analyzed as being part of a larger nuclear process involving neutron-proton interactions. […]”
“Charge does not get ‘lost’ or is ‘created’, but [can] switch its ‘spacetime’ or ‘force’ signature [when interacting with high-energy (anti)photons or (anti)neutrinos].”
“[The #E144 experiment and other high-energy experiments involving electrons are] accounted for in terms of mainstream QED analysis, which effectively thinks of the pair production as the result of the theoretical ‘Breit-Wheeler’ pair production process from photons only. However, this description of the experiment fails to properly account for the incoming beam of electrons. That, then, is the main weakness of the ‘explanation’: it is a bit like abstracting away the presence of the nucleus in the pair creation processes that take place near nuclei (which, as mentioned above, account for the bulk of those processes).”
We will say nothing more about it here because we want to keep our blog post(s) short: read the paper! 🙂 To wrap this up for you, the reader(s) of this post, we will only quote or paraphrase some more ontological or philosophical remarks in it:
“The three-layered structure of the electron (the classical, Compton and Bohr radii of the electron) suggests that charge may have some fractal structure and – moreover – that such fractal structure may be infinite. Why do we think so? If the fractal structure were not infinite, we would have to acknowledge – logically – that some kind of hard-core charge is at the center of the oscillations that make up these particles, and it would be very hard to explain how such a core could actually disappear.” [Note: This is a rather novel subtlety in our realist interpretation of quantum physics, so you may want to think about it. Indeed, while we were initially not very favorable to the idea of a fractal charge structure – because such a structure is, perhaps, not entirely consistent with the idea of a Zitterbewegung charge with zero rest mass – we think much more favorably of the hypothesis now.]
“The concept of charge is and remains mysterious. However, in philosophical or ontological terms, I do not think of it as a mystery: at some point, we must, perhaps, accept that the essence of the world is charge, and that:
There is also an antiworld, and that;
It consists of an anticharge that we can fully define in terms of the signature of the force(s) that keep it together, and that;
The two worlds can, quite simply, not co-exist or – at least – not interact with each other without annihilating each other.
Such simple view of things must, of course, feed into cosmological theories: how, then, came these two worlds into being? We offered some suggestions on that in a rather simple paper on cosmology (our one and only paper on the topic), but it is not a terrain that we have explored (yet).”
So, I will end this post in pretty much the same way as the old Looney Tunes or Merrie Melodies cartoons used to end, and that’s by saying: “That’s all Folks.” 🙂
Enjoy life and do not worry too much. It is all under control and, if it is not, then that is OK too. 🙂
I made a start with annotating all of my papers. I will gather the annotations in a paper in itself: working paper no. 30 on ResearchGate. I will date it 6 December when finished, in honor of one of my brothers, who died on that day from a cancer that visited me too. Jean-Claude was his name. He was a great guy. I miss him, and sometimes feel guilty for having survived. Hereunder follows the first draft – a sort of preview for those who like this blog and have encouraged me to go on.
The 29 papers which I published on ResearchGate conclude a long period of personal research, which started in earnest when I sent my very first paper, as a young student in applied economics and philosophy, to the 1995 ‘Einstein meets Magritte’ Conference in Brussels. I no longer have that paper, but I remember it vehemently defended the point of view that the ‘uncertainty’ as modeled in the Uncertainty Principle must be some kind of statistical determinism: what else can it be? Paraphrasing the words of H.A. Lorentz at the 1927 Solvay Conference, a few months before his death: there is, effectively, no need to elevate indeterminism to a philosophical principle; determinism has to be kept as ‘an object of faith.’ That is what science is all about. All that is needed is to replace our notion of predictability by the notion of statistical determinism: we can no longer predict what is going to happen, because we cannot or do not know the initial conditions, or because our measurement disturbs the phenomenon we are analyzing, but that is it. There is nothing more to it. That is what Heisenberg’s now rather infamous Uncertainty Principle is all about: it is just what he originally thought about it himself.
I found the metaphor of a fast-rotating airplane propeller a very apt one[1], and several people who wrote to me also said it made them see what it was all about. One cannot say where the blades are, exactly, and if you shoot bullets through it, those bullets will either hit a blade and be deflected or, quite simply, go straight through. There is no third possibility. We can only describe the moving propeller in terms of some density in space. This is why the probabilities in quantum physics are proportional to mass densities or, what amounts to the same thing because of Einstein’s mass-energy equivalence relation, energy densities.
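To make the metaphor concrete, here is a small Python toy (just an illustration, not physics: the blade-coverage fraction is a made-up number). It models the spinning propeller by nothing but the fraction of the disc its blades cover at any instant, and checks that the fraction of ‘bullets’ that get deflected matches that density – which is the whole point: probabilities reflect densities.

```python
import random

def deflection_probability(blade_fraction: float, n_bullets: int = 100_000, seed: int = 42) -> float:
    """Estimate the chance a bullet is deflected by the spinning propeller.

    The propeller is modeled only by the fraction of the disc its blades
    cover at any instant (the 'density' in space). Each bullet arrives at
    a uniformly random phase of the rotation: it either hits a blade and
    is deflected, or goes straight through. There is no third possibility.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_bullets) if rng.random() < blade_fraction)
    return hits / n_bullets

# Three blades, each covering 5% of the disc at any instant -> 15% 'density'
p = deflection_probability(0.15)
```

Run it, and the estimated deflection probability comes out at about 0.15: the bullets see only the density, never the individual blades.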
The propeller metaphor is useful in other contexts too. It explains quantum-mechanical tunneling, for example: if one thinks of matter-particles as pointlike charges in motion – which is what we do[2] – then the fields that surround them will be dynamic and, therefore, be like a propeller too: at one particular point in space and in time, the field will have a magnitude and a direction that will not allow another particle (think of it as a bullet) to get through – as the field acts as a force on the charge – but ‘holes appear in the wall’, so to speak, and they do so in a regular fashion, and then the incoming particle’s kinetic energy – while lower than the average potential energy of the barrier – will carry it through. There is, therefore, nothing weird or mysterious about tunneling.
Many more examples may be mentioned, but then I would be rewriting my papers, and that is not the purpose of this one, which is to conclude my research by revisiting and commenting on the rather vast mass of papers I produced previously: 29 papers in just one year (April 2020 – April 2021). These papers did not bring me fame, but they did generate enough of a readership to produce a decent RG score – as evidenced below (sorry if this looks egotistical: it is not meant that way[3]).
I have effectively been ridiculed by family, friends and – sadly – by quite a few fellow searchers for truth. But I have also been encouraged, and I prefer to remember the encouragements. One of my blog posts is about the suicide of Paul Ehrenfest and other personal tragedies in the history of physics. It notes a remark from a former diplomat-friend of mine: “It is good you are studying physics only as a pastime. Professional physicists are often troubled people—miserable.”
I found it an interesting observation from a highly intelligent outsider who, as a diplomat, meets many people with very different backgrounds. I do understand this strange need to probe things at the deepest level—to be able to explain what might or might not be the case (I am using Wittgenstein’s definition of reality here). I also note that all of the founding fathers of quantum mechanics ended up becoming pretty skeptical about the theory they had created. Even John Stewart Bell – one of the more famous figures in what may be referred to as the third generation of quantum physicists – did not like his own ‘No Go Theorem’ and thought that some “radical conceptual renewal”[4] might disprove his conclusions.
It sounds arrogant, but I think my papers are representative of such renewal. It is, as great thinkers in the past would have said, an idea whose time has come. Einstein’s ‘unfinished revolution’ – as Lee Smolin calls it – was finished quite a while ago, but mainstream researchers just refuse to accept that.[5] And those researchers who think quantum physicists are ‘lost in math’ are right but, unfortunately, usually make no effort to speak up and show the rather obvious way out. Sabine Hossenfelder uses as much guru-like talk as a Sean Carroll.[6]
In May this year, after finishing what I thought of as my last paper on quantum physics, I went to hospital for surgery. Last year, one of my brothers died from prostate cancer at a rather young age: 56, my age bracket. He had been diagnosed but opted for a more experimental treatment instead of the usual surgery, because the consequences of that surgery are effectively very unpleasant and take a lot of joy out of life. I spent a week in a hospital bed, and then a month in my bed at home. I stopped writing. I gave up other things too: I stopped doing sports, and picked up smoking instead. It is a bad habit: Einstein was a smoker and – like me – did not drink, but smoking is bad for one’s health. I feel it. I will quit smoking too, one day – but not now.
The point is: after a long break (more than six months), I did start to engage again in a few conversations, and I also looked at my 29 papers on my ResearchGate page again, and I realized some of them should really be re-written or re-packaged so as to ensure a good flow. I also note now that some of the approaches were more productive than others (some did not lead anywhere at all, actually), and so I felt I should point those out. There are some errors in logic here and there too (small ones, I think, but errors nevertheless), and then quite a few typos.[7] Hence, I thought I should, perhaps, produce an annotated version of these papers, with comments and corrections as mark-ups. Re-writing or re-structuring all of them would require too much work, so I do not want to go there.
So that is what this paper is about: I printed all of the papers, and I will quickly jot down some remarks so as to guide the reader through the package, and alert them to things I thought of as good stuff at the time (otherwise I would not have written about them), but that I now think of as not-so-great.
Before I do so, I should probably make a few general remarks. Let me separate those out in yet another introductory section of this paper.
1. The first remark is that I do repeat a few things quite a lot – across and within these papers. Too much, perhaps. However, there is one thing I just cannot repeat enough: one should not think of the matter-wave as something linear. It is an orbital oscillation. This is really where the Old Great Men went wrong. The paper that has been downloaded the most is, effectively, the one on what I refer to as de Broglie’s mistake: the intuition of the young Louis de Broglie that an electron has a frequency was a stroke of genius (and, fortunately, Einstein immediately saw this, so he could bring this young scientist to the attention of everyone else), but this frequency is an orbital frequency. That, I repeat a lot – because only a few people seem to get it (with ‘a few’, I mean the few thousand people who download that paper).
Having said that, I did not do a good job of pointing out the issues with Dirac’s wave equation: I sort of dismissed it out of hand, referring to Oppenheimer and Dirac’s discussion at the first post-WW II Solvay Conference in my brief history paper on quantum-mechanical ideas, during which they both agree it does not work but fail to provide a consistent alternative. However, I never elaborated on why the equation does not work, so let me do this now.
The reason that it does not work is, basically, the same as the reason why de Broglie’s wave-packet idea does not work: Dirac’s equation is based on the relativistic energy-momentum relation. Just look at Dirac’s 1933 Nobel Prize lecture, in which he gives us the basic equation he used to derive his (in)famous wave equation:
W²/c² – pᵣ² – m²c² = 0
Dirac does not bother to tell us, but this is, basically, just the relativistic energy-momentum relation: m₀²c⁴ = E² – p²c² (see, for example, Feynman-I-16, formula 16.13). Indeed: just divide this formula by c² and re-arrange, and you get Dirac’s equation. That is why Dirac’s wave equation is essentially useless: it incorporates linear momentum only. As such, it repeats de Broglie’s mistake, which is to interpret the ‘de Broglie’ wavelength as something linear. It is not: frequencies and wavelengths are orbital frequencies and orbital circumferences. So anything you would want to do with energy equations that are based on that leads nowhere[8]: one has to incorporate the reality of spin from the start. Spin-zero particles do not exist, and any modeling that starts off from spin-zero particles, therefore, fails: you cannot put spin back in through the back door once you are done with the basic model, so to speak. It just does not work. It is what gives us, for example, those nonsensical 720-degree symmetries, which prevent us from understanding what is actually happening.
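For readers who want to see the energy-momentum relation at work, here is a quick numerical sanity check. This is just standard special relativity, nothing specific to our interpretation; the 0.8c velocity is an arbitrary illustrative choice.

```python
import math

c = 299_792_458.0          # speed of light (m/s)
m_e = 9.1093837015e-31     # electron rest mass (kg)

# An electron moving at 0.8c (the speed is an arbitrary choice):
v = 0.8 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
E = gamma * m_e * c**2     # total relativistic energy
p = gamma * m_e * v        # relativistic momentum

# The invariant E^2 - (pc)^2 must equal (m0*c^2)^2, whatever the speed:
invariant = E**2 - (p * c) ** 2
rest_energy_squared = (m_e * c**2) ** 2
```

Whatever velocity you plug in, the invariant equals the squared rest energy: that is the relation Dirac started from.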
2. The second remark that I should make is that I did not pay enough attention to the analysis of light-particles: photons and neutrinos and, possibly, their antiforce or antimatter counterparts. Huh? Their antiforce counterparts? Yes. Remember: energy is measured as a force over a distance, and a force acts on a charge. And Einstein’s mass-energy equivalence relation tells us we should think of mass in terms of energy. Hence, if we know the force, we have got everything. Electrons and protons have a very different charge/mass ratio (q/m) and, therefore, involve two very different forces, even if we think of these two very different forces – which we could refer to as ‘weak’ and ‘strong’ respectively, but that would generate too much confusion because these terms have already been used – as acting on the same charge.
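To put some numbers on that ‘very different charge/mass ratio’ (these are just the standard CODATA values, nothing model-specific):

```python
# Standard CODATA values (SI units)
e = 1.602176634e-19        # elementary charge (C), exact by SI definition
m_e = 9.1093837015e-31     # electron mass (kg)
m_p = 1.67262192369e-27    # proton mass (kg)

q_over_m_electron = e / m_e   # about 1.76e11 C/kg
q_over_m_proton = e / m_p     # about 9.58e7 C/kg

# Same charge, very different mass: the ratio of the two q/m values
# is just the proton-to-electron mass ratio, about 1836.
ratio = q_over_m_electron / q_over_m_proton
```

The same unit of charge, but q/m differing by a factor of about 1836: that is the difference the two-forces hypothesis tries to account for.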
I refer to my paper(s) on this: the hypothesis is, basically, that we have two different forces, indeed! One that keeps, say, the electron together, which is nothing but the electromagnetic force, and one that is much stronger and seems to have a somewhat different structure. That is the force that keeps a muon-electron or a proton together. The structure of this much stronger force is the same because it also acts on a charge, and we also have two field vectors: think of the magnetic field vector lagging the electric field by 90 degrees. However, it is also not the same because the form factor differs: orbital oscillations can be either planar or spherical (2D or 3D).
I will not go into the detail here – again, I would be rewriting the papers, which is not what I want to do here – but the point is that antimatter is defined by an antiforce, which sees the magnetic field vector preceding the electric field vector by the same phase difference (90 degrees). It is just an application of Occam’s Razor Principle: the very same principle which made Dirac predict the existence of the positron: if the math shows there is some possibility of something else existing – a positively charged ‘electron’, at the time – then that possibility must be real, and we must find ‘that thing’. The history of science has shown scientists always did.
That is all clear enough (or not), but the point here is this: the lightlike particles (photons and neutrinos) that carry the electromagnetic and nuclear force respectively (I refer to that strong(er) force as ‘nuclear’ for rather obvious reasons[9]) must have anti-counterparts: antiphotons and antineutrinos. And so I regret that I did not do much analysis on that. I am pretty sure, for example, that antiphotons must play a role in the creation of electron-positron pairs in experiments such as SLAC’s E144 experiment (pair production out of light-on-light (photonic) interaction).
In short, I regret I did not have enough time and/or inspiration to analyze such things in much more detail than I did in my paper on matter-antimatter pair production/annihilation, especially because that is a paper that gets a lot of downloads too, so I feel I should rework it to present more material and better analysis. It is unfortunate that energy and time are limited in a man’s life. The question is, effectively, very interesting because the ‘world view’ that emerges from my papers is a rather dualistic one: we have the concept of charge on the one hand, and the concept of a field on the other. Matter-antimatter pair creation/annihilation from/into photons suggests that charge may, after all, be reducible to something that is even more fundamental. That is why I bought a rather difficult book on chiral field theory (Lähde and Meißner, Nuclear Lattice Effective Field Theory, 2019), but an analysis of that will probably be a retirement project or something.
3. The remark above directly relates to something else I think I did not do so well, and that is to explain Mach-Zehnder interference by a model in which we think of circularly polarized photons (or elliptically polarized, I should say, to be somewhat more general) as consisting of two linear components, which we may actually split from each other by a beam splitter. That takes the mystery out of Mach-Zehnder interference, but I acknowledge my analysis in a paper like my ‘K-12 level paper’ on quantum behavior (which gives a one-page overview of the logic) may be too short to convince skeptical readers. The Annex to my rather philosophical paper on the difference between a theory, a calculation and an explanation is better, but even there I should have gone much further than I did.[10]
4. I wrote quite a few papers that aim to develop a credible neutron and/or deuteron model. I think of the neutron in very much the same way as Ernest Rutherford, the intellectual giant who first hypothesized the existence of the neutron based on cosmological research, thought about it: a positively charged proton or other nuclear particle attached to some kind of deep electron.[11] It is worth quoting his instinct on this, as expressed at the 1921 Solvay Conference, in response to a question during the discussions on his paper on the possibility of nuclear synthesis in stars or nebulae. The question came from the French physicist Jean Baptiste Perrin who, independently from the American chemist William Draper Harkins, had proposed the possibility of hydrogen fusion just the year before (1919):
“We can, in fact, think of enormous energies being released from hydrogen nuclei merging to form helium—much larger energies than what can come from the Kelvin-Helmholtz mechanism.[12] I have been thinking that the hydrogen in the nebulae might come from particles which we may refer to as ‘neutrons’: these would consist of a positive nucleus with an electron at an exceedingly small distance (“un noyau positif avec un électron à toute petite distance“[13]). These would mediate the assembly of the nuclei of more massive elements. It is, otherwise, difficult to understand how the positively charged particles could come together against the repulsive force that pushes them apart—unless we would envisage they are driven by enormous velocities.”
We may add that, just to make sure he gets this right, Rutherford is immediately requested to elaborate his point by the Danish physicist Martin Knudsen, who asks him this: “What’s the difference between a hydrogen atom and this neutron?” Rutherford simply answers as follows: “In a neutron, the electron would be very much closer to the nucleus.”
In light of the fact that it was only in 1932 that James Chadwick would experimentally prove the existence of neutrons, we should be deeply impressed by the foresight of Rutherford and the other pioneers here: the predictive power of their theories and ideas is truly amazing by any standard—including today’s. It may have something to do with the fact that the distinction between theoretical and experimental physicists was not so clear then.[14] The point is this: we fully subscribe to Rutherford’s intuition that a neutron should, somehow, be a composite particle consisting of a proton and an electron, but we did not succeed in modeling that convincingly. We explored two ways to go about it:
One is to think of a free neutron which, we should remind ourselves, is a semi-stable particle only (its lifetime is a bit less than 15 minutes, which is an eternity in comparison to other non-stable particles). The challenge is then to build a credible n0 = p+ + e– model.
The other option is to try to build a neutron model based on its stability inside of the deuteron nucleus. Such a model should probably be based on Schrödinger’s D+ = p+ + e– + p+ Platzwechsel model, which thinks of the electron as a sort of glue holding the two positive charges together.
The first model is based on the assumption that we have two forces, very much like the centripetal and centrifugal force inside of a double star. The difference – with a double-star model, that is – is that the charges have no rest mass.[15] The nature of those two forces is, therefore, very different from (1) the centripetal gravitational force that keeps the two stars together and (2) the centrifugal force that results from their kinetic energy and/or orbital momentum. We assumed the attractive force between the p+ and e– is the usual electromagnetic force between two opposite charges (so that keeps them together). However, because the two charges clearly do not just go and sit on top of each other, we also assumed a ‘nuclear’ force acts at very close distances, and we tried to model this by introducing a Yukawa-like nuclear potential.
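As a purely illustrative sketch of what a Yukawa-like potential does (units normalized to 1, and the range parameter a = 1 is an arbitrary choice here, not a fitted value): it tracks the Coulomb potential at short distances but dies off exponentially beyond its range, which is exactly why it only matters ‘at very close distances’.

```python
import math

def coulomb(r: float) -> float:
    """Attractive Coulomb potential, with all constants normalized to 1."""
    return -1.0 / r

def yukawa(r: float, a: float = 1.0) -> float:
    """Yukawa-like screened potential; 'a' is the (arbitrary) range parameter."""
    return -math.exp(-r / a) / r

# At r much smaller than a, the two potentials nearly coincide;
# beyond the range a, the Yukawa potential dies off exponentially.
near = yukawa(0.1) / coulomb(0.1)    # close to 1
far = yukawa(5.0) / coulomb(5.0)     # close to 0
```

The ratio between the two potentials is just exp(−r/a): about 0.90 at r = 0.1, and well below 1% at r = 5.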
We will discuss this in more detail when commenting on our papers in the next section, but the truth is that we feel we have not been able to develop a fully consistent model: it is not like our electron or proton model, which yields fully consistent calculations of the experimentally measured mass, radius, magnetic moment and other so-called intrinsic properties (e.g. the anomaly in the magnetic moment of the electron) of these two elementary particles. We could not do the same for the neutron. However, we hope some smart PhD student will try his or her hand at improving on our models and succeed where we did not.
As for the second model (the deuteron nucleus model), we did not work all that out because that is, basically, an even more complicated problem than the math of a classical three-body problem which, as you know, has no analytical solution. So we inevitably have to lump two bodies together – the two protons might make for a nice massive pair, for example – but then you lose the idea of the neutron. In other words, it may give you a deuteron model, but nothing much in terms of a neutron model.
5. Those were the main frustrations, I think. We will probably point out others too in the more detailed paper-by-paper comments in the next section, but I would like to make one or two more remarks regarding style and conversation culture in physics now.
The main remark is this: I did some research in economics (various sub-disciplines ranging from micro-economics to the history of economic thought) and I found the conversational style of fellow researchers in those fields much more congenial and friendly than in physics. It may have something to do with the fact that such study was done while I was young (so that was almost 30 years ago and people were, quite simply, friendlier then, perhaps), but I also think there might be a different reason. I was (and still am) interested in quantum physics because I wanted to know: this search for truth in modeling (or whatever you want to call it) is rooted in a deep need or desire to understand reality. Personally, I think the Uncertainty Principle got elevated to some kind of metaphysical principle because some of the scientists wanted to reserve a space for God there. I am not religious at all, and if God exists, I am sure he would not be hiding there but inside of our mind.
In any case, my point here is this: I think there is an emotional or religious aspect to discussions on fundamentals that is absent in the social sciences and which, in most cases, quickly turns these discussions personal or even aggressive. As an example, I would refer to all these ‘relativity doubters’ that pop up in the more popular or general ResearchGate discussion threads on the ‘consistency’ of quantum physics, or the pros and cons of modern cosmological theories. I vented my frustration about that on my blog a few times (here is an example of my issues with SRT/GRT doubters), and then I just stop arguing or contributing to these threads. I do find it sad, because a lot of people like me probably do the same: they stop engaging, and that probably makes the ignorance even worse, and then there is no progress at all, of course!
However, having said this, I also note unfriendliness is inversely proportional to expertise, knowledge and experience. In other words: never be put off by anyone. I did go through the trouble of contacting the PRad Research Lab and people like Dr. Randolf Pohl (Max Planck Institute), and I got curt but useful answers from them: answers that challenged me, but those challenges have helped me to think through my models and have contributed to solidifying my initial intuitions, which I would sum up as follows: there is a logical interpretation of everything. I refer to it as a realist interpretation of quantum physics and, as far as I am concerned, it is pretty much the end of physics as a science. We do know it all now. There is no God throwing dice or tossing coins. Statistical determinism, yes, but it is all rooted in formulas and closed mathematical models representing real stuff in three-dimensional space and one-dimensional time.
Let me now (self-)criticize my own papers one by one. 😊
Note: I briefly tried to hyperlink the titles (of the papers) to the papers themselves, but the blog editor (WordPress) returned an error. I guess this blog post is quite long and has too many links already. In any case, the titles do refer to the papers on my RG site, and the reader can consult them there.
No comments. We think this paper gives a rather nice overview of what made sense to us. We also like the two annexes because they talk about quantum-mechanical operators and show why and how the argument of the wavefunction incorporates (special) relativity (SRT/GRT naysayers should definitely read this).
There is a remnant of one of the things we tried that did not yield much: a series expansion of kinetic and/or potential energy from Einstein’s mass-energy equivalence relation. That resulted from a discussion with researchers trying to model other deep electron orbitals (other than the ‘deep’ electron in a neutron or a deuteron nucleus): they were thinking of potentials in terms of first-, second-, third-, etc.-order terms, so as to simplify things. I went along with it for a while because I thought it might yield something. But it did not. Hence, I would leave that out now, because the reader probably wonders what it is that I am trying to do, and rightly so!
This is one in a series of what I jokingly thought of as a better or more concise version of Feynman’s Lectures on Physics. I wrote six of these. Feynman once selected ten ‘easy pieces’ and ten ‘not-so-easy’ pieces from his own lectures, if I am not mistaken – but these should qualify as relatively ‘easy’ pieces (in comparison with the other papers, that is).
It downplays the concept of the gyromagnetic ratio in quantum mechanics somewhat by focusing on the very different charge/mass ratio for the electron and a proton (q/m) only. For the rest, there is nothing much to say about it: if you are a student in physics, this is the math you surely need to master!
This paper is one of those attempts to be as short as I can be. I guess I wanted it to be some kind of memorandum or something. It still developed into five pages, and it does not add anything to the longer papers. Because it is short and has no real purpose besides providing some summary of everything, I now think its value is rather limited. I should probably take it down.
This is one of the papers on a neutron or deuteron model. I think the approach is not bad. The use of orbital energy equations to try to model the orbital trajectories of (zero rest-mass) charges instead of the usual massive objects in gravitational models is promising. However, it is difficult to define what the equivalent of the center of mass would be in such models. One might think it should be the center of ‘energy’, but the energy concepts are dynamic (potential and kinetic energy vary all the time). Hence, it is difficult to precisely define the reference point for the velocity vector(s) and all that. We refer to our general remarks for what we think these papers might have yielded, and what not. For the rest, we let the reader go through them and, hopefully, try to do better.
We like this paper very much because it shows why quaternion math should be used far more often than it actually is in physics: it captures the geometry of proton and neutron models so nicely. We will probably want to delve into this more as yet another retirement project. We also like this paper because it is short and crisp.
Probably not our best paper, and one that should or could be merged with others covering the same topics. However, the philosophical reflections in this paper – on the arrow of time and what is absolute and relative in physics – are nice and can be readily understood. They would probably come first if ever we would want to write a textbook or something. We also recommend the dimensional analysis of the basic equations of physics, which is essential: modern-day papers usually do not bother to check or comment on it.
This is one of those papers which show the shortcomings of our approach to modeling anything ‘nuclear’. The idea of two or three charges holding each other together and pushing each other apart simultaneously – with two opposite forces acting, just like the centripetal and centrifugal force in any gravitational model – is nice, and we think the substitution of mass by some combination of charge and mass in the orbital energy equation is brilliant (sorry if this sounds egotistical again) but, as mentioned above, it is difficult to define what the equivalent of the center of mass would be in such models.
Also, because of the distance functions involved (the ‘nuclear’ force in such a model varies with the square of the distance and is, therefore, non-linear), one does not get any definite solution to the system: we derived a lower limit for a ‘range’ factor for the nuclear force, for example (and its magnitude corresponds more or less to what mainstream physicists – rather randomly – use in Yukawa-like potentials[17]).
It would be an interesting area for modeling if and when I would have more time and energy for these things, so I do hope others pick up on it and, hopefully, do better.
Same remarks as above: I like this paper because it is short. I also allow myself to blast away at quark-gluon theories (‘smoking gun physics’, as I call it[18]). There are also the explanations of useful derivatives of the wavefunction, which show why and how our geometric interpretation of the wavefunction makes sense.
We also quickly demonstrate the limitations of the scattering matrix approach to modeling unstable particles and particle system processes, despite the fact that we do love it: the problem is just that one loses track of directions, and we, therefore, cannot explain even very simple stuff such as scattering angles in Compton scattering processes using that S-matrix approach. Here too, we hope some clever people might ‘augment’ the approach.
We like this paper. It deserves a lot more downloads than it gets, we think. It is the proper alternative to all kinds of new ‘conservation laws’ – and the associated new ‘strange’ properties of particles – that were invented to make sense of the growing ‘particle zoo’. The catalogue of the Particle Data Group should be rewritten, we feel. 😊
Of course, any physicist should be interested in cosmology – if only because any Big Bang theory uses pair creation/annihilation theories rather extensively. As mentioned in our general remarks, we still struggle with these theories and, yes, they are definitely on our list as a retirement project.
The main value of the paper is that it offers a consistent explanation of ‘dark matter’ in terms of antimatter, and also that it does not present the apparently accelerating pace of the expansion of the Universe as something that is necessarily incongruent: there may be other Universes around, beyond what we can observe. The paper also offers some other ‘common-sense’ explanations: none of them involves serious doubts on standard theory (we do not doubt anything like SRT and/or GRT). We, therefore, think that this paper shows that I am much more ‘mainstream’ and far less ‘crackpot’ than my ‘enemies’ pretend I am. 😊
This is definitely my worst paper in terms of structure. It has no flow and jumps from this to that. Even when I read it myself, I wonder what it is trying to say. I must have been in a rather weird mood when I wrote it, and then it got too long and I probably suddenly had enough of it.[19] The conclusions do sound like I had gone mad: if my kids or someone else had read it before I published it, they might have prevented me from doing so. In any case, it is there now. I will probably take it off one day.
Of course, I note the month of writing: my specialist had just confirmed my prostate cancer was very aggressive, and that I had to do the surgery sooner rather than later if I wanted to avoid what had killed my brother just months before: metastasis to the kidneys and other organs. And my long-term girlfriend had just broken up with me – again. And I had just come back from yet another terrible consultancy job in Afghanistan. Looking into my diary of those days, I had probably relapsed into a bit of drinking, and too many parties with the ghosts of Oppenheimer and Ehrenfest. In short, I should take that paper off the web, but I will leave it there just for the record.
This paper is better than the one mentioned above but – at the same time – suffers from the same defects: no clear flow in the argument, 'jumpy', and lots of 'deus ex machina'-like additions and asides.[20] Its only advantage is that it does offer a rather clear explanation of what works and what probably cannot work in Wheeler's geometrodynamics programme: mass-without-mass models are fine. The way to go: forces act on charges, energy is force over a distance, and mass relates to energy through Einstein's mass-energy equivalence relation. No problem. But the concept of charge is difficult to reduce. Chiral field theories may yet prove to do that, but I am rather skeptical. I bought the most recent book(s) on that, but I need to find the time and energy to work my way through them.
This is a much more focused paper. However, I cannot believe I inserted remarks on the 'elasticity' of spacetime there: that smacks of what physicist and Nobel Prize winner Robert B. Laughlin wrote[21]:
“It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed [..] The word ‘ether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. . . . Relativity actually says nothing about the existence or nonexistence of matter pervading the universe, only that any such matter must have relativistic symmetry. [..] It turns out that such matter exists. About the time relativity was becoming accepted, studies of radioactivity began showing that the empty vacuum of space had spectroscopic structure similar to that of ordinary quantum solids and fluids. Subsequent studies with large particle accelerators have now led us to understand that space is more like a piece of window glass than ideal Newtonian emptiness. It is filled with ‘stuff’ that is normally transparent but can be made visible by hitting it sufficiently hard to knock out a part. The modern concept of the vacuum of space, confirmed every day by experiment, is a relativistic ether. But we do not call it this because it is taboo.”
I was intrigued by that, because I was still struggling somewhat with the meaning of various ratios in my 'oscillator' model of elementary particles, but I now think any reference to an 'aether-like' quality of spacetime is not productive. Space and time are, effectively, categories of our mind – as Immanuel Kant had already pointed out about 240 years ago (it is interesting that the Wikipedia article on Einstein notes that Albert Einstein had digested all of Kant's philosophy by the age of twelve). Space and time are relativistically related (there is no 'absolute' time that 'pervades' all of 3D space), but there is no reason whatsoever to think of relativistic spacetime as being aether-like. It is just the vacuum in which Maxwell's electromagnetic waves propagate. There is nothing more to it.
See the general remarks on my attempts to develop a decent model of the neutron and the deuteron nucleus. They were triggered by interesting discussions with a Canadian astrophysicist (Andrew Meulenberg), a retired American SLAC researcher (Jerry Va'vra) and a French 'cold fusion' researcher (Jean-Luc Paillet). I was originally not very interested, because these discussions are aimed at proving that a smaller version of the hydrogen atom (usually referred to as the 'hydrino') must exist, and that this 'hydrino' would offer endless possibilities in terms of 'new energy' production. The whole enterprise is driven by one of the many crooks that give the field of 'cold fusion' a bad name, but who managed to get lots of private funding nevertheless: Randell L. Mills, the promoter of the Brilliant Light Power company in New Jersey. The above-mentioned researchers are serious. I do not think as highly of Randell Mills, although I note he impresses people with his books on 'classical quantum physics'. I note a lot of 'hocus-pocus' in these books.
This is one of those ‘Feynman-like’ lectures I wrote. I think of all of them as rather nice. I do not go into speculative things, and I take the trouble of writing everything out, so the reader does not have to do all that much thinking and just can ‘digest’ everything rather easily.
This is definitely one of the papers I wanted to further develop if ever I would have more time and energy. See my general remarks: SLAC’s E144 experiment (and similar experiments) are very intriguing because they do seem to indicate the quintessential concept of charge may be further reducible to ‘field-like’ oscillations. I must thank André Michaud here for kindly pointing that out to me.
I think of this paper as highly relevant and practical. It points out why the common view that Schrödinger's wave equation would not be relativistically correct is erroneous: that view is based on an erroneous simplification in the 'heuristic' derivation of this wave equation in the context of, yes, crystal lattices. Definitely one of the better papers when I look back at it now – just like the other 'lecture-like' papers. The history of these 'lecture-like' papers is simple: I realized I needed to write more 'K-12 level' papers (although they are obviously not really K-12 level) so as to be able to communicate better on the 'basics' of my realist interpretation of quantum physics and the 'essentials' of my elementary particle models.
The paper usefully distinguishes concepts that are often used interchangeably, but must be distinguished clearly: waves, fields, oscillations, amplitudes and signals.
This is an oft-downloaded paper, and the number of downloads reflects its value: it does offer a rather clear overview of all of my work on ‘interpreting’ the wavefunction, and shows its geometrical meaning. Hence, I will not comment on it: it speaks for itself.
I like this paper. It was meant to present a sort of 'short-cut' for people who want to learn about physics fast and, therefore, will want to avoid all of the mistakes I made when trying to understand it.
Same remark as for the other ‘lecture-like’ papers: I think of this as a ‘nice’ paper covering all you would want and need to know about the concept of fields.
This paper talks about where Feynman went wrong in his Lectures. Parvus error in principio magnus est in fine – a small error in the beginning becomes a great one in the end (as Aquinas and, before him, Aristotle said so eloquently) – and the 'small mistake at the beginning' is surely not a 'happy' one! I consider the discovery of this 'mistake' to be my greatest personal 'discovery' in terms of making sense of it all, and so I do recommend that any interested reader go through the paper.
This is like the other lectures: a rather straightforward treatment – this time of the concept of probability amplitudes, and the related math and physics.
I appreciate this paper in the same vein: quite straightforward and to the point. It explains the basic ‘mysteries’ which are usually presented in the first course on quantum mechanics at any university in terms that are readily understandable, and shows these are not ‘mysteries’ after all!
This paper further expands on what I consider to be my best paper of all, which is the next one (on de Broglie’s matter-wave). It gets a fair amount of downloads, and so I am happy about that.
Of all papers, this is definitely the one I would recommend reading if you have time for only one. See my general remarks on why mainstream QED/QFT does not work. The only thing I should have added are some remarks on Dirac's equation (this paper has an Annex on wave equations, so I should have talked about Dirac's too). But I did that in the introductory section with general remarks on all of my papers above.
I like this paper too. It is not as technical as all of the others, so the 'lay' reader may want to go through this one. It traces a rather 'bad' history of ideas that led nowhere – but that is useful to see what should work, and does work, in the field of quantum physics!
I like this one too. It should probably be read in combination with the above-mentioned paper on the bad ideas in the history of quantum physics.
It is fifty (50!) pages, though. But it has some really interesting things, such as a much more consistent presentation of why Mach-Zehnder interference ('one-photon' diffraction, or the so-called 'interference of a photon with itself') is not as mysterious as it appears to be. It surely should not be explained in terms of nonsensical concepts such as non-locality, entanglement and what have you in modern-day gibberish.
This was my very first 'entry' on ResearchGate. It is based on the 60-odd papers and the hundreds of blog posts I had published in the decades before, on sites such as viXra.org that are not considered to be mainstream and are, therefore, shunned by most. In fact, in the very beginning, I copied my papers to three sites: ResearchGate, viXra.org and academia.edu. I stopped doing that when things picked up on RG. I do think of it as the most serious site of the three. 😊
[…]
Well… That is it! If you got here, congratulations for your perseverance!
Jean Louis Van Belle, 6 December 2021
[1] I downloaded the image from a website selling Christmas presents a long time ago, and I have not been able to trace where I got it. If someone recognizes this as their picture, please let us know and we will acknowledge the source or remove it.
[2] Particles are small – very small – but not infinitesimally small: they have a non-zero spatial dimension, and structure! Only light-like particles – photons and neutrinos – are truly pointlike, but even they do have a structure as they propagate in relativistic spacetime.
[3] I got the label of 'crackpot theorist' or the reproach of 'not understanding the basics' a bit too often, and too often from people who do have better academic credentials in the field but a publication record that is far less impressive – or credentials in an unrelated field.
[4] See: John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169–172, Cambridge University Press, 1987 (quoted from Wikipedia). J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics and which he, therefore, did not receive (Nobel Prizes are not awarded posthumously). He was just 62 years old then.
[5] We think the latest revision of the SI units (2019) consecrates that: that revision completes physics. It fixes the values of a precise set of constants of Nature, and simplifies the system such that it is complete without redundancy. It, therefore, respects Occam's Razor: the number of degrees of freedom in the description matches what we find in Nature. Besides prof. dr. Pohl's contributions to solving the proton radius puzzle, his role in the relevant committees on this revision probably also makes him one of the truly great scientists of our era.
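For reference, the 2019 revision fixes exactly seven defining constants, whose numerical values are exact by definition. A quick sketch (the values below are the exact defining values from the revision; the variable names are just illustrative):

```python
# The seven exact defining constants of the 2019 SI revision.
SI_DEFINING_CONSTANTS = {
    "Delta_nu_Cs": 9_192_631_770,      # Hz: caesium-133 hyperfine transition
    "c":           299_792_458,        # m/s: speed of light in vacuum
    "h":           6.626_070_15e-34,   # J*s: Planck constant
    "e":           1.602_176_634e-19,  # C: elementary charge
    "k":           1.380_649e-23,      # J/K: Boltzmann constant
    "N_A":         6.022_140_76e23,    # 1/mol: Avogadro constant
    "K_cd":        683,                # lm/W: luminous efficacy of 540 THz light
}

# Every SI unit is now derived from these: e.g. the second is defined by
# requiring Delta_nu_Cs to be exactly 9 192 631 770 Hz.
```

This is what 'complete without redundancy' amounts to in practice: seven numbers, no more, no less.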
[6] We contacted both. Ms. Hossenfelder never reacted to our emails. Mr. Carroll quoted some lines from John Baez’ ‘crackpot index’. I had heard such jokes before so I did not find them so amusing anymore.
[7] Sometimes I find an error even in a formula. That is annoying, but then it is also good: it makes readers double-check and look at the material more carefully. It makes them think for themselves, which is what they should do.
[8] Dirac basically expands this basic energy-momentum relation into a series, but the mathematical conditions under which such an expansion is valid are, apparently, not there. The first-, second-, third-, fourth-, etc.-order terms do not converge, and one gets those 'infinities' which blow it all up – which is why Dirac, nearing the end of his life, got so critical of and annoyed by the very theory his wave equation led to: quantum field theory. Reading between the lines, a number of Nobel Prize winners in physics do seem to reject some of the theories for which they got the award. W.E. Lamb is one of them: at a rather old age, he wrote a paper that was highly critical of the concept of a photon, despite the fact that his contributions to this field of study had yielded him a Nobel Prize! Richard Feynman is another example: he got a Nobel Prize for a number of modern contributions, but his analysis of 'properties' such as 'strangeness' in his 1963 Lectures on Physics can be read as being highly critical of the 'ontologizing' of concepts such as quarks and gluons, which he seems to think of as mathematical concepts only. I talk a bit about that in my paper on the alternative to modern-day QED and QFT (a new S-matrix programme), so I will not say more about this here.
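The convergence point can be illustrated with elementary numbers. Writing E = mc²·√(1 + x) with x = (pc/mc²)², the binomial series for √(1 + x) converges only for |x| ≤ 1, i.e. for pc ≤ mc², and the usual cut-off after the p² term is only innocent for small momenta. A minimal numerical sketch (this illustrates only the elementary convergence fact, not the full argument about the infinities of QFT):

```python
import math

def sqrt_series_terms(x, n):
    """First n terms of the binomial series sqrt(1+x) = sum_k C(1/2, k) * x^k."""
    terms, c = [], 1.0  # c = C(1/2, 0) = 1
    for k in range(n):
        terms.append(c * x**k)
        c *= (0.5 - k) / (k + 1)  # recurrence for the binomial coefficients
    return terms

# Inside the convergence region (|x| < 1) the partial sums settle down:
converging = sum(sqrt_series_terms(0.5, 60))   # ~ sqrt(1.5)

# Outside it (|x| > 1, i.e. pc > m*c^2) the terms grow without bound:
diverging = sqrt_series_terms(2.0, 60)         # |last term| >> |early terms|

# Even where the series converges, the cut-off after the second-order term
# (rest energy + Newtonian kinetic energy; m = c = 1 here) is only good
# for small momenta:
def energy_exact(p):
    return math.sqrt(1 + p**2)

def energy_cutoff(p):
    return 1 + p**2 / 2

small_p_error = abs(energy_exact(0.01) - energy_cutoff(0.01))  # tiny
large_p_error = abs(energy_exact(1.0) - energy_cutoff(1.0))    # substantial
```

Whether the non-convergence of such expansions really is the root of the renormalization problem is, of course, the author's thesis, not something this toy calculation can settle.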
[10] I think I do a much better job at explaining interference and/or diffraction of electrons in the mentioned papers, although the reader may also be hungry for more detail there.
[11] The reader should note that, although the mass of an electron is only about 1/2000 of that of a proton, the radius of a (free) electron is actually much larger than the radius of a proton. That is a strange thing, but it is what it is: a proton is very massive because of the very strong (nuclear) force inside. Hence, when trying to visualize these n = p + e models, one should think of something like an electron cloud with a massive positive charge whirling around in it – rather than the other way around.
[12] The interested reader can google what this is about.
[13] It is a weird coincidence of history that the proceedings of the Solvay Conferences are publicly available in French, even if many papers must have been written in English. The young Louis de Broglie was one of those young secretaries tasked with translations in what was then a very prominent scientific language: French. It got him hooked, obviously.
[14] When reading modern-day articles in journals, one gets the impression a lot of people theorize an awful lot about very little empirical or experimental data.
[15] The idea is that the pointlike charge itself has no inertial mass. It, therefore, goes round and round at the speed of light. However, while doing so, it acquires an effective mass, which is (usually) half of the total mass of the particle as a whole. This ½ factor confuses many, but it should not. It comes directly out of the energy equipartition principle, and can also be derived from rather straightforward relativistically correct oscillator energy calculations (see p. 9 of our paper on the meaning of the wavefunction).
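Whatever one makes of the oscillator model itself, the equipartition fact behind the ½ factor is elementary and easy to verify: for a harmonic oscillation, the time-averaged kinetic and potential energies each equal half of the total energy. A quick numerical check (the values of m, w and a are arbitrary; this verifies only the equipartition statement, not the particle model):

```python
import math

# Harmonic oscillation x(t) = a*sin(w*t): average the kinetic and potential
# energies over one full period T = 2*pi/w.
m, w, a = 1.0, 2.0, 0.5
N = 100_000
T = 2 * math.pi / w

ke_avg = pe_avg = 0.0
for i in range(N):
    t = i * T / N
    v = a * w * math.cos(w * t)            # velocity
    x = a * math.sin(w * t)                # position
    ke_avg += 0.5 * m * v**2               # kinetic energy sample
    pe_avg += 0.5 * m * w**2 * x**2        # potential energy sample
ke_avg /= N
pe_avg /= N

E_total = 0.5 * m * w**2 * a**2            # total oscillator energy
# ke_avg and pe_avg each come out to E_total / 2: the 1/2 factor.
```

The same result follows analytically from the time averages of sin² and cos² over a period, each being ½.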
[17] We get a value that is twice as large as the usual 2.8 fm range. By the way, we think of the latter value as being 'rather random', because it is just the deuteron radius. Indeed, if, as a nuclear scientist, you do not have any idea about what range to use for a nuclear scale factor (which is pretty much the case), then that is surely a number that comes in handy, because it is empirical rather than theoretical. We honestly think there is nothing more to it, but academics will probably cry foul and say that their models are much more sophisticated than what I suggest here. I will be frank: can you show me why and how, not approximately but exactly?
[18] If you click on the link, you will see my blog post on it, which also thinks of the Higgs particle – a 'scalar' particle, really? – as a figment of the mind. My criticism of these theories, which can never really be proven, goes back years, and it has not softened. On the contrary.
[19] This is also a paper with a fair number of typos. On page 36, I talk of the prediction of the proton, for example. Of course, I meant to say: the prediction of the existence of the positron. Such typos are bad. I am ashamed.
[20] Some of these 'asides' do get more attention in later papers (e.g. this paper has the early thinking on using orbital energy equations to model orbitals of pointlike charges instead of masses), but they come across as rather chaotic and not well thought through in this paper, because they were chaotic and not well thought through at that point in time.
After a long break (more than six months), I have started to engage again in a few conversations. I also looked at the 29 papers on my ResearchGate page, and I realize some of them would need to be re-written or re-packaged so as to ensure a good flow. Also, some of the approaches were more productive than others (some did not lead anywhere at all, actually), and I would need to point those out. I have been thinking about how to approach this, and I think I am going to produce an annotated version of these papers, with comments and corrections as mark-ups. Re-writing or re-structuring all of them would require too much work.
The mark-up of those papers is probably going to be based on some 'quick-fire' remarks (a succession of thoughts triggered by one and the same question) which came out of the conversation below, so I thank these thinkers for having kept me in the loop of a discussion I had followed but not reacted to. It is an interesting one – on the question of 'deep electron orbitals' (read: whether orbitals of negative charge inside of a nucleus exist and, if so, how one can model them). If one could solve that question, one would have a theoretical basis for what is referred to as low-energy nuclear reactions. That was formerly known as cold fusion, which got a bit of a bad name because of a number of crooks spoiling the field, unfortunately.
PS: I leave the family names of my correspondents in the exchange below out so they cannot be bothered. One of them, Jerry, is a former American researcher at SLAC. Andrew – the key researcher on DEPs – is a Canadian astrophysicist, and the third one – Jean-Luc – is a rather prominent French scientist in LENR.
From: Jean Louis Van Belle Sent: 18 November 2021 22:51 Subject: Staying engaged (5)
Oh – and needless to say, Dirac's basic equation can, of course, be expanded using the binomial expansion – just like the relativistic energy-momentum relation – and one can then 'cut off' the third-, fourth-, etc.-order terms and keep the first- and second-order terms only. Perhaps it is equations like that which kept you puzzled (I should check your original emails). In any case, this way of going about energy equations for elementary particles is a bit the same as the approach used in perturbation equations, in which – as Dirac complained – one randomly selects terms that seem to make sense and discards others because they do not seem to make sense. Of course, Dirac criticized perturbation theory much more severely than this – and rightly so. 😊 😊 JL
From: Jean Louis Van Belle Sent: 18 November 2021 22:10 Subject: Staying engaged (4)
Also – I remember you had some questions on an energy equation – not sure which one – but I found Dirac's basic equation (based on which he derives the 'Dirac' wave equation) is essentially useless because it incorporates linear momentum only. As such, it repeats de Broglie's mistake, which is to interpret the 'de Broglie' wavelength as something linear. It is not: the frequencies and wavelengths are orbital frequencies and orbital circumferences. So anything you would want to do with energy equations that are based on that leads nowhere – in my not-so-humble opinion, of course. To illustrate the point, compare the relativistic energy-momentum relation and Dirac's basic equation in his Nobel Prize lecture (I hope the subscripts/superscripts get through your email system so they display correctly): E² = p²c² + m²c⁴.
Divide the relativistic energy-momentum relation E² = p²c² + m²c⁴ by c² and re-arrange, and you get Dirac's starting point: W²/c² – p_r² – m²c² = 0 (see his 1933 Nobel Prize Lecture).
So that cannot lead anywhere. It’s why I totally discard Dirac’s wave equation (it has never yielded any practical explanation of a real-life phenomenon anyway, if I am not mistaken).
Cheers – JL
From: Jean Louis Van Belle Sent: 18 November 2021 21:49 Subject: Staying engaged (3)
Just on 'retarded sources' and 'retarded fields' – I have actually tried to think of the 'force mechanism' inside of an electron or a proton (what keeps the pointlike charge in its geometric orbit around a center of mass?). I thought long and hard about some kind of model in which the charge radiates out a sub-Planck field, whose 'retarded effects' might arrive 'just in time' at the other side of the orbital (or whatever other point on the orbital) so as to produce the desired 'course correction'. I discarded it completely: I am now just happy that we have 'reduced' the mystery to this 'Planck-scale quantum-mechanical oscillation' (in 2D or 3D orbitals) without the need for an 'aether', or quantized spacetime, or 'virtual particles' actually 'holding the thing together'.
Also, a description in terms of four-vectors (scalar and vector potential) does not immediately call for 'retarded time' variables and all that, so that is another reason why I think one should somehow make the jump from E-B fields to the scalar and vector potential, even if the math is hard to visualize. If we want to 'visualize' things, Feynman's discussion of the 'energy' and 'momentum' flow in https://www.feynmanlectures.caltech.edu/II_27.html might make sense, because I think analyses in terms of Poynting vectors are relativistically correct, aren't they? It is just an intuitive idea…
Cheers – JL
From: Jean Louis Van Belle Sent: 18 November 2021 21:28 Subject: Staying engaged (2)
But so – in the shorter run – say, the next three to six months, I want to sort out those papers on ResearchGate. The one on de Broglie's matter-wave (interpreting the de Broglie wavelength as the circumference of a loop rather than as a linear wavelength) is the one that gets the most downloads, and rightly so. The rest is a bit of a mess – mixing all kinds of things I tried, some of which worked, while other things did not. So I want to 'clean' that up… 😊 JL
From: Jean Louis Van Belle Sent: 18 November 2021 21:21 Subject: Staying engaged…
Please do include me in the exchanges, Andrew – even if I do not react, I do read them, because I do need some temptation and distraction. As mentioned, I wanted to focus on building a credible n = p + e model (for free neutrons, but probably more focused on a Schrödinger-like D = p + e + p Platzwechsel model, because the deuteron nucleus is stable). But I will not do that the way I studied the zbw model of the electron and the proton (I believe that is sound now) – that is, without putting in enough sleep. I want to do it slowly now. I find a lot of satisfaction in the fact that I think there is no need for complicated quantum field theories. Fields are quantized, but in a rather obvious way: field oscillations – just like matter-particles – pack Planck's quantum of (physical) action, which – depending on whether you freeze time or position as a variable – expresses itself as a discrete amount of energy or, alternatively, as a discrete amount of momentum. Nor is there any need for this 'ontologization' of virtual field interactions (sub-Planck scale) – the quark-gluon nonsense.
Also, it makes sense to distinguish between an electromagnetic and a 'strong' or 'nuclear' force: the electron and the proton have different form factors (2D versus 3D oscillations, though that is a bit of a non-relativistic shorthand for what might be the case) but, in addition, there is clearly a much stronger force at play within the proton – whose strength is of the same order as the force that gives the muon-electron its rather enormous mass. So that is my 'belief', and the 'heuristic' models I build (a bit of 'numerology', according to Dr Pohl's rather off-hand remarks) support it sufficiently to make me feel at peace with all these 'Big Questions'.
I am also happy I figured out those inconsistencies around 720-degree symmetries (they are just the result of a non-rigorous application of Occam's Razor: if you use all possible 'signs' in the wavefunction, then the wavefunction may represent matter- as well as antimatter particles, and the 720-degree weirdness dissolves). Finally, the kind of 'renewed' S-matrix programme for analyzing unstable particles (adding a transient factor to wavefunctions) makes sense to me, but even the easiest set of equations looks impossible to solve – so I may want to dig into the math of that if I feel like I have endless amounts of time and energy (which I do not – but, after this cancer surgery, I know I will only die on some 'moral' or 'mental' battlefield twenty or thirty years from now – so I am optimistic).
So, in short, the DEP question does intrigue me – and you should keep me posted, but I will only look at it to see if it can help me on that deuteron model. 😊 That is the only ‘deep electron orbital’ I actually believe in. Sorry for the latter note.
Cheers – JL
From: Andrew Sent: 16 November 2021 19:05 To: Jean-Luc; Jerry; Jean Louis Subject: Re: retarded potential?
Dear Jean-Louis,
Congratulations on your new position. I understand your present limitations, despite your incredible ability to be productive. They must be even worse than those imposed by my young kids and my age. Do you wish for us to not include you in our exchanges on our topic? Even with no expectation of your contributing at this point, such emails might be an unwanted temptation and distraction.
Dear Jean-Luc,
Thank you for the Wiki-Links. They are useful. I agree that the 4-vector potential should be considered. Since I am now considering the nuclear potentials as well as the deep orbits, it makes sense to consider the nuclear vector potentials to have an origin in the relativistic Coulomb potentials. I am facing this in my attempts to calculate the deep orbits from contributions to the potential energies that have a vector component, which non-rel Coulomb potentials do not have.
For example: do we include the losses in Vcb (e.g., from the binding energy BE) when we make the relativistic correction to the potential? Or: how do we relativistically treat pseudo-potentials such as that of the centrifugal force? We know that, for equilibrium, the average forces must cancel. However, I'm not sure that it is possible to write out a proper expression for "A" to fit such cases.
Best regards to all,
Andrew
_ _ _
On Fri, Nov 12, 2021 at 1:42 PM Jean-Luc wrote:
Dear all,
I totally agree with the sentence of Jean-Louis, which I put in bold in his message, about vector potential and scalar potential, combined into a 4-vector potential A, for representing EM field in covariant formulation. So EM representation by 4-vector A has been very developed, as wished by JL, in the framework of QED.
We can see the reality of vector potential in the Aharonov-Bohm effect: https://en.wikipedia.org/wiki/Aharonov-Bohm_effect. In fact, we can see that vector potential contains more information than E,B fields. Best regards
Jean-Luc Le 12/11/2021 à 05:43, Jean Louis Van Belle a écrit :
Hi All – I've been absent from the discussion, and will remain absent for a while. I've been juggling a lot of work – my regular job at the Ministry of Interior (I got an internal promotion/transfer, and am now working on police and security sector reform) plus consultancies on upcoming projects in Nepal. In addition, I am still recovering from my surgery – I got a bad flu (not C19, fortunately) and it set back my auto-immune system, I feel. I have a bit of a holiday break now (combining the public holidays of 11 and 15 November in Belgium with some days off in between, so I have a rather nice super-long weekend – three in one, so to speak).
As for this thread, I feel like it is not ‘phrasing’ the discussion in the right ‘language’. Thinking of E-fields and retarded potential is thinking in terms of 3D potential, separating out space and time variables without using the ‘power’ of four-vectors (four-vector potential, and four-vector space-time). It is important to remind ourselves that we are measuring fields in continuous space and time (but, again, this is relativistic space-time – so us visualizing a 3D potential at some point in space is what it is: we visualize something because our mind needs that – wants that). The fields are discrete, however: a field oscillation packs one unit of Planck – always – and Planck’s quantum of action combines energy and momentum: we should not think of energy and momentum as truly ‘separate’ (discrete) variables, just like we should not think of space and time as truly ‘separate’ (continuous) variables.
I do not quite know what I want to say here – or how I should further work it out. I am going to re-read my papers. I think I should further develop the last one (https://www.researchgate.net/publication/351097421_The_concepts_of_charge_elementary_ring_currents_potential_potential_energy_and_field_oscillations), in which I write that the vector potential is more real than the electric field. That idea should be further developed: probably it is the combined scalar and vector potential that are the 'real' things, not the electric and magnetic field. Hence, illustrations like the one below – in terms of discs and cones in space – probably do not go all that far in terms of 'understanding' what is going on… It's just an intuition…
Cheers – JL
From: Andrew Sent: 23 September 2021 17:17 To: Jean-Luc; Jerry; Jean Louis Subject: retarded potential?
Dear Jean-Luc,
Because of the claim that gluons are tubal, I have been looking at the disk-shaped E-field lines of the highly-relativistic electron and comparing them to the retarded potential, which, based on timing, would seem to give a cone rather than a disk (see figure). This makes a difference when we consider a deep-orbiting electron. It even impacts the selection of the model for the impact of an electron when considering diffraction and interference.
Even if the field appears to be spreading out as a cone, the direction of the field lines is that of a disk from the retarded source. However, how does it interact with the radial field of a stationary charge?
Do you have any thoughts on the matter?
Best regards,
Andrew
_ _ _
On Thu, Sep 23, 2021 at 5:05 AM Jean-Luc wrote:
Dear Andrew, Thank you for the references. Best regards, Jean-Luc
Le 18/09/2021 à 17:32, Andrew a écrit : This might have useful thoughts concerning the question of radiation decay to/from EDOs.

Quantum Optics: Electrons see the quantum nature of light (Ian S. Osborne). We know that light is both a wave and a particle, and this duality arises from the classical and quantum nature of electromagnetic excitations. Dahan et al. observed that all experiments to date in which light interacts with free electrons have been described with light considered as a wave (see the Perspective by Carbone). The authors present experimental evidence revealing the quantum nature of the interaction between photons and free electrons. They combine an ultrafast transmission electron microscope with a silicon-photonic nanostructure that confines and strengthens the interaction between the light and the electrons. The "quantum" statistics of the photons are imprinted onto the propagating electrons and are seen directly in their energy spectrum. Science, abj7128, this issue p. 1324; see also abl6366, p. 1309.
The meaning of life in 15 pages! 🙂 [Or… Well… At least a short description of the Universe… Not sure it helps in sense-making.] 🙂
Post scriptum (25 March 2021): Because this post is so extremely short and happy, I want to add a sad anecdote which illustrates what I have come to regard as the sorry state of physics as a science.
A few days ago, an honest researcher put me in cc of an email to a much higher-brow researcher. I won’t reveal names, but the latter – I will call him X – works at a prestigious accelerator lab in the US. The gist of the email was a question on an article of X: “I am still looking at the classical model for the deep orbits. But I have been having trouble trying to determine if the centrifugal and spin-orbit potentials have the same relativistic correction as the Coulomb potential. I have also been having trouble with the Ademko/Vysotski derivation of the Veff = V×E/mc2 – V2/2mc2 formula.”
I was greatly astonished to see X answer this: “Hello – What I know is that this term comes from the Bethe-Salpeter equation, which I am including (#1). The authors say in their book that this equation comes from the Pauli’s theory of spin. Reading from Bethe-Salpeter’s book [Quantum mechanics of one and two electron atoms]: “If we disregard all but the first three members of this equation, we obtain the ordinary Schroedinger equation. The next three terms are peculiar to the relativistic Schroedinger theory”. They say that they derived this equation from covariant Dirac equation, which I am also including (#2). They say that the last term in this equation is characteristic for the Dirac theory of spin ½ particles. I simplified the whole thing by choosing just the spin term, which is already used for hyperfine splitting of normal hydrogen lines. It is obviously approximation, but it gave me a hope to satisfy the virial theorem. Of course, now I know that using your Veff potential does that also. That is all I know.” [I added the italics/bold in the quote.]
So I see this answer while browsing through my emails on my mobile phone, and I am disgusted, thinking: Seriously? You get to publish in high-brow journals, and yet you do not understand the equations? You just drop terms and pick the ones that suit you, so as to make your theory fit what you want to find? And so I immediately reply to all, politely but firmly: “All I can say, is that I would not use equations which I do not fully understand. Dirac’s wave equation itself does not make much sense to me. I think Schroedinger’s original wave equation is relativistically correct. The 1/2 factor in it has nothing to do with the non-relativistic kinetic energy, but with the concept of effective mass and the fact that it models electron pairs (two electrons – neglect of spin). Andre Michaud referred to a variant of Schroedinger’s equation including spin factors.”
Now X replies this, also from his iPhone: “For me the argument was simple. I was desperate trying to satisfy the virial theorem after I realized that ordinary Coulomb potential will not do it. I decided to try the spin potential, which is in every undergraduate quantum mechanical book, starting with Feynman or Tippler, to explain the hyperfine hydrogen splitting. They, however, evaluate it at large radius. I said, what happens if I evaluate it at small radius. And to my surprise, I could satisfy the virial theorem. None of this will be recognized as valid until one finds the small hydrogen experimentally. That is my main aim. To use theory only as a approximate guidance. After it is found, there will be an explosion of “correct” theories.” A few hours later, he makes things even worse by adding: “I forgot to mention another motivation for the spin potential. I was hoping that a spin flip will create an equivalent to the famous “21cm line” for normal hydrogen, which can then be used to detect the small hydrogen in astrophysics. Unfortunately, flipping spin makes it unstable in all potential configurations I tried so far.”
I have never come across a more blatant case of making a theory fit whatever you want to prove (apparently, X believes Mills’ hydrinos – hypothetical small hydrogen atoms – are not a fraud), and it saddens me deeply. Of course, I do understand one will want to fiddle with and modify equations while working on something, but you do not do that in work that is going to be published by serious journals. It just goes to show how physicists effectively got lost in math, and how ‘peer reviews’ actually work: they don’t.