Certainty and uncertainty

A lot of the uncertainty in quantum mechanics is suspiciously certain. For example, we know an electron will always have its spin up or down along any direction we choose to measure it, and the value of the angular momentum will, accordingly, be measured as plus or minus ħ/2. That doesn’t sound uncertain to me. In fact, it sounds remarkably certain, doesn’t it? We know – we are sure, in fact, because of countless experiments – that the electron will be in either of those two states, and we also know that these two states are separated by exactly ħ, Planck’s quantum of action.

Of course, the corollary of this is that the idea of the direction of the angular momentum is a rather fuzzy concept. As Feynman convincingly demonstrates, it is ‘never completely along any direction’. Why? Well… Perhaps it can be explained by the idea of precession?

In fact, the idea of precession might also explain the weird 720° symmetry of the wavefunction.

Hmm… Now that is an idea to look into! 🙂


A Survivor’s Guide to Quantum Mechanics?

When modeling electromagnetic waves, the notion of left versus right circular polarization is quite clear and fully integrated into the mathematical treatment. In contrast, quantum math sticks to the very conventional idea that the imaginary unit (i) is – always! – a counter-clockwise rotation by 90 degrees. We all know that –i would do just as well as an imaginary unit as i, because the definition of the imaginary unit only requires that its square be equal to –1, and (–i)² is also equal to –1.

So we actually have two imaginary units: i and –i. However, physicists stubbornly think there is only one direction for measuring angles, and that is counter-clockwise. That’s a mathematical convention, Professor: it’s something in your head only. It is not real. Nature doesn’t care about our conventions and, therefore, I feel the spin ‘up’ versus spin ‘down’ states should correspond to the two mathematical possibilities: if the ‘up’ state is represented by some complex function, then the ‘down’ state should be represented by its complex conjugate.
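To make that tangible, here is a two-line check – just plain Python with NumPy, nothing quantum-mechanical about it – showing that both candidates square to –1 and that taking the complex conjugate of e^(i·θ) simply reverses the sense of rotation:

import numpy as np

# Both i and -i satisfy the defining property of the imaginary unit:
print(1j ** 2, (-1j) ** 2)                           # (-1+0j) (-1+0j)

# Conjugating exp(i*theta) reverses the sense of rotation: conj(e^(i*theta)) = e^(-i*theta)
theta = np.linspace(0, 2 * np.pi, 9)
z = np.exp(1j * theta)                               # counter-clockwise rotation
print(np.allclose(np.conj(z), np.exp(-1j * theta)))  # True: clockwise rotation

So the math itself is perfectly happy with either choice: the preference for counter-clockwise is ours, not Nature’s.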

This ‘additional’ rule wouldn’t change the basic quantum-mechanical rules – which are written in terms of state vectors in a Hilbert space (and, yes, a Hilbert space is as unreal as it sounds: its rules just say you should separate cats and dogs while adding them – which is very sensible advice, of course). However, it would, most probably (just my intuition – I still need to prove it), avoid these crazy 720-degree symmetries, which inspire the likes of Penrose to say there is no physical interpretation of the wavefunction.

Oh… As for the title of my post… I think it would be a great title for a book – because I’ll need some space to work it all out. 🙂

Quantum math: garbage in, garbage out?

This post is basically a continuation of my previous one but – as you can see from its title – it is much more aggressive in its language, as I was inspired by a very thoughtful comment on my previous post. Another advantage is that it avoids all of the math. 🙂 It’s… Well… I admit it: it’s just a rant. 🙂 [Those who don’t appreciate the casual style of what follows can download my paper on it – but that’s much longer and also has a lot more math in it – so it’s a much harder read than this ‘rant’.]

My previous post was actually triggered by an attempt to re-read Feynman’s Lectures on Quantum Mechanics, but in reverse order this time: from the last chapter to the first. [In case you wonder: I did follow the correct logical order when working my way through them for the first time because… Well… There is no other way to get through them. 🙂 ] But then I was looking at Chapter 20. It’s a Lecture on quantum-mechanical operators – so that’s a topic which, in other textbooks, is usually tackled earlier on. When re-reading it, I realized why people quickly turn away from the topic of physics: it’s a lot of mathematical formulas which are supposed to reflect reality but, in practice, few – if any – of the mathematical concepts are actually explained. Not in the first chapters of a textbook, not in its middle ones, and… Well… Nowhere, really. Why? Well… To be blunt: I think most physicists themselves don’t really understand what they’re talking about. In fact, as I have pointed out a couple of times already, Feynman himself admits as much:

“Atomic behavior appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to.”

So… Well… If you were in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, here you have it: if you don’t understand what physicists are trying to tell you, don’t worry about it, because they don’t really understand it themselves. 🙂

Take the example of a physical state, which is represented by a state vector, which we can combine and re-combine using the properties of an abstract Hilbert space. Frankly, I think the term is very misleading, because it doesn’t describe an actual physical state. Why? Well… If we look at this so-called physical state from another angle, then we need to transform it using a complicated set of transformation matrices. You’ll say: that’s what we need to do when going from one reference frame to another in classical mechanics as well, isn’t it?

Well… No. In classical mechanics, we’ll describe the physics using geometric vectors in three dimensions and, therefore, the basis of our reference frame doesn’t matter: because we’re using real vectors (such as the electric or magnetic field vectors E and B), our orientation vis-à-vis the object – the line of sight, so to speak – doesn’t matter.

In contrast, in quantum mechanics, it does: Schrödinger’s equation – and the wavefunction – has only two degrees of freedom, so to speak: its so-called real and its imaginary dimension. Worse, physicists refuse to give those two dimensions any geometric interpretation. Why? I don’t know. As I show in my previous posts, it would be easy enough, right? We know both dimensions must be perpendicular to each other, so we just need to decide if both of them are going to be perpendicular to our line of sight. That’s it. We’ve only got two possibilities here which – in my humble view – explain why the matter-wave is different from an electromagnetic wave.

I actually can’t quite believe the craziness when it comes to interpreting the wavefunction: we get everything we’d want to know about our particle through these operators (momentum, energy, position, and whatever else you’d need to know), but mainstream physicists still tell us that the wavefunction is, somehow, not representing anything real. It might be because of that weird 720° symmetry – which, as far as I am concerned, confirms that those state vectors are not the right approach: you can’t represent a complex, asymmetrical shape by a ‘flat’ mathematical object!

Huh? Yes. The wavefunction is a ‘flat’ concept: it has two dimensions only, unlike the real vectors physicists use to describe electromagnetic waves (which we may interpret as the wavefunction of the photon). Those have three dimensions, just like the mathematical space we project onto events. Because the wavefunction is flat (think of a rotating disk), we have those cumbersome transformation matrices: each time we shift position vis-à-vis the object we’re looking at (das Ding an sich, as Kant would call it), we need to change our description of it. And our description of it – the wavefunction – is all we have, so that’s our reality. However, because that reality changes with our line of sight, physicists keep saying the wavefunction (or das Ding an sich itself) is, somehow, not real.

Frankly, I do think physicists should take a basic philosophy course: you can’t describe what goes on in three-dimensional space if you’re going to use flat (two-dimensional) concepts, because the objects we’re trying to describe (e.g. non-symmetrical electron orbitals) aren’t flat. Let me quote one of Feynman’s famous lines on philosophers: “These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depth of the problem.” (Feynman’s Lectures, Vol. I, Chapter 16)

Now, I love Feynman’s Lectures but… Well… I’ve gone through them a couple of times now, so I do think I have an appreciation of the subtleties and depth of the problem. And I tend to agree with some of the smarter philosophers: if you’re going to use ‘flat’ mathematical objects to describe three- or four-dimensional reality, then such an approach will only get you where we are right now, and that’s a lot of mathematical mumbo-jumbo for the poor uninitiated. Consistent mumbo-jumbo, for sure, but mumbo-jumbo nevertheless. 🙂 So, yes, I do think we need to re-invent quantum math. 🙂 The description may look more complicated, but it would make more sense.

I mean… If physicists themselves have had continued discussions on the reality of the wavefunction for almost a hundred years now (Schrödinger published his equation in 1926), then… Well… Then the physicists have a problem. Not the philosophers. 🙂 As to what that new description might look like, see my papers on viXra.org. I firmly believe it can be done. This is just a hobby of mine, but… Well… That’s where my attention will go over the coming years. 🙂 Perhaps quaternions are the answer but… Well… I don’t think so either – for reasons I’ll explain later. 🙂

Post scriptum: There are many nice videos on Dirac’s belt trick or, more generally, on 720° symmetries, but this links to one I particularly like. It clearly shows that the 720° symmetry requires, in effect, a special relation between the observer and the object that is being observed. It is, effectively, like there is a leather belt between them or, in this case, we have an arm between the glass and the person who is holding the glass. So it’s not like we are walking around the object (think of the glass of water) and making a full turn around it, so as to get back to where we were. No. We are turning it around by 360°! That’s a very different thing than just looking at it, walking around it, and then looking at it again. That explains the 720° symmetry: we need to turn it around twice to get it back to its original state. So… Well… The description is more about us and what we do with the object than about the object itself. That’s why I think the quantum-mechanical description is defective.
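For those who want to see where that 720° figure sits in the standard math – as opposed to the belt trick itself – here is a minimal numerical sketch (plain Python/NumPy, using the textbook spin-1/2 rotation operator about the z-axis; it illustrates the standard formalism, not my own interpretation): a 360° rotation multiplies the state by –1, and only a 720° rotation brings it back to itself.

import numpy as np

# Spin-1/2 rotation about the z-axis: U(theta) = diag(exp(-i*theta/2), exp(+i*theta/2))
def U(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

spin_up = np.array([1.0, 0.0], dtype=complex)

print(U(2 * np.pi) @ spin_up)   # ~ [-1, 0]: minus the original state after 360 degrees
print(U(4 * np.pi) @ spin_up)   # ~ [+1, 0]: back to the original state after 720 degrees

Whether that minus sign reflects something physical or just our bookkeeping is, of course, exactly the question. 🙂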

Should we reinvent wavefunction math?

Preliminary note: This post may cause brain damage. 🙂 If you haven’t worked yourself through a good introduction to physics – including the math – you will probably not understand what this is about. So… Well… Sorry. 😦 But if you have… Then this should be very interesting. Let’s go. 🙂

If you know one or two things about quantum math – Schrödinger’s equation and all that – then you’ll agree the math is anything but straightforward. Personally, I find the most annoying thing about wavefunction math is those transformation matrices: every time we look at the same thing from a different direction, we need to transform the wavefunction using one or more rotation matrices – and that gets quite complicated!
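To make the complaint concrete: for a spin-1/2 particle, describing the same state with respect to a frame rotated by θ about the y-axis means transforming the two amplitudes with a half-angle matrix. A minimal sketch with NumPy (sign and active-versus-passive conventions vary between textbooks, so take this as illustrative only):

import numpy as np

# The standard spin-1/2 rotation matrix for an angle theta about the y-axis:
# note the half angles - this is where the 720-degree periodicity creeps in.
def R_y(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

spin_up_z = np.array([1.0, 0.0])
print(R_y(np.pi / 2) @ spin_up_z)   # the amplitudes of the same state along a rotated axis
print(R_y(2 * np.pi) @ spin_up_z)   # a full 360-degree turn flips the sign of the state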

Now, if you have read any of my posts on this or my other blog, then you know I firmly believe the wavefunction represents something real or… Well… Perhaps it’s just the next best thing to reality: we cannot know das Ding an sich, but the wavefunction gives us everything we would want to know about it (linear or angular momentum, energy, and whatever else we have an operator for). So what am I thinking of? Let me first quote Feynman’s summary interpretation of Schrödinger’s equation (Lectures, III-16-1):

“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”
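A quick way to see what Feynman means here is to plug a plane-wave ansatz e^(i·k·x) into both equations and look at the time factor it picks up: a real, decaying exponential for diffusion versus a complex, oscillating one for Schrödinger’s equation. A minimal sketch (the values of D, of a – standing in for ħ/(2·meff) – and of k are arbitrary illustration numbers):

import numpy as np

# Diffusion:    dphi/dt = D*laplacian(phi)    ->  time factor exp(-D*k^2*t)    (real, decaying)
# Schrodinger:  dpsi/dt = i*a*laplacian(psi)  ->  time factor exp(-i*a*k^2*t)  (complex, oscillating)
D, a, k = 0.5, 0.5, 2.0
t = np.linspace(0.0, 3.0, 4)

print(np.exp(-D * k**2 * t))               # the amplitude dies out
print(np.exp(-1j * a * k**2 * t))          # complex values...
print(np.abs(np.exp(-1j * a * k**2 * t)))  # ...whose magnitude stays 1: the phase just turns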

Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. His analysis there is centered on the local conservation of energy, which makes me think Schrödinger’s equation might be an energy diffusion equation. I’ve written about this ad nauseam in the past, and so I’ll just refer you to one of my papers here for the details, and limit this post to the basics, which are as follows.

The wave equation (so that’s Schrödinger’s equation in its non-relativistic form, which is an approximation that is good enough) is written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t) − i·(1/ħ)·V·ψ(x, t)

The resemblance with the standard diffusion equation (shown below) is, effectively, very obvious:

∂φ(x, t)/∂t = D·∇²φ(x, t)

As Feynman notes, it’s just that imaginary coefficient that makes the behavior quite different. How exactly? Well… You know we get all of those complicated electron orbitals (i.e. the various wavefunctions that satisfy the equation) out of Schrödinger’s differential equation. We can think of these solutions as (complex) standing waves. They basically represent some equilibrium situation, and the main characteristic of each is their energy level. I won’t dwell on this because – as mentioned above – I assume you master the math. Now, you know that – if we want to interpret these wavefunctions as something real (which is surely what we want to do!) – the real and imaginary components of a wavefunction will be perpendicular to each other. Let me copy the animation for the elementary wavefunction ψ(θ) = a·e^(i·θ) = a·e^(i·(E/ħ)·t) = a·cos[(E/ħ)·t] + i·a·sin[(E/ħ)·t] once more:

[Animation: the cosine (real) and sine (imaginary) components of the elementary wavefunction tracing out a circle]

So… Well… That 90° angle makes me think of the similarity with the mathematical description of an electromagnetic wave. Let me quickly show you why. For a particle moving in free space – with no external force fields acting on it – there is no potential (V = 0) and, therefore, the Vψ term – which is just the equivalent of the sink or source term S in the diffusion equation – disappears. Therefore, Schrödinger’s equation reduces to:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)

Now, the key difference with the diffusion equation – let me write it for you once again: ∂φ(x, t)/∂t = D·∇²φ(x, t) – is that Schrödinger’s equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations:

  1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)

Huh? Yes. These equations are easily derived from noting that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i·(1/2)·(ħ/meff)·∇²ψ equation amounts to writing something like this: a + i·b = i·(c + i·d). Now, remembering that i² = −1, you can easily figure out that i·(c + i·d) = i·c + i²·d = −d + i·c. [Now that we’re getting a bit technical, let me note that meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m.] 🙂 OK. Onwards! 🙂
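If you want to convince yourself that this split is consistent, here is a small check in plain Python/NumPy. I use the free-particle plane wave ψ = e^(i·(k·x − ω·t)) with the dispersion relation ω = ħ·k²/(2m), and I set ħ and m to 1 purely for the illustration:

import numpy as np

hbar, m, k = 1.0, 1.0, 2.0
omega = hbar * k**2 / (2 * m)          # free-particle dispersion relation
x, t = np.linspace(0, 10, 500), 0.3

psi = np.exp(1j * (k * x - omega * t))
dpsi_dt = -1j * omega * psi            # time derivative of this plane wave
lap_psi = -k**2 * psi                  # Laplacian of this plane wave

# 1. Re(dpsi/dt) = -(1/2)*(hbar/m)*Im(laplacian(psi))
print(np.allclose(dpsi_dt.real, -0.5 * (hbar / m) * lap_psi.imag))  # True
# 2. Im(dpsi/dt) = +(1/2)*(hbar/m)*Re(laplacian(psi))
print(np.allclose(dpsi_dt.imag,  0.5 * (hbar / m) * lap_psi.real))  # True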

The equations above make me think of the equations for an electromagnetic wave in free space (no charges or currents):

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = c²∇×B

Now, these equations – and, I must therefore assume, the other equations above as well – effectively describe a propagation mechanism in spacetime, as illustrated below:

[Illustration: the propagation mechanism in spacetime]

You know how it works for the electromagnetic field: it’s the interplay between circulation and flux. Indeed, circulation around some axis of rotation creates a flux in a direction perpendicular to it, and that flux causes this, and then that, and it all goes round and round and round. 🙂 Something like that. 🙂 I will let you look up how it goes, exactly. The principle is clear enough. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle.
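If you want to play with that interplay yourself, here is a bare-bones one-dimensional sketch of it: a minimal finite-difference scheme in Python/NumPy, with c set to 1 and with arbitrary grid numbers. It is only meant to show how the spatial variation of one field drives the change in time of the other, and vice versa:

import numpy as np

nx, nt = 200, 200
dx = 1.0
dt = 0.5 * dx                 # time step small enough for stability (c = 1)
E = np.zeros(nx)              # E_y, sampled on the grid points
B = np.zeros(nx - 1)          # B_z, sampled halfway between the grid points

for n in range(nt):
    B -= dt * np.diff(E) / dx         # 1D version of dB/dt = -curl(E)
    E[1:-1] -= dt * np.diff(B) / dx   # 1D version of dE/dt = c^2*curl(B), with c = 1
    E[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # a short pulse to get things going

# The pulse splits into two wave packets travelling away from the center:
print(np.argmax(E[:nx // 2]), np.argmax(E[nx // 2:]) + nx // 2)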

Now, we know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? I firmly believe they do. The obvious question then is the following: why wouldn’t we represent them as vectors, just like E and B? I mean… Representing them as vectors (I mean real vectors here – something with a magnitude and a direction in a real space – as opposed to state vectors from the Hilbert space) would show they are real, and there would be no need for cumbersome transformations when going from one representational basis to another. In fact, that’s why vector notation was invented (sort of): we don’t need to worry about the coordinate frame. It’s much easier to write physical laws in vector notation because… Well… They’re the real thing, aren’t they? 🙂

What about dimensions? Well… I am not sure. However, because we are – arguably – talking about some pointlike charge moving around in those oscillating fields, I would suspect the dimension of the real and imaginary component of the wavefunction will be the same as that of the electric and magnetic field vectors E and B. We may want to recall these:

  1. E is measured in newton per coulomb (N/C).
  2. B is measured in newton per coulomb divided by m/s, so that’s (N/C)/(m/s).

The weird dimension of B is because of the weird force law for the magnetic force. It involves a vector cross product, as shown by Lorentz’ formula:

F = qE + q(v×B)

Of course, it is only one force (one and the same physical reality), as evidenced by the fact that we can write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). [Check it, because you may not have seen this expression before. Just take a piece of paper and think about the geometry of the situation.] Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, if we can agree on a suitable convention for the direction of rotation here, we may boldly write:

B = (1/c)∙ex×E = (1/c)∙iE
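Here is a quick sanity check of that relation – plain Python/NumPy, with c set to 1 and an arbitrary field amplitude – for a plane wave travelling along x with E along y and B along z. Note how applying the ex× operation twice flips the sign, just like multiplying by i twice does:

import numpy as np

c = 1.0
e_x = np.array([1.0, 0.0, 0.0])       # direction of propagation
E = np.array([0.0, 3.0, 0.0])         # E along y (arbitrary amplitude)
B = np.array([0.0, 0.0, 3.0 / c])     # the corresponding B along z

print(np.allclose((1.0 / c) * np.cross(e_x, E), B))      # True: B = (1/c)*e_x x E
print(np.allclose(np.cross(e_x, np.cross(e_x, E)), -E))  # True: doing it twice gives -E, like i^2 = -1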

This is, in fact, what triggered my geometric interpretation of Schrödinger’s equation about a year ago now. I have had little time to work on it, but I think I am on the right track. Of course, you should note that, for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously (as shown below). So their phase is the same.

[Illustration: E and B oscillating in phase]

In contrast, the phase of the real and imaginary component of the wavefunction is not the same, as shown below.

[Illustration: the real and imaginary components of the wavefunction, out of phase]

In fact, because of the Stern-Gerlach experiment, I am actually more thinking of a motion like this:

[Illustration: the alternative motion I have in mind]

But that shouldn’t distract you. 🙂 The question here is the following: could we possibly think of a new formulation of Schrödinger’s equation – using vectors (again, real vectors – not these weird state vectors) rather than complex algebra?

I think we can, but then I wonder why the inventors of the wavefunction – Heisenberg, Born, Dirac, and Schrödinger himself, of course – never thought of that. 🙂

Hmm… I need to do some research here. 🙂

Post scriptum: You will, of course, wonder how and why the matter-wave would be different from the electromagnetic wave if my suggestion is correct that the dimension of the wavefunction components is the same as that of the field vectors. The answer is: the difference lies in the phase difference and then, most probably, in the different orientation of the angular momentum. Do we have any other possibilities? 🙂

P.S. 2: I also published this post on my new blog: https://readingeinstein.blog/. However, I thought the followers of this blog should get it first. 🙂