
# Quantum math: garbage in, garbage out?

This post is basically a continuation of my previous one but – as you can see from its title – it is much more aggressive in its language, as I was inspired by a very thoughtful comment on my previous post. Another advantage is that it avoids all of the math. 🙂 It's… Well… I admit it: it's just a rant. 🙂 [Those who wouldn't appreciate the casual style of what follows can download my paper on it – but that is much longer and has a lot more math in it, so it's a much harder read than this 'rant'.]

My previous post was actually triggered by an attempt to re-read Feynman's Lectures on Quantum Mechanics, but in reverse order this time: from the last chapter to the first. [In case you doubt: I did follow the correct logical order when working my way through them for the first time, because… Well… There is no other way to get through them. 🙂 ] But then I was looking at Chapter 20. It's a Lecture on quantum-mechanical operators – a topic which, in other textbooks, is usually tackled earlier on. When re-reading it, I realized why people quickly turn away from physics: it's a lot of mathematical formulas which are supposed to reflect reality but, in practice, few – if any – of the mathematical concepts are actually explained. Not in the first chapters of a textbook, not in its middle ones, and… Well… Nowhere, really. Why? Well… To be blunt: I think most physicists themselves don't really understand what they're talking about. In fact, as I have pointed out a couple of times already, Feynman himself admits as much:

"Atomic behavior appears peculiar and mysterious to everyone – both to the novice and to the experienced physicist. *Even the experts do not understand it the way they would like to*."

So… Well… If you'd be in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, here you have it: if you don't understand what physicists are trying to tell you, don't worry about it, because they don't really understand it themselves. 🙂

Take the example of a *physical state*, which is represented by a *state vector*, which we can combine and re-combine using the properties of an abstract *Hilbert space*. Frankly, I think the term is very misleading, because it doesn't describe an *actual* physical state. Why? Well… If we look at this so-called physical state from another angle, then we need to *transform* it using a complicated set of transformation matrices. You'll say: that's what we need to do when going from one reference frame to another in classical mechanics as well, isn't it?

Well… No. In classical mechanics, we describe the physics using geometric vectors in three dimensions and, therefore, the *base* of our reference frame doesn't matter: because we're using *real* vectors (such as the electric or magnetic field vectors **E** and **B**), our orientation *vis-à-vis* the object – the *line of sight*, so to speak – doesn't matter.

In contrast, in quantum mechanics, it does: Schrödinger's equation – and the wavefunction – has only two degrees of freedom, so to speak: its so-called real and its imaginary dimension. Worse, physicists refuse to give those two dimensions any *geometric* interpretation. Why? I don't know. As I show in my previous posts, it would be easy enough, right? We know both dimensions must be perpendicular to each other, so we just need to decide if *both* of them are going to be perpendicular to our line of sight. That's it. We've only got two possibilities here which – in my humble view – explain why the matter-wave is different from an electromagnetic wave.

I actually can't quite believe the craziness when it comes to interpreting the wavefunction: we get everything we'd want to know about our particle through these operators (momentum, energy, position, and whatever else you'd need to know), but mainstream physicists still tell us that the wavefunction is, somehow, not representing anything real. It might be because of that weird 720° symmetry – which, as far as I am concerned, confirms that those state vectors are not the right approach: you can't represent a complex, asymmetrical shape by a 'flat' mathematical object!

*Huh?* Yes. The wavefunction is a 'flat' concept: it has two dimensions only, unlike the *real* vectors physicists use to describe electromagnetic waves (which we may interpret as the wavefunction of the photon). Those have three dimensions, just like the mathematical space we project on events. Because the wavefunction is flat (think of a rotating disk), we have those cumbersome transformation matrices: each time we shift position *vis-à-vis* the object we're looking at (*das Ding an sich*, as Kant would call it), we need to change our description of it. And our description of it – the wavefunction – is all we have, so that's *our* reality. However, because that reality changes as per our line of sight, physicists keep saying the wavefunction (or *das Ding an sich* itself) is, somehow, not real.

Frankly, I do think physicists should take a basic philosophy course: you can't describe what goes on in three-dimensional space if you're going to use flat (two-dimensional) concepts, because the objects we're trying to describe (e.g. non-symmetrical electron orbitals) aren't flat. Let me quote one of Feynman's famous lines on philosophers: "These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depth of the problem." (Feynman's Lectures, Vol. I, Chapter 16)

Now, I *love* Feynman's Lectures but… Well… I've gone through them a couple of times now, so I do think I have an appreciation of the subtleties and depth of the problem. And I tend to agree with some of the smarter philosophers: if you're going to use 'flat' mathematical objects to describe three- or four-dimensional reality, then such an approach will only get you where we are right now, and that's a lot of mathematical *mumbo-jumbo* for the poor uninitiated. *Consistent* mumbo-jumbo, for sure, but mumbo-jumbo nevertheless. 🙂 So, yes, I do think we need to re-invent quantum math. 🙂 The description may look more complicated, but it would make more sense.

I mean… If physicists themselves have had continued discussions on the reality of the wavefunction for almost a hundred years now (Schrödinger published his equation in 1926), then… Well… Then the physicists have a problem. Not the philosophers. 🙂 As to what that new description might look like, see my papers on viXra.org. I firmly believe it can be done. This is just a hobby of mine, but… Well… That's where my attention will go over the coming years. 🙂 Perhaps quaternions are the answer but… Well… I don't think so – for reasons I'll explain later. 🙂

**Post scriptum**: There are many nice videos on Dirac's belt trick or, more generally, on 720° symmetries, but this links to one I particularly like. It clearly shows that the 720° symmetry requires, in effect, a special relation between the observer and the object that is being observed. It is, effectively, like there is a leather belt between them or, in this case, an arm between the glass and the person who is holding the glass. So it's not like we are walking around the object (think of the glass of water) and making a full turn around it, so as to get back to where we were. No. *We are turning it around by 360°!* That's a very different thing than just looking at it, walking around it, and then looking at it again. That explains the 720° symmetry: we need to turn it around twice to get it back to its original state. So… Well… The description is more about us and what we do with the object than about the object itself. That's why I think the quantum-mechanical description is defective.

# Should we reinvent wavefunction math?

**Preliminary note**: This post may cause brain damage. 🙂 If you haven't worked yourself through a good introduction to physics – including the math – you will probably not understand what this is about. So… Well… Sorry. 😦 But if you *have*… Then this should be *very* interesting. Let's go. 🙂

If you know one or two things about quantum math – Schrödinger's equation and all that – then you'll agree the math is anything but straightforward. Personally, I find the most annoying thing about wavefunction math to be those transformation matrices: every time we look at the same thing from a different direction, we need to transform the wavefunction using one or more rotation matrices – and that gets quite complicated!

Now, if you have read any of my posts on this or my other blog, then you know I firmly believe the wavefunction represents something *real* or… Well… Perhaps it's just the next best thing to reality: we cannot know *das Ding an sich*, but the wavefunction gives us everything we would want to know about it (linear or angular momentum, energy, and whatever else we have an *operator* for). So what am I thinking of? Let me first quote Feynman's summary interpretation of Schrödinger's equation (*Lectures*, III-16-1):

"We can think of Schrödinger's equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger's equation are complex waves."

Feynman further formalizes this in his *Lecture on Superconductivity* (Feynman, III-21-2), in which he refers to Schrödinger's equation as the "equation for continuity of probabilities". His analysis there is centered on the *local* conservation of energy, which makes *me* think Schrödinger's equation might be an energy diffusion equation. I've written about this *ad nauseam* in the past, and so I'll just refer you to one of my papers here for the details, and limit this post to the basics, which are as follows.

The wave equation (so that's Schrödinger's equation in its non-relativistic form, which is an approximation that is good enough) is written as:

∂ψ(**x**, t)/∂t = *i*·(1/2)·(ħ/m_{eff})·∇^{2}ψ(**x**, t) − *i*·(V/ħ)·ψ(**x**, t)

The resemblance with the standard diffusion equation is, effectively, very obvious:

∂φ(**x**, t)/∂t = D·∇^{2}φ(**x**, t)

As Feynman notes, it's just that imaginary coefficient that makes the behavior quite different. *How* exactly? Well… You know we get all of those complicated electron orbitals (i.e. the various wave *functions* that satisfy the equation) out of Schrödinger's differential equation. We can think of these solutions as (complex) *standing waves*. They basically represent some *equilibrium* situation, and the main characteristic of each is their *energy level*. I won't dwell on this because – as mentioned above – I assume you master the math. Now, you know that – if we would want to interpret these wavefunctions as something real (which is surely what *I* want to do!) – the real and imaginary component of a wavefunction will be perpendicular to each other. Consider the *elementary* wavefunction ψ(θ) = *a*·*e*^{−i·θ} = *a*·*e*^{−i·(E/ħ)·t} = *a*·cos[(E/ħ)·t] − *i*·*a*·sin[(E/ħ)·t] once more.
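For those who want to see the two components side by side, here is a minimal numerical sketch of that elementary wavefunction (the amplitude *a* = 1 and angular frequency E/ħ = 2π are illustrative values of my own, not from the post):

```python
import numpy as np

# Elementary wavefunction psi(t) = a * exp(-i * (E/hbar) * t).
# a = 1 and omega = E/hbar = 2*pi are illustrative values only.
a, omega = 1.0, 2.0 * np.pi

t = np.linspace(0.0, 1.0, 101)        # one full cycle
psi = a * np.exp(-1j * omega * t)

# Real part: a*cos(omega*t); imaginary part: -a*sin(omega*t),
# so the two components stay 90 degrees out of phase at every instant.
assert np.allclose(psi.real, a * np.cos(omega * t))
assert np.allclose(psi.imag, -a * np.sin(omega * t))

# The modulus is constant: the 'rotating disk' picture.
assert np.allclose(np.abs(psi), a)
```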

So… Well… That 90° angle makes me think of the similarity with the mathematical description of an electromagnetic wave. Let me quickly show you why. For a particle moving in free space – with no external force fields acting on it – there is no potential (V = 0) and, therefore, the Vψ term – which is just the equivalent of the *sink* or *source* term S in the diffusion equation – disappears. Therefore, Schrödinger's equation reduces to:

∂ψ(**x**, t)/∂t = *i*·(1/2)·(ħ/m_{eff})·∇^{2}ψ(**x**, t)

Now, the key difference with the diffusion equation – let me write it for you once again: ∂φ(**x**, t)/∂t = D·∇^{2}φ(**x**, t) – is that Schrödinger's equation gives us *two* equations for the price of one. Indeed, because ψ is a complex-valued function, with a *real* and an *imaginary* part, we get the following equations:

- *Re*(∂ψ/∂t) = −(1/2)·(ħ/m_{eff})·*Im*(∇^{2}ψ)
- *Im*(∂ψ/∂t) = (1/2)·(ħ/m_{eff})·*Re*(∇^{2}ψ)

*Huh?* Yes. These equations are easily derived from noting that two complex numbers a + *i*·b and c + *i*·d are equal if, and *only* if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = *i*·(1/2)·(ħ/m_{eff})·∇^{2}ψ equation amounts to writing something like this: a + *i*·b = *i*·(c + *i*·d). Now, remembering that *i*^{2} = −1, you can easily figure out that *i*·(c + *i*·d) = *i*·c + *i*^{2}·d = −d + *i*·c. [Now that we're getting a bit technical, let me note that m_{eff} is the *effective* mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write m_{eff} = m.] 🙂 OK. *Onwards!* 🙂
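A quick numerical check of that split, using a free-particle plane wave in assumed natural units (ħ = m = 1 is my choice, purely for illustration; the derivatives are written out analytically):

```python
import numpy as np

# Check that d(psi)/dt = i * kfac * laplacian(psi) splits into the two real
# equations above, for a plane wave psi = exp(i*(p*x - E*t)) with hbar = m = 1.
hbar, m = 1.0, 1.0
kfac = hbar / (2.0 * m)          # the (1/2)*(hbar/m_eff) coefficient
p = 3.0
E = p**2 / (2.0 * m)             # free-particle dispersion relation

x = np.linspace(0.0, 2.0 * np.pi, 7)
t = 0.4
psi = np.exp(1j * (p * x - E * t))

dpsi_dt = -1j * E * psi          # exact time derivative of the plane wave
lap_psi = -(p**2) * psi          # exact second spatial derivative

# Re(dpsi/dt) = -kfac * Im(lap psi)  and  Im(dpsi/dt) = kfac * Re(lap psi)
assert np.allclose(dpsi_dt.real, -kfac * lap_psi.imag)
assert np.allclose(dpsi_dt.imag, kfac * lap_psi.real)
```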

The equations above make me think of the equations for an electromagnetic wave in free space (no charges or currents):

- ∂**B**/∂t = −∇×**E**
- ∂**E**/∂t = *c*^{2}·∇×**B**
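These two equations can be checked on a simple transverse plane wave. The wave number, frequency and field shapes below are my own illustrative choices, not anything from the post:

```python
import numpy as np

# Plane wave along x: E = (0, cos(k*x - w*t), 0), B = (0, 0, cos(k*x - w*t)/c),
# with w = c*k. All numbers are illustrative.
c, k = 2.0, 1.5
w = c * k
x = np.linspace(0.0, 4.0, 9)
t = 0.7
phase = k * x - w * t

# The only non-trivial components of the two curl equations:
dB_z_dt = (w / c) * np.sin(phase)    # d/dt of cos(phase)/c
curl_E_z = -k * np.sin(phase)        # (curl E)_z = dE_y/dx
dE_y_dt = w * np.sin(phase)          # d/dt of cos(phase)
curl_B_y = (k / c) * np.sin(phase)   # (curl B)_y = -dB_z/dx

assert np.allclose(dB_z_dt, -curl_E_z)           # dB/dt = -curl(E)
assert np.allclose(dE_y_dt, c**2 * curl_B_y)     # dE/dt = c^2 * curl(B)
```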

Now, these equations – and, I must therefore assume, the other equations above as well – effectively describe a *propagation* mechanism in spacetime.

You know how it works for the electromagnetic field: it's the interplay between circulation and flux. Indeed, circulation around some axis of rotation creates a flux in a direction perpendicular to it, and that flux causes this, and then that, and it all goes round and round and round. 🙂 Something like that. 🙂 I will let you look up how it goes, *exactly*. The principle is clear enough. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle.

Now, we know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent *half* of the total energy of our particle? I firmly believe they do. The obvious question then is the following: why wouldn't we represent them as *vectors*, just like **E** and **B**? I mean… Representing them as vectors (I mean *real* vectors here – something with a magnitude and a direction in a *real* space – as opposed to *state* vectors from the Hilbert space) would *show* they are real, and there would be no need for cumbersome transformations when going from one representational *base* to another. In fact, that's why vector notation was invented (sort of): we don't need to worry about the coordinate frame. It's much easier to write physical laws in vector notation because… Well… They're the *real* thing, aren't they? 🙂

What about dimensions? Well… I am not sure. However, because we are – arguably – talking about some pointlike charge moving around in those oscillating fields, I would suspect the dimension of the real and imaginary component of the wavefunction will be the same as that of the electric and magnetic field vectors **E** and **B**. We may want to recall these:

- **E** is measured in *newton per coulomb* (N/C).
- **B** is measured in newton per coulomb divided by m/s, so that's (N/C)/(m/s).

The weird dimension of **B** is because of the weird force law for the magnetic force. It involves a vector cross product, as shown by the Lorentz formula:

**F** = q·**E** + q·(**v**×**B**)

Of course, it is only *one* force (one and the same physical reality), as evidenced by the fact that we can write **B** as the following vector cross-product: **B** = (1/*c*)·**e**_{x}×**E**, with **e**_{x} the unit vector pointing in the *x*-direction (i.e. the direction of propagation of the wave). [Check it, because you may not have seen this expression before. Just take a piece of paper and think about the geometry of the situation.] Hence, we may associate the (1/*c*)·**e**_{x}× *operator*, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by *i* also amounts to a rotation by 90°. Hence, if we can agree on a suitable convention for the *direction* of rotation here, we may boldly write:

**B** = (1/*c*)·**e**_{x}×**E** = (1/*c*)·*i*·**E**
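A small numerical sketch of that claim: for a transverse field, the **e**_{x}× operation rotates the (y, z) components by 90°, exactly like multiplying the complex number E_y + *i*·E_z by *i* (the field values are arbitrary illustrations):

```python
import numpy as np

# For a transverse field E = (0, Ey, Ez), the cross product e_x x E rotates
# the (Ey, Ez) pair by 90 degrees -- the same action as multiplying the
# complex number Ey + i*Ez by i. Field values are arbitrary illustrations.
e_x = np.array([1.0, 0.0, 0.0])
E = np.array([0.0, 3.0, 4.0])

rotated = np.cross(e_x, E)            # -> (0, -Ez, Ey)
assert np.allclose(rotated, [0.0, -4.0, 3.0])

# The same rotation in complex-number form:
z = complex(E[1], E[2])               # Ey + i*Ez
assert 1j * z == complex(rotated[1], rotated[2])
```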

This is, in fact, what triggered my geometric interpretation of Schrödinger's equation about a year ago now. I have had little time to work on it, but I think I am on the right track. Of course, you should note that, for an electromagnetic wave, the magnitudes of **E** and **B** reach their maximum, minimum and zero point *simultaneously*. So their *phase* is the same.

In contrast, the phase of the real and imaginary component of the wavefunction is not the same: they are 90° out of phase.

In fact, because of the Stern-Gerlach experiment, I am actually thinking of a somewhat more complicated motion. But that shouldn't distract you. 🙂 The question here is the following: could we possibly think of a new formulation of Schrödinger's equation – using *vectors* (again, *real* vectors – not these weird *state* vectors) rather than complex algebra?

I think we can, but then I wonder why the *inventors* of the wavefunction – Heisenberg, Born, Dirac, and Schrödinger himself, of course – never thought of that. 🙂

Hmm… I need to do some research here. 🙂

**Post scriptum**: You will, of course, wonder how and why the matter-wave would be different from the electromagnetic wave if my suggestion that the dimension of the wavefunction components is the same is correct. The answer is: the difference lies in the phase difference and then, most probably, the different orientation of the angular momentum. Do we have any other possibilities? 🙂

P.S. 2: I also published this post on my new blog: https://readingeinstein.blog/. However, I thought the followers of this blog should get it first. 🙂

# Some thoughts on the nature of reality

A comment on an article on my other blog inspired me to structure some thoughts that are spread over various blog posts. What follows below is probably the first draft of an article or a paper I plan to write. Or, who knows, I might re-write my two introductory books on quantum physics and publish a new edition soon. 🙂

## Physical dimensions and Uncertainty

The physical dimension of the quantum of action (*h* or *ħ* = *h*/2π) is force (expressed in *newton*) times distance (expressed in *meter*) times time (expressed in *seconds*): N·m·s. Now, you may think this N·m·s dimension is kinda hard to *imagine*. We can imagine its individual components, right? Force, distance and time. We know what they are. But the product of all three? What is it, *really*?

It shouldn't be all that hard to *imagine* what it might be, right? The N·m·s unit is also the unit in which angular momentum is expressed – and you can sort of imagine what that is, right? Think of a spinning top, or a gyroscope. We may also think of the following:

- [*h*] = N·m·s = (N·m)·s = [E]·[t]
- [*h*] = N·m·s = (N·s)·m = [p]·[x]

Hence, the physical dimension of action is that of *energy* (E) multiplied by *time* (t) or, alternatively, that of *momentum* (p) times *distance* (x). To be precise, the second dimensional equation should be written as [*h*] = [**p**]·[**x**], because both the momentum and the distance traveled will be associated with some *direction*. It's a moot point for the discussion at the moment, though. Let's think about the first equation first: [*h*] = [E]·[t]. What does it mean?

Energy… Hmm… In real life, we are usually not interested in the energy of a system as such, but in the energy it can *deliver*, or *absorb*, **per second**. This is referred to as the *power* of a system, and it's expressed in J/s, or *watt*. Power is also defined as the (time) *rate* at which *work* is done. Hmm… But so here we're *multiplying* energy and time. So what's that? After Hiroshima and Nagasaki, we can sort of imagine the energy of an atomic bomb. We can also sort of imagine the *power* that's being released by the Sun in light and other forms of *radiation*, which is about 385×10^{24} *joule* per *second*. But energy times time? What's that?

I am not sure. If we think of the Sun as a huge reservoir of energy, then the physical dimension of action is just like having that reservoir of energy guaranteed for some time, *regardless of how fast or how slow we use it*. So, in short, **it's just like the Sun – or the Earth, or the Moon, or whatever object – just being there, for some definite amount of time**. So, yes: some *definite* amount of mass or energy (E) for some *definite* amount of time (t).

Let's bring the mass-energy equivalence formula in here: E = m·*c*^{2}. Hence, the physical dimension of action can also be written as [*h*] = [E]·[t] = [m·*c*^{2}]·[t] = (kg·m^{2}/s^{2})·s = kg·m^{2}/s. What does that say? Not all that much – for the time being, at least. We can get this [*h*] = kg·m^{2}/s through some other substitution as well. A force of one newton will give a mass of 1 kg an acceleration of 1 m/s per second. Therefore, 1 N = 1 kg·m/s^{2} and, hence, the physical dimension of *h*, or the unit of angular momentum, may also be written as 1 N·m·s = 1 (kg·m/s^{2})·m·s = 1 kg·m^{2}/s, i.e. the product of mass, velocity and distance.
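The dimensional bookkeeping above is mechanical enough to check in a few lines of code, tracking each quantity as a (kg, m, s) exponent tuple (a sketch of my own, not any standard library functionality):

```python
# Track SI dimensions as (kg, m, s) exponent tuples and verify that
# N*m*s = kg*m^2/s, [E]*[t] = [h] and [p]*[x] = [h]. Sketch only.
def mul(a, b):
    """Multiply two dimensioned quantities by adding exponents."""
    return tuple(x + y for x, y in zip(a, b))

KG, M, S = (1, 0, 0), (0, 1, 0), (0, 0, 1)
PER_S = (0, 0, -1)

newton = mul(KG, mul(M, mul(PER_S, PER_S)))   # N = kg*m/s^2
action = mul(newton, mul(M, S))               # N*m*s

assert action == (1, 2, -1)                   # kg*m^2/s: angular momentum

joule = mul(newton, M)                        # energy = force * distance
momentum = mul(newton, S)                     # momentum = force * time
assert mul(joule, S) == action                # [E]*[t] = [h]
assert mul(momentum, M) == action             # [p]*[x] = [h]
```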

Hmm… What can we do with that? Nothing much for the moment: our first reading of it is just that it reminds us of the definition of angular momentum – some *mass* with some *velocity* rotating around an axis. What about the distance? Oh… The *distance* here is just the distance from the axis, right? Right. But… Well… It's like having some amount of linear momentum available over some distance – or in some *space*, right? That's sufficiently significant as an interpretation for the moment, I'd think…

## Fundamental units

This makes one think about what units would be fundamental – and what units we'd consider as being derived. Formally, the *newton* is a *derived* unit in the metric system, as opposed to the units of mass, length and time (kg, m, s). Nevertheless, I personally like to think of force as being *fundamental*: a force is what causes an object to deviate from its straight trajectory in spacetime. Hence, we may want to think of the quantum of action as representing *three* fundamental physical dimensions: (1) *force*, (2) *time* and (3) distance – or *space*. We may then look at energy and (linear) momentum as physical quantities combining (1) force and distance and (2) force and time respectively.

Let me write this out:

- Force times length (think of a force that is *acting* on some object over some distance) is energy: 1 *joule* (J) = 1 *newton·meter* (N·m). Hence, we may think of the concept of energy as a *projection* of action in space only: we make abstraction of time. The physical dimension of the quantum of action should then be written as [*h*] = [E]·[t]. [Note that the square brackets tell us we are looking at a *dimensional* equation only, so [t] is just the physical dimension of the time variable. It's a bit confusing because I also use square brackets as parentheses.]
- Conversely, the magnitude of linear momentum (p = m·*v*) is expressed in *newton·seconds*: 1 kg·m/s = 1 (kg·m/s^{2})·s = 1 N·s. Hence, we may think of (linear) momentum as a projection of action in time only: we make abstraction of its spatial dimension. Think of a force that is acting on some object *during some time*. The physical dimension of the quantum of action should then be written as [*h*] = [p]·[x].

Of course, a force that is acting on some object during some time, will usually also act on the same object over some distance but… Well… Just *try*, for once, to make abstraction of one of the two dimensions here: timeÂ *orÂ *distance.

It is a difficult thing to do because, when everything is said and done, we don't live in space or in time alone, but in *spacetime* and, hence, such abstractions are not easy. [Of course, now you'll say that it's easy to think of something that moves in time only: an object that is standing still does just that – but then we know movement is relative, so there is no such thing as an object that is standing still in space *in an absolute sense*. Hence, objects never stand still in *spacetime*.] In any case, we should try such abstractions, if only because the principle of least action is so essential and deep in physics:

- In classical physics, the path of some object in a force field will *minimize* the total action (which is usually written as S) along that path.
- In quantum mechanics, the same action integral will give us various values S – each corresponding to a particular path – and each path (and, therefore, each value of S, really) will be associated with a probability amplitude that will be proportional to some constant times *e*^{−i·θ} = *e*^{i·(S/ħ)}. Because *ħ* is so tiny, even a small change in S will give a completely different phase angle θ. Therefore, most amplitudes will cancel each other out as we take the sum of the amplitudes over all possible paths: only the paths that *nearly* give the same phase matter. In practice, these are the paths that are associated with a variation in S of an order of magnitude that is equal to *ħ*.
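The phase-cancellation argument in the second bullet can be illustrated with a toy sum over 'paths' whose action varies quadratically around a stationary value (the units with ħ = 1 and the quadratic spread are my own assumptions, just for illustration):

```python
import numpy as np

# Sum exp(i*S/hbar) over a family of hypothetical paths whose action S
# varies quadratically around a stationary value S0. Units with hbar = 1.
hbar = 1.0
deviation = np.linspace(-20.0, 20.0, 4001)   # path-deformation parameter
S = 100.0 + deviation**2                     # stationary action S0 = 100

amplitudes = np.exp(1j * S / hbar)
total = amplitudes.sum()

# Most amplitudes cancel: the coherent sum is far smaller than the path count.
assert abs(total) < 0.1 * len(deviation)

# The paths with S within ~hbar of S0 contribute essentially everything.
near = np.abs(deviation) < 1.0
assert abs(amplitudes[near].sum()) > 0.5 * abs(total)
```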

The paragraph above summarizes, in essence, Feynman's path integral formulation of quantum mechanics. We may, therefore, think of the quantum of action *expressing* itself (1) in time only, (2) in space only, or – much more likely – (3) expressing itself in both dimensions at the same time. Hence, if the quantum of action gives us the *order of magnitude* of the uncertainty – think of writing something like S ± *ħ* – we may re-write our dimensional [*ħ*] = [E]·[t] and [*ħ*] = [p]·[x] equations as the uncertainty equations:

- ΔE·Δt = *ħ*
- Δp·Δx = *ħ*

You should note here that it is best to think of the uncertainty relations as a *pair* of equations, if only because you should also think of the concept of energy and momentum as representing different *aspects* of the same reality, as evidenced by the (relativistic) energy-momentum relation (E^{2} = p^{2}·*c*^{2} + *m*_{0}^{2}·*c*^{4}). Also, the actual path – or, to be more precise, what we might associate with the concept of the actual path – is likely to be some mix of Δx and Δt. If Δt is very small, then Δx will be very large. In order to move over such distance, our particle will require a larger energy, so ΔE will be large. Likewise, if Δt is very large, then Δx will be very small and, therefore, ΔE will be very small. You can also reason in terms of Δx, and talk about momentum rather than energy. You will arrive at the same conclusions: the ΔE·Δt = *ħ* and Δp·Δx = *ħ* relations represent two aspects of the same reality – or, at the very least, what we might *think of* as reality.

Also think of the following: if ΔE·Δt = *ħ* and Δp·Δx = *ħ*, then ΔE·Δt = Δp·Δx and, therefore, ΔE/Δp must be equal to Δx/Δt. Hence, the *ratio* of the uncertainty about x (the distance) and the uncertainty about t (the time) equals the *ratio* of the uncertainty about E (the energy) and the uncertainty about p (the momentum).
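That ratio argument is easy to sanity-check numerically. The two uncertainties below are hypothetical values I picked for illustration:

```python
# Sanity check of dE/dp = dx/dt, using the reduced Planck constant
# hbar ~ 1.0545718e-34 J*s and hypothetical uncertainties dt and dx.
hbar = 1.0545718e-34

dt = 1.0e-20          # hypothetical time uncertainty (s)
dx = 1.0e-10          # hypothetical position uncertainty (m)
dE = hbar / dt        # from dE * dt = hbar
dp = hbar / dx        # from dp * dx = hbar

# dE*dt = dp*dx implies dE/dp = dx/dt:
assert abs(dE / dp - dx / dt) / (dx / dt) < 1e-12
```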

Of course, you will note that the *actual* uncertainty relations have a factor 1/2 in them. This may be explained by thinking of both negative as well as positive variations in space and in time.

We will obviously want to do some more thinking about those physical dimensions. **The idea of a force implies the idea of some object – of some mass on which the force is acting**. Hence, let's think about the concept of mass now. But… Well… Mass and energy are supposed to be equivalent, right? So let's look at the concept of energy *too*.

## Action, energy and mass

What *is* energy, really? In real life, we are usually not interested in the energy of a system as such, but in the energy it can *deliver*, or *absorb*, per second. This is referred to as the *power* of a system, and it's expressed in J/s. However, in physics, we always talk energy – not power – so… Well… What *is* the energy of a system?

According to de Broglie and Einstein – and so many other eminent physicists, of course – we should not only think of the *kinetic* energy of its parts, but also of their *potential* energy, and their *rest* energy, and – for an atomic system – we may add some internal energy, which may be binding energy, or excitation energy (think of a hydrogen atom in an excited state, for example). A lot of stuff. 🙂 But, obviously, Einstein's mass-energy equivalence formula comes to mind here, and summarizes it all:

E = m·*c*^{2}

The m in this formula refers to mass – not to meter, obviously. Stupid remark, of course… But… Well… What is energy, *really*? What is mass, *really*? **What's that *equivalence* between mass and energy, *really*?**

I don't have the definite answer to that question (otherwise I'd be famous), but… Well… I do think physicists and mathematicians should invest more in exploring some basic intuitions here. As I explained in several posts, it is very tempting to think of energy as some kind of two-dimensional oscillation of mass. A force over some distance will cause a mass to accelerate. This is reflected in the dimensional analysis:

[E] = [m]·[*c*^{2}] = 1 kg·m^{2}/s^{2} = 1 (kg·m/s^{2})·m = 1 N·m

The kg and m/s^{2} factors make this abundantly clear: m/s^{2} is the physical dimension of acceleration: (the change in) velocity per time unit.

Other formulas now come to mind, such as the Planck-Einstein relation: E = h·*f* = ω·ħ. We could also write: E = h/T. Needless to say, T = 1/*f* is the *period* of the oscillation. So we could say, for example, that the energy of some particle times the period of the oscillation gives us Planck's constant again. What does that mean? Perhaps it's easier to think of it the other way around: E/*f* = h = 6.626070040(81)×10^{−34} J·s. Now, *f* is the number of oscillations *per second*. Let's write it as *f* = *n*/s, so we get:

E/*f* = E/(*n*/s) = E·s/*n* = 6.626070040(81)×10^{−34} J·s ⇔ E/*n* = 6.626070040(81)×10^{−34} J

What an amazing result! Our wavicle – be it a photon or a matter-particle – will *always* pack 6.626070040(81)×10^{−34} *joule* in *one* oscillation, so that's the *numerical* value of Planck's constant which, of course, depends on our *fundamental* units (i.e. kg, meter, second, etcetera in the SI system).

Of course, the obvious question is: what’s *oneÂ *oscillation? If it’s a wave packet, the oscillations may not have the same amplitude, and we may also not be able to define an exact period. In fact, we should *expect* the amplitude and duration of each oscillation to be slightly different, shouldn’t we? And then…

Well… What's an oscillation? We're used to *counting* them: *n* oscillations per second, so that's *per time unit*. How many do we have *in total*? We wrote about that in our posts on the shape and size of a photon. We know photons are emitted by atomic oscillators – or, to put it simply, just atoms going from one energy level to another. Feynman calculated the Q of these atomic oscillators: it's of the order of 10^{8} (see his *Lectures*, I-33-3: it's a wonderfully simple exercise, and one that really shows his greatness as a physics teacher), so… Well… This wave train will last about 10^{−8} seconds (that's the time it takes for the radiation to die out by a factor 1/*e*). To give a somewhat more precise example: for sodium light, which has a frequency of 500 THz (500×10^{12} oscillations per second) and a wavelength of 600 nm (600×10^{−9} meter), the radiation will last about 3.2×10^{−8} seconds. [In fact, that's the time it takes for the radiation's *energy* to die out by a factor 1/*e* (i.e. the so-called decay time τ), so the wave train will actually last *longer*, but the amplitude becomes quite small after that time.] So… Well… That's a very short time but… Still, taking into account the rather spectacular frequency (500 THz) of sodium light, that makes for some 16 million oscillations and, taking into account the rather spectacular speed of light (3×10^{8} m/s), that makes for a wave train with a length of, roughly, 9.6 meter. *Huh? 9.6 meter!? But a photon is supposed to be pointlike, isn't it? It has no length, does it?*
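The sodium-light numbers quoted above are easy to reproduce:

```python
# Reproduce the sodium-light numbers from the paragraph above: ~16 million
# oscillations and a ~9.6 m wave train, from f = 500 THz and tau = 3.2e-8 s.
f = 500e12            # sodium light frequency (Hz)
tau = 3.2e-8          # quoted decay time (s)
c = 3e8               # speed of light (m/s)

oscillations = f * tau   # number of oscillations in the wave train
length = c * tau         # spatial length of the wave train (m)

assert round(oscillations / 1e6) == 16      # ~16 million oscillations
assert abs(length - 9.6) < 1e-9             # ~9.6 meter wave train
```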

That's where relativity helps us out: as I wrote in one of my posts, relativistic length contraction may explain the apparent paradox. *Using the reference frame of the photon* – so if we'd be traveling at speed *c*, “riding” with the photon, so to say, as it's being emitted – then we'd “see” the electromagnetic transient as it's being radiated into space.

However, while we can associate some mass *with the energy of the photon*, none of what I wrote above explains what the (rest) mass of a matter-particle could possibly be. There is no real answer to that, I guess. You'll think of the Higgs field now but… Then… Well… The Higgs field is a scalar field. Very simple: some *number* that's associated with some position in spacetime. That doesn't explain very much, does it? 🙁 When everything is said and done, the scientists who, only in 2013, got the Nobel Prize for their theory on the Higgs mechanism, simply tell us mass is some number. That's something we knew already, right? 🙂

## The reality of the wavefunction

The wavefunction is, obviously, a mathematical construct: a *description* of reality using a very specific language. What language? Mathematics, of course! Math may not be universal (aliens might not be able to decipher our mathematical models) but it's pretty good as a *global* tool of communication, at least.

The *real* question is: is the description *accurate*? Does it match reality and, if it does, how *good* is the match? For example, the wavefunction for an electron in a hydrogen atom looks as follows:

ψ(**r**, *t*) = *e*^{−i·(E/ħ)·t}·*f*(**r**)

As I explained in previous posts (see, for example, my recent post on reality and perception), the *f*(**r**) function basically provides some envelope for the two-dimensional *e*^{−i·θ} = *e*^{−i·(E/ħ)·t} = *cos*θ − *i*·*sin*θ oscillation, with **r** = (*x*, *y*, *z*), θ = (E/ħ)·*t* = ω·*t* and ω = E/ħ. So it presumes the duration of each oscillation is some constant. Why? Well… Look at the formula: this thing has a constant frequency in time. It's only the amplitude that is varying as a function of the **r** = (*x*, *y*, *z*) coordinates. 🙂 So… Well… If each oscillation is to *always* pack 6.626070040(81)×10^{−34} *joule*, but the amplitude of the oscillation varies from point to point, then… Well… We've got a problem. The wavefunction above is likely to be an approximation of reality only. 🙂 The associated energy is the same, but… Well… Reality is probably *not* the nice geometrical shape we associate with those wavefunctions.
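To see what the formula does and doesn't vary, we can just evaluate ψ(**r**, *t*) = *e*^{−i·(E/ħ)·t}·*f*(**r**) numerically. The Gaussian envelope below is a made-up stand-in for the actual hydrogen *f*(**r**) – the point is only that the modulus of ψ depends on position alone, while time contributes a pure phase:

```python
import cmath
import math

HBAR = 1.054571817e-34  # reduced Planck constant, in J·s

def psi(r, t, E, envelope):
    """psi(r, t) = e^{-i·(E/ħ)·t} · f(r): the frequency ω = E/ħ is
    constant in time; only the envelope varies with position."""
    return cmath.exp(-1j * (E / HBAR) * t) * envelope(r)

def f(r):
    # Made-up Gaussian envelope, standing in for the real hydrogen f(r).
    return math.exp(-r * r)

E = 2.18e-18  # roughly the hydrogen ground-state energy scale, in joule

# |psi| at a fixed point is the same at any time: time is a pure phase...
vals = [abs(psi(0.5, t, E, f)) for t in (0.0, 1e-16, 3e-16)]
print(max(vals) - min(vals))  # ~0: the modulus does not change in time
# ...while it does change from point to point, through the envelope:
print(abs(psi(0.5, 0.0, E, f)), abs(psi(2.0, 0.0, E, f)))
```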

In addition, we should think of the Uncertainty Principle: there *must* be some uncertainty in the energy of the photons when our hydrogen atom makes a transition from one energy level to another. But then… Well… If our photon packs something like 16 million oscillations, and the order of magnitude of the uncertainty is only of the order of *h* (or *ħ* = *h*/2π) which, as mentioned above, is the (average) energy of *one* oscillation only, then we don't have much of a problem here, do we? 🙂
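For what it's worth, that ratio is trivial to compute: one oscillation's worth of energy out of some 16 million oscillations is a relative uncertainty of less than one part in ten million:

```python
# If the energy uncertainty is of the order of the energy of one
# oscillation, while the photon packs ~16 million oscillations, the
# relative uncertainty is tiny indeed.
n_oscillations = 1.6e7
relative_uncertainty = 1 / n_oscillations
print(relative_uncertainty)  # 6.25e-08
```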

**Post scriptum**: In previous posts, we offered some analogies – or metaphors – to a two-dimensional oscillation (remember the V-2 engine?). Perhaps it's all relatively simple. If we have some tiny little ball of mass – and its center of mass has to stay where it is – then any rotation – around any axis – will be some combination of a rotation around *our* x- and z-axis – as shown below. Two axes only. So we may want to think of a two-dimensional oscillation as an oscillation of the polar and azimuthal angle. 🙂
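The claim that two axes suffice can be checked with ordinary rotation matrices – this is just the standard Euler-angle idea, sketched numerically: a rotation about the y-axis, for instance, can be composed entirely from rotations about the x- and z-axes.

```python
import math

def rot_x(t):
    # Rotation by angle t about the x-axis.
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):
    # Rotation by angle t about the z-axis.
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# A y-axis rotation built from x- and z-rotations only:
#   R_y(θ) = R_z(π/2) · R_x(θ) · R_z(−π/2)
theta = 0.7
composed = matmul(rot_z(math.pi / 2),
                  matmul(rot_x(theta), rot_z(-math.pi / 2)))

# The direct y-axis rotation matrix, for comparison:
c, s = math.cos(theta), math.sin(theta)
r_y = [[c, 0, s], [0, 1, 0], [-s, 0, c]]
```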