This post is basically a continuation of my previous one but – as you can see from its title – it is much more aggressive in its language, as I was inspired by a very thoughtful comment on my previous post. Another advantage is that it avoids all of the math. 🙂 It’s… Well… I admit it: it’s just a rant. 🙂 [Those who wouldn’t appreciate the casual style of what follows, can download my paper on it – but that’s much longer and also has a lot more math in it – so it’s a much harder read than this ‘rant’.]
My previous post was actually triggered by an attempt to re-read Feynman’s Lectures on Quantum Mechanics, but in reverse order this time: from the last chapter to the first. [In case you doubt, I did follow the correct logical order when working my way through them for the first time because… Well… There is no other way to get through them otherwise. 🙂 ] But then I was looking at Chapter 20. It’s a Lecture on quantum-mechanical operators – so that’s a topic which, in other textbooks, is usually tackled earlier on. When re-reading it, I realize why people quickly turn away from the topic of physics: it’s a lot of mathematical formulas which are supposed to reflect reality but, in practice, few – if any – of the mathematical concepts are actually being explained. Not in the first chapters of a textbook, not in its middle ones, and… Well… Nowhere, really. Why? Well… To be blunt: I think most physicists themselves don’t really understand what they’re talking about. In fact, as I have pointed out a couple of times already, Feynman himself admits so much:
“Atomic behavior appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to.”
So… Well… If you’d be in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, here you have it: if you don’t understand what physicists are trying to tell you, don’t worry about it, because they don’t really understand it themselves. 🙂
Take the example of a physical state, which is represented by a state vector, which we can combine and re-combine using the properties of an abstract Hilbert space. Frankly, I think the term ‘physical state’ is very misleading here, because it doesn’t actually describe an actual physical state. Why? Well… If we look at this so-called physical state from another angle, then we need to transform it using a complicated set of transformation matrices. You’ll say: that’s what we need to do when going from one reference frame to another in classical mechanics as well, isn’t it?
Well… No. In classical mechanics, we’ll describe the physics using geometric vectors in three dimensions and, therefore, the base of our reference frame doesn’t matter: because we’re using real vectors (such as the electric or magnetic field vectors E and B), our orientation vis-à-vis the object – the line of sight, so to speak – doesn’t matter.
In contrast, in quantum mechanics, it does: Schrödinger’s equation – and the wavefunction – has only two degrees of freedom, so to speak: its so-called real and its imaginary dimension. Worse, physicists refuse to give those two dimensions any geometric interpretation. Why? I don’t know. As I show in my previous posts, it would be easy enough, right? We know both dimensions must be perpendicular to each other, so we just need to decide if both of them are going to be perpendicular to our line of sight. That’s it. We’ve only got two possibilities here which – in my humble view – explain why the matter-wave is different from an electromagnetic wave.
I actually can’t quite believe the craziness when it comes to interpreting the wavefunction: we get everything we’d want to know about our particle through these operators (momentum, energy, position, and whatever else you’d need to know), but mainstream physicists still tell us that the wavefunction is, somehow, not representing anything real. It might be because of that weird 720° symmetry – which, as far as I am concerned, confirms that those state vectors are not the right approach: you can’t represent a complex, asymmetrical shape by a ‘flat’ mathematical object!
Huh? Yes. The wavefunction is a ‘flat’ concept: it has two dimensions only, unlike the real vectors physicists use to describe electromagnetic waves (which we may interpret as the wavefunction of the photon). Those have three dimensions, just like the mathematical space we project on events. Because the wavefunction is flat (think of a rotating disk), we have those cumbersome transformation matrices: each time we shift position vis-à-vis the object we’re looking at (das Ding an sich, as Kant would call it), we need to change our description of it. And our description of it – the wavefunction – is all we have, so that’s our reality. However, because that reality changes as per our line of sight, physicists keep saying the wavefunction (or das Ding an sich itself) is, somehow, not real.
Frankly, I do think physicists should take a basic philosophy course: you can’t describe what goes on in three-dimensional space if you’re going to use flat (two-dimensional) concepts, because the objects we’re trying to describe (e.g. non-symmetrical electron orbitals) aren’t flat. Let me quote one of Feynman’s famous lines on philosophers: “These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depth of the problem.” (Feynman’s Lectures, Vol. I, Chapter 16)
Now, I love Feynman’s Lectures but… Well… I’ve gone through them a couple of times now, so I do think I have an appreciation of the subtleties and depth of the problem now. And I tend to agree with some of the smarter philosophers: if you’re going to use ‘flat’ mathematical objects to describe three- or four-dimensional reality, then such approach will only get you where we are right now, and that’s a lot of mathematical mumbo-jumbo for the poor uninitiated. Consistent mumbo-jumbo, for sure, but mumbo-jumbo nevertheless. 🙂 So, yes, I do think we need to re-invent quantum math. 🙂 The description may look more complicated, but it would make more sense.
I mean… If physicists themselves have had continued discussions on the reality of the wavefunction for almost a hundred years now (Schrödinger published his equation in 1926), then… Well… Then the physicists have a problem. Not the philosophers. 🙂 As to how that new description might look, see my papers on viXra.org. I firmly believe it can be done. This is just a hobby of mine, but… Well… That’s where my attention will go over the coming years. 🙂 Perhaps quaternions are the answer but… Well… I don’t think so either – for reasons I’ll explain later. 🙂
Post scriptum: There are many nice videos on Dirac’s belt trick or, more generally, on 720° symmetries, but this links to one I particularly like. It clearly shows that the 720° symmetry requires, in effect, a special relation between the observer and the object that is being observed. It is, effectively, like there is a leather belt between them or, in this case, we have an arm between the glass and the person who is holding the glass. So it’s not like we are walking around the object (think of the glass of water) and making a full turn around it, so as to get back to where we were. No. We are turning it around by 360°! That’s a very different thing than just looking at it, walking around it, and then looking at it again. That explains the 720° symmetry: we need to turn it around twice to get it back to its original state. So… Well… The description is more about us and what we do with the object than about the object itself. That’s why I think the quantum-mechanical description is defective.
Preliminary note: This post may cause brain damage. 🙂 If you haven’t worked yourself through a good introduction to physics – including the math – you will probably not understand what this is about. So… Well… Sorry. 😦 But if you have… Then this should be very interesting. Let’s go. 🙂
If you know one or two things about quantum math – Schrödinger’s equation and all that – then you’ll agree the math is anything but straightforward. Personally, I find the most annoying thing about wavefunction math are those transformation matrices: every time we look at the same thing from a different direction, we need to transform the wavefunction using one or more rotation matrices – and that gets quite complicated!
Now, if you have read any of my posts on this or my other blog, then you know I firmly believe the wavefunction represents something real or… Well… Perhaps it’s just the next best thing to reality: we cannot know das Ding an sich, but the wavefunction gives us everything we would want to know about it (linear or angular momentum, energy, and whatever else we have an operator for). So what am I thinking of? Let me first quote Feynman’s summary interpretation of Schrödinger’s equation (Lectures, III-16-1):
“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”
Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. His analysis there is centered on the local conservation of energy, which makes me think Schrödinger’s equation might be an energy diffusion equation. I’ve written about this ad nauseam in the past, and so I’ll just refer you to one of my papers here for the details, and limit this post to the basics, which are as follows.
The wave equation (so that’s Schrödinger’s equation in its non-relativistic form, which is an approximation that is good enough) is written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t) − i·(V/ħ)·ψ(x, t)

The resemblance with the standard diffusion equation (shown below) is, effectively, very obvious:

∂φ(x, t)/∂t = D·∇²φ(x, t)

As Feynman notes, it’s just that imaginary coefficient that makes the behavior quite different. How exactly? Well… You know we get all of those complicated electron orbitals (i.e. the various wave functions that satisfy the equation) out of Schrödinger’s differential equation. We can think of these solutions as (complex) standing waves. They basically represent some equilibrium situation, and the main characteristic of each is their energy level. I won’t dwell on this because – as mentioned above – I assume you master the math. Now, you know that – if we would want to interpret these wavefunctions as something real (which is surely what I want to do!) – the real and imaginary component of a wavefunction will be perpendicular to each other. Let me copy the animation for the elementary wavefunction ψ(θ) = a·e^(−i·θ) = a·e^(−i·(E/ħ)·t) = a·cos[(E/ħ)·t] − i·a·sin[(E/ħ)·t] once more:
So… Well… That 90° angle makes me think of the similarity with the mathematical description of an electromagnetic wave. Let me quickly show you why. For a particle moving in free space – with no external force fields acting on it – there is no potential (V = 0) and, therefore, the Vψ term – which is just the equivalent of the sink or source term S in the diffusion equation – disappears. Therefore, Schrödinger’s equation reduces to:
∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)
Now, the key difference with the diffusion equation – let me write it for you once again: ∂φ(x, t)/∂t = D·∇²φ(x, t) – is that Schrödinger’s equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations:
- Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
- Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)
Huh? Yes. These equations are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙(1/2)∙(ħ/meff)∙∇²ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i² = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i²∙d = −d + i∙c. [Now that we’re getting a bit technical, let me note that the meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m.] 🙂 OK. Onwards! 🙂
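The split above is easy to check numerically. The sketch below is my own check (not from any textbook): it plugs a free-particle plane wave ψ = e^(i·(k·x − ω·t)), with the free-particle dispersion relation ω = ħ·k²/(2·meff), into both real-valued equations, using ħ = meff = 1 as an arbitrary choice of units.

```python
# Sanity check of the Re/Im split (my own sketch): plug a free-particle
# plane wave psi = exp(i*(k*x - omega*t)) into both real-valued equations.
# Units: hbar = m_eff = 1 (an arbitrary choice, just for the check).
import cmath

hbar = 1.0
m_eff = 1.0
k = 2.0
omega = hbar * k**2 / (2 * m_eff)   # free-particle dispersion relation

def psi(x, t):
    return cmath.exp(1j * (k * x - omega * t))

x, t = 0.7, 0.3
p = psi(x, t)
dpsi_dt = -1j * omega * p    # analytic time derivative of the plane wave
lap_psi = -(k**2) * p        # analytic Laplacian (second space derivative)

coef = 0.5 * hbar / m_eff
# Re(dpsi/dt) = -(1/2)*(hbar/m_eff)*Im(laplacian of psi)
assert abs(dpsi_dt.real - (-coef * lap_psi.imag)) < 1e-12
# Im(dpsi/dt) = +(1/2)*(hbar/m_eff)*Re(laplacian of psi)
assert abs(dpsi_dt.imag - (coef * lap_psi.real)) < 1e-12
print("Re/Im split of Schrödinger's equation verified")
```

The two assertions hold for any x and t, because the split is just the real-and-imaginary-parts bookkeeping described above.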
The equations above make me think of the equations for an electromagnetic wave in free space (no stationary charges or currents):
- ∂B/∂t = −∇×E
- ∂E/∂t = c²∇×B
Now, these equations – and, I must therefore assume, the other equations above as well – effectively describe a propagation mechanism in spacetime, as illustrated below:
You know how it works for the electromagnetic field: it’s the interplay between circulation and flux. Indeed, circulation around some axis of rotation creates a flux in a direction perpendicular to it, and that flux causes this, and then that, and it all goes round and round and round. 🙂 Something like that. 🙂 I will let you look up how it goes, exactly. The principle is clear enough. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle.
Now, we know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? I firmly believe they do. The obvious question then is the following: why wouldn’t we represent them as vectors, just like E and B? I mean… Representing them as vectors (I mean real vectors here – something with a magnitude and a direction in a real space – as opposed to state vectors from the Hilbert space) would show they are real, and there would be no need for cumbersome transformations when going from one representational base to another. In fact, that’s why vector notation was invented (sort of): we don’t need to worry about the coordinate frame. It’s much easier to write physical laws in vector notation because… Well… They’re the real thing, aren’t they? 🙂
What about dimensions? Well… I am not sure. However, because we are – arguably – talking about some pointlike charge moving around in those oscillating fields, I would suspect the dimension of the real and imaginary component of the wavefunction will be the same as that of the electric and magnetic field vectors E and B. We may want to recall these:
- E is measured in newton per coulomb (N/C).
- B is measured in newton per coulomb divided by m/s, so that’s (N/C)/(m/s).
The weird dimension of B is because of the weird force law for the magnetic force. It involves a vector cross product, as shown by Lorentz’ formula:
F = qE + q(v×B)
Of course, it is only one force (one and the same physical reality), as evidenced by the fact that we can write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). [Check it, because you may not have seen this expression before. Just take a piece of paper and think about the geometry of the situation.] Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90°, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90°. Hence, if we can agree on a suitable convention for the direction of rotation here, we may boldly write:
B = (1/c)∙ex×E = (1/c)∙i∙E
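This equivalence between the ex× operation and a multiplication by i is easy to verify with a few lines of code. In the sketch below (my own illustration, with made-up field values), I identify the transverse (y, z) components of E with the complex number Ey + i·Ez and check that both operations produce the same 90° rotation:

```python
# My own illustration: the e_x cross-product and multiplication by i both
# rotate the transverse field components by 90 degrees.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

ex = (1.0, 0.0, 0.0)
Ey, Ez = 3.0, 4.0                  # arbitrary transverse components
E = (0.0, Ey, Ez)                  # transverse field: no x-component

rotated = cross(ex, E)             # e_x × E = (0, -Ez, Ey)
as_complex = 1j * complex(Ey, Ez)  # i·(Ey + i·Ez) = -Ez + i·Ey

assert rotated == (0.0, -Ez, Ey)
assert (as_complex.real, as_complex.imag) == (-Ez, Ey)
print("e_x × E acts like multiplication by i on the transverse plane")
```

The (1/c) factor only rescales the result, so it does not affect the geometry of the rotation.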
This is, in fact, what triggered my geometric interpretation of Schrödinger’s equation about a year ago now. I have had little time to work on it, but I think I am on the right track. Of course, you should note that, for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously (as shown below). So their phase is the same.
In contrast, the phase of the real and imaginary component of the wavefunction is not the same, as shown below.
In fact, because of the Stern-Gerlach experiment, I am actually more thinking of a motion like this:
But that shouldn’t distract you. 🙂 The question here is the following: could we possibly think of a new formulation of Schrödinger’s equation – using vectors (again, real vectors – not these weird state vectors) rather than complex algebra?
I think we can, but then I wonder why the inventors of the wavefunction – Heisenberg, Born, Dirac, and Schrödinger himself, of course – never thought of that. 🙂
Hmm… I need to do some research here. 🙂
Post scriptum: You will, of course, wonder how and why the matter-wave would be different from the electromagnetic wave if my suggestion that the dimension of the wavefunction components is the same is correct. The answer is: the difference lies in the phase difference between the two components and, most probably, in the different orientation of the angular momentum. Do we have any other possibilities? 🙂
P.S. 2: I also published this post on my new blog: https://readingeinstein.blog/. However, I thought the followers of this blog should get it first. 🙂
Some other comment on an article on my other blog inspired me to structure some thoughts that are spread over various blog posts. What follows below is probably the first draft of an article or a paper I plan to write. Or, who knows, I might re-write my two introductory books on quantum physics and publish a new edition soon. 🙂
Physical dimensions and Uncertainty
The physical dimension of the quantum of action (h or ħ = h/2π) is force (expressed in newton) times distance (expressed in meter) times time (expressed in seconds): N·m·s. Now, you may think this N·m·s dimension is kinda hard to imagine. We can imagine its individual components, right? Force, distance and time. We know what they are. But the product of all three? What is it, really?
It shouldn’t be all that hard to imagine what it might be, right? The N·m·s unit is also the unit in which angular momentum is expressed – and you can sort of imagine what that is, right? Think of a spinning top, or a gyroscope. We may also think of the following:
- [h] = N·m·s = (N·m)·s = [E]·[t]
- [h] = N·m·s = (N·s)·m = [p]·[x]
Hence, the physical dimension of action is that of energy (E) multiplied by time (t) or, alternatively, that of momentum (p) times distance (x). To be precise, the second dimensional equation should be written as [h] = [p]·[x], because both the momentum and the distance traveled will be associated with some direction. It’s a moot point for the discussion at the moment, though. Let’s think about the first equation first: [h] = [E]·[t]. What does it mean?
Energy… Hmm… In real life, we are usually not interested in the energy of a system as such, but in the energy it can deliver, or absorb, per second. This is referred to as the power of a system, and it’s expressed in J/s, or watt. Power is also defined as the (time) rate at which work is done. Hmm… But so here we’re multiplying energy and time. So what’s that? After Hiroshima and Nagasaki, we can sort of imagine the energy of an atomic bomb. We can also sort of imagine the power that’s being released by the Sun in light and other forms of radiation, which is about 385×10²⁴ joule per second. But energy times time? What’s that?
I am not sure. If we think of the Sun as a huge reservoir of energy, then the physical dimension of action is just like having that reservoir of energy guaranteed for some time, regardless of how fast or how slow we use it. So, in short, it’s just like the Sun – or the Earth, or the Moon, or whatever object – just being there, for some definite amount of time. So, yes: some definite amount of mass or energy (E) for some definite amount of time (t).
Let’s bring the mass-energy equivalence formula in here: E = mc². Hence, the physical dimension of action can also be written as [h] = [E]·[t] = [m·c²]·[t] = (kg·m²/s²)·s = kg·m²/s. What does that say? Not all that much – for the time being, at least. We can get this [h] = kg·m²/s through some other substitution as well. A force of one newton will give a mass of 1 kg an acceleration of 1 m/s per second. Therefore, 1 N = 1 kg·m/s² and, hence, the physical dimension of h, or the unit of angular momentum, may also be written as 1 N·m·s = 1 (kg·m/s²)·m·s = 1 kg·m²/s, i.e. the product of mass, velocity and distance.
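For those who like to double-check such dimensional equations, here is a toy sketch (my own, of course): it represents each dimension as a (kg, m, s) exponent tuple and verifies that N·m·s, [E]·[t] and [p]·[x] all reduce to the same dimension, kg·m²/s.

```python
# Toy dimensional algebra (my own illustration): each dimension is a
# (kg, m, s) exponent tuple; multiplying quantities adds the exponents.
def mul(*dims):
    return tuple(sum(exponents) for exponents in zip(*dims))

N  = (1, 1, -2)      # newton = kg·m/s²
m_ = (0, 1,  0)      # meter
s  = (0, 0,  1)      # second

E = mul(N, m_)       # energy: joule = N·m
p = mul(N, s)        # momentum: N·s
h = mul(N, m_, s)    # action: N·m·s

assert h == mul(E, s) == mul(p, m_) == (1, 2, -1)   # all equal kg·m²/s
print("[h] = [E]·[t] = [p]·[x] = kg·m²/s")
```
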
Hmm… What can we do with that? Nothing much for the moment: our first reading of it is just that it reminds us of the definition of angular momentum – some mass with some velocity rotating around an axis. What about the distance? Oh… The distance here is just the distance from the axis, right? Right. But… Well… It’s like having some amount of linear momentum available over some distance – or in some space, right? That’s sufficiently significant as an interpretation for the moment, I’d think…
This makes one think about what units would be fundamental – and what units we’d consider as being derived. Formally, the newton is a derived unit in the metric system, as opposed to the units of mass, length and time (kg, m, s). Nevertheless, I personally like to think of force as being fundamental: a force is what causes an object to deviate from its straight trajectory in spacetime. Hence, we may want to think of the quantum of action as representing three fundamental physical dimensions: (1) force, (2) time and (3) distance – or space. We may then look at energy and (linear) momentum as physical quantities combining (1) force and distance and (2) force and time respectively.
Let me write this out:
- Force times length (think of a force that is acting on some object over some distance) is energy: 1 joule (J) = 1 newton·meter (N·m). Hence, we may think of the concept of energy as a projection of action in space only: we make abstraction of time. The physical dimension of the quantum of action should then be written as [h] = [E]·[t]. [Note the square brackets tell us we are looking at a dimensional equation only, so [t] is just the physical dimension of the time variable. It’s a bit confusing because I also use square brackets as parentheses.]
- Conversely, the magnitude of linear momentum (p = m·v) is expressed in newton·seconds: 1 kg·m/s = 1 (kg·m/s²)·s = 1 N·s. Hence, we may think of (linear) momentum as a projection of action in time only: we make abstraction of its spatial dimension. Think of a force that is acting on some object during some time. The physical dimension of the quantum of action should then be written as [h] = [p]·[x].
Of course, a force that is acting on some object during some time, will usually also act on the same object over some distance but… Well… Just try, for once, to make abstraction of one of the two dimensions here: time or distance.
It is a difficult thing to do because, when everything is said and done, we don’t live in space or in time alone, but in spacetime and, hence, such abstractions are not easy. [Of course, now you’ll say that it’s easy to think of something that moves in time only: an object that is standing still does just that – but then we know movement is relative, so there is no such thing as an object that is standing still in space in an absolute sense: hence, objects never stand still in spacetime.] In any case, we should try such abstractions, if only because the principle of least action is so essential and deep in physics:
- In classical physics, the path of some object in a force field will minimize the total action (which is usually written as S) along that path.
- In quantum mechanics, the same action integral will give us various values S – each corresponding to a particular path – and each path (and, therefore, each value of S, really) will be associated with a probability amplitude that will be proportional to some constant times e^(−i·θ) = e^(i·S/ħ). Because ħ is so tiny, even a small change in S will give a completely different phase angle θ. Therefore, most amplitudes will cancel each other out as we take the sum of the amplitudes over all possible paths: only the paths that nearly give the same phase matter. In practice, these are the paths that are associated with a variation in S of an order of magnitude that is equal to ħ.
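The cancellation argument in the second bullet can be illustrated with a toy computation (a heavily simplified sketch of my own, with ħ set to 1 and made-up families of “paths”): amplitudes whose action varies quickly across neighbouring paths largely cancel, while amplitudes near a stationary value of S add up.

```python
# Heavily simplified stationary-phase illustration (my own, hbar = 1):
# sum exp(i*S) over a family of "paths". Slowly varying S (near a
# stationary point) reinforces; rapidly varying S cancels out.
import cmath

def amplitude_sum(actions):
    return sum(cmath.exp(1j * S) for S in actions)

near_stationary = [10.0 + 0.001 * n**2 for n in range(100)]  # S barely changes
rapidly_varying = [10.0 + 3.0 * n for n in range(100)]       # S jumps by 3 rad

# The near-stationary family dominates the sum by a wide margin.
assert abs(amplitude_sum(near_stationary)) > 10 * abs(amplitude_sum(rapidly_varying))
print("near-stationary paths dominate the sum over paths")
```
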
The paragraph above summarizes, in essence, Feynman’s path integral formulation of quantum mechanics. We may, therefore, think of the quantum of action expressing itself (1) in time only, (2) in space only, or – much more likely – (3) expressing itself in both dimensions at the same time. Hence, if the quantum of action gives us the order of magnitude of the uncertainty – think of writing something like S ± ħ – we may re-write our dimensional [ħ] = [E]·[t] and [ħ] = [p]·[x] equations as the uncertainty equations:
- ΔE·Δt = ħ
- Δp·Δx = ħ
You should note here that it is best to think of the uncertainty relations as a pair of equations, if only because you should also think of the concept of energy and momentum as representing different aspects of the same reality, as evidenced by the (relativistic) energy-momentum relation (E² = p²c² + m₀²c⁴). Also, as illustrated below, the actual path – or, to be more precise, what we might associate with the concept of the actual path – is likely to be some mix of Δx and Δt. If Δt is very small, then Δx will be very large. In order to move over such distance, our particle will require a larger energy, so ΔE will be large. Likewise, if Δt is very large, then Δx will be very small and, therefore, ΔE will be very small. You can also reason in terms of Δx, and talk about momentum rather than energy. You will arrive at the same conclusions: the ΔE·Δt = ħ and Δp·Δx = ħ relations represent two aspects of the same reality – or, at the very least, what we might think of as reality.
Also think of the following: if ΔE·Δt = ħ and Δp·Δx = ħ, then ΔE·Δt = Δp·Δx and, therefore, ΔE/Δp must be equal to Δx/Δt. Hence, the ratio of the uncertainty about x (the distance) and the uncertainty about t (the time) equals the ratio of the uncertainty about E (the energy) and the uncertainty about p (the momentum).
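The arithmetic is trivial, but here it is as a quick numeric sketch anyway (the Δt and Δx values are arbitrary, just for illustration):

```python
# Quick numeric sketch: if both uncertainty products equal hbar, the
# ratios match. The dt and dx values below are arbitrary choices.
hbar = 1.054571817e-34   # J·s (reduced Planck constant)
dt = 1e-15               # s, arbitrary
dx = 1e-10               # m, arbitrary

dE = hbar / dt           # implied energy uncertainty
dp = hbar / dx           # implied momentum uncertainty

ratio_Ep = dE / dp       # has the dimension of a velocity
ratio_xt = dx / dt
assert abs(ratio_Ep - ratio_xt) / ratio_xt < 1e-12
print("ΔE/Δp = Δx/Δt =", ratio_xt, "m/s")
```

Note that both ratios have the physical dimension of a velocity, which fits the idea of the ‘actual path’ being some mix of Δx and Δt.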
Of course, you will note that the actual uncertainty relations have a factor 1/2 in them. This may be explained by thinking of both negative as well as positive variations in space and in time.
We will obviously want to do some more thinking about those physical dimensions. The idea of a force implies the idea of some object – of some mass on which the force is acting. Hence, let’s think about the concept of mass now. But… Well… Mass and energy are supposed to be equivalent, right? So let’s look at the concept of energy too.
Action, energy and mass
What is energy, really? In real life, we are usually not interested in the energy of a system as such, but in the energy it can deliver, or absorb, per second. This is referred to as the power of a system, and it’s expressed in J/s. However, in physics, we always talk energy – not power – so… Well… What is the energy of a system?
According to de Broglie and Einstein – and so many other eminent physicists, of course – we should not only think of the kinetic energy of its parts, but also of their potential energy, and their rest energy, and – for an atomic system – we may add some internal energy, which may be binding energy, or excitation energy (think of a hydrogen atom in an excited state, for example). A lot of stuff. 🙂 But, obviously, Einstein’s mass-energy equivalence formula comes to mind here, and summarizes it all:
E = m·c²
The m in this formula refers to mass – not to meter, obviously. Stupid remark, of course… But… Well… What is energy, really? What is mass, really? What’s that equivalence between mass and energy, really?
I don’t have the definite answer to that question (otherwise I’d be famous), but… Well… I do think physicists and mathematicians should invest more in exploring some basic intuitions here. As I explained in several posts, it is very tempting to think of energy as some kind of two-dimensional oscillation of mass. A force over some distance will cause a mass to accelerate. This is reflected in the dimensional analysis:
[E] = [m]·[c²] = 1 kg·m²/s² = 1 (kg·m/s²)·m = 1 N·m

The kg and m/s² factors make this abundantly clear: m/s² is the physical dimension of acceleration: (the change in) velocity per time unit.
Other formulas now come to mind, such as the Planck-Einstein relation: E = h·f = ω·ħ. We could also write: E = h/T. Needless to say, T = 1/f is the period of the oscillation. So we could say, for example, that the energy of some particle times the period of the oscillation gives us Planck’s constant again. What does that mean? Perhaps it’s easier to think of it the other way around: E/f = h = 6.626070040(81)×10⁻³⁴ J·s. Now, f is the number of oscillations per second. Let’s write it as f = n/s, so we get:

E/f = E/(n/s) = E·s/n = 6.626070040(81)×10⁻³⁴ J·s ⇔ E/n = 6.626070040(81)×10⁻³⁴ J

What an amazing result! Our wavicle – be it a photon or a matter-particle – will always pack 6.626070040(81)×10⁻³⁴ joule in one oscillation, so that’s the numerical value of Planck’s constant which, of course, depends on our fundamental units (i.e. kg, meter, second, etcetera in the SI system).
Of course, the obvious question is: what’s one oscillation? If it’s a wave packet, the oscillations may not have the same amplitude, and we may also not be able to define an exact period. In fact, we should expect the amplitude and duration of each oscillation to be slightly different, shouldn’t we? And then…
Well… What’s an oscillation? We’re used to counting them: n oscillations per second, so that’s per time unit. How many do we have in total? We wrote about that in our posts on the shape and size of a photon. We know photons are emitted by atomic oscillators – or, to put it simply, just atoms going from one energy level to another. Feynman calculated the Q of these atomic oscillators: it’s of the order of 10⁸ (see his Lectures, I-33-3: it’s a wonderfully simple exercise, and one that really shows his greatness as a physics teacher), so… Well… This wave train will last about 10⁻⁸ seconds (that’s the time it takes for the radiation to die out by a factor 1/e). To give a somewhat more precise example: for sodium light, which has a frequency of 500 THz (500×10¹² oscillations per second) and a wavelength of 600 nm (600×10⁻⁹ meter), the radiation will last about 3.2×10⁻⁸ seconds. [To be precise, that’s the time it takes for the radiation’s energy to die out by a factor 1/e (the so-called decay time τ), so the wavetrain will actually last longer, but the amplitude becomes quite small after that time.] So… Well… That’s a very short time but… Still, taking into account the rather spectacular frequency (500 THz) of sodium light, that makes for some 16 million oscillations and, taking into account the rather spectacular speed of light (3×10⁸ m/s), that makes for a wave train with a length of, roughly, 9.6 meter. Huh? 9.6 meter!? But a photon is supposed to be pointlike, isn’t it? It has no length, does it?
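The arithmetic above is easily re-done (same input values as in the text: f = 500 THz, τ ≈ 3.2×10⁻⁸ s, c = 3×10⁸ m/s):

```python
# Re-doing the sodium-light arithmetic with the values from the text:
# f = 500 THz, decay time tau ≈ 3.2×10⁻⁸ s, c = 3×10⁸ m/s.
f = 500e12      # oscillations per second
tau = 3.2e-8    # s, time for the energy to die out by a factor 1/e
c = 3e8         # m/s

oscillations = f * tau   # cycles packed into the wave train
length = c * tau         # spatial extent of the wave train

assert round(oscillations) == 16_000_000   # ~16 million oscillations
assert abs(length - 9.6) < 1e-9            # ~9.6 meter
print(f"{oscillations:.3g} oscillations in a {length:.3g} m wave train")
```
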
That’s where relativity helps us out: as I wrote in one of my posts, relativistic length contraction may explain the apparent paradox. Using the reference frame of the photon – so if we’d be traveling at speed c, ‘riding’ with the photon, so to say, as it’s being emitted – then we’d ‘see’ the electromagnetic transient as it’s being radiated into space.
However, while we can associate some mass with the energy of the photon, none of what I wrote above explains what the (rest) mass of a matter-particle could possibly be. There is no real answer to that, I guess. You’ll think of the Higgs field now but… Then… Well… The Higgs field is a scalar field. Very simple: some number that’s associated with some position in spacetime. That doesn’t explain very much, does it? 😦 When everything is said and done, the scientists who – only in 2013 – got the Nobel Prize for their theory on the Higgs mechanism, simply tell us mass is some number. That’s something we knew already, right? 🙂
The reality of the wavefunction
The wavefunction is, obviously, a mathematical construct: a description of reality using a very specific language. What language? Mathematics, of course! Math may not be universal (aliens might not be able to decipher our mathematical models) but it’s pretty good as a global tool of communication, at least.
The real question is: is the description accurate? Does it match reality and, if it does, how good is the match? For example, the wavefunction for an electron in a hydrogen atom looks as follows:
ψ(r, t) = e^(−i·(E/ħ)·t)·f(r)
As I explained in previous posts (see, for example, my recent post on reality and perception), the f(r) function basically provides some envelope for the two-dimensional e^(−i·θ) = e^(−i·(E/ħ)·t) = cosθ − i·sinθ oscillation, with r = (x, y, z), θ = (E/ħ)·t = ω·t and ω = E/ħ. So it presumes the duration of each oscillation is some constant. Why? Well… Look at the formula: this thing has a constant frequency in time. It’s only the amplitude that is varying as a function of the r = (x, y, z) coordinates. 🙂 So… Well… If each oscillation is to always pack 6.626070040(81)×10⁻³⁴ joule, but the amplitude of the oscillation varies from point to point, then… Well… We’ve got a problem. The wavefunction above is likely to be an approximation of reality only. 🙂 The associated energy is the same, but… Well… Reality is probably not the nice geometrical shape we associate with those wavefunctions.
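Note, incidentally, that the separable form above implies the probability density does not change in time: the time factor is a pure phase, so |ψ(r, t)|² = |f(r)|². Here is a quick sketch of that fact, with ħ = 1, an arbitrary energy E and a made-up envelope f(r) = e^(−r), purely for illustration:

```python
# Because the time factor is a pure phase, |psi(r, t)|² is constant in time.
# hbar = 1, E = 2.5 and the envelope f(r) = exp(-r) are all assumptions
# made for this sketch only.
import cmath, math

E = 2.5   # arbitrary energy level (hbar = 1)

def psi(r, t):
    return cmath.exp(-1j * E * t) * math.exp(-r)

r = 0.8
densities = [abs(psi(r, t))**2 for t in (0.0, 0.4, 1.7)]
assert all(abs(d - densities[0]) < 1e-12 for d in densities)
print("probability density is time-independent:", densities[0])
```
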
In addition, we should think of the Uncertainty Principle: there must be some uncertainty in the energy of the photons when our hydrogen atom makes a transition from one energy level to another. But then… Well… If our photon packs something like 16 million oscillations, and the order of magnitude of the uncertainty is only of the order of h (or ħ = h/2π) which, as mentioned above, is the (average) energy of one oscillation only, then we don’t have much of a problem here, do we? 🙂
Post scriptum: In previous posts, we offered some analogies – or metaphors – to a two-dimensional oscillation (remember the V-2 engine?). Perhaps it’s all relatively simple. If we have some tiny little ball of mass – and its center of mass has to stay where it is – then any rotation – around any axis – will be some combination of a rotation around our x- and z-axis – as shown below. Two axes only. So we may want to think of a two-dimensional oscillation as an oscillation of the polar and azimuthal angle. 🙂