# Freewheeling once more…

You remember the elementary wavefunction Ψ(x, t) = Ψ(θ), with θ = ω·t−k∙x = (E/ħ)·t − (p/ħ)∙x = (E·t−p∙x)/ħ. Now, we can re-scale θ and define a new argument, which we’ll write as:

φ = ħ·θ = E·t−p∙x

The Ψ(θ) function can now be written as:

Ψ(x, t) = Ψ(θ) = e^(i·θ) = e^(i·φ/ħ) = [e^(i·φ)]^(1/ħ) = Φ(φ), with φ = E·t−p∙x

This doesn’t change the fundamentals: we’re just re-scaling the argument here. Indeed, θ = φ/ħ amounts to measuring E and p in units of ħ.
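As a quick sanity check of that re-scaling, we can verify numerically that Ψ(θ) and Φ(φ) are one and the same number. The ħ, E and p values below are toy values (not the real physical constants), and we need to mind the principal branch of the complex power:

```python
import cmath

# Toy (non-physical) values, just to check the algebra: hbar here is
# NOT the real Planck constant, and E, p, t, x are arbitrary numbers.
hbar = 0.5
E, p, t, x = 2.0, 1.0, 0.3, 0.2

phi = E * t - p * x          # phi = E·t − p·x
theta = phi / hbar           # theta = (E·t − p·x)/ħ

psi = cmath.exp(1j * theta)              # Ψ(θ) = e^(i·θ)
Phi = cmath.exp(1j * phi) ** (1 / hbar)  # Φ(φ) = [e^(i·φ)]^(1/ħ)

# The two forms agree. Note that the complex power uses the principal
# branch, so phi must lie in (−π, π] for the identity to hold exactly.
assert abs(psi - Phi) < 1e-12
```

The principal-branch caveat is the price of writing the exponential as a power: for arguments outside (−π, π], the two expressions can differ by a root of unity.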

You’ll wonder: can we do that? We’re talking physics here, so our variables represent something real. Not everything we can do in math should be done in physics, right? So what does it mean? We need to look at the dimensions of our variables. Does the re-scaling affect our time and distance units, i.e. the second and the meter? Well… I’d say it’s OK.

Energy is expressed in joule: 1 J = 1 N·m. [In SI base units, we write: J = N·m = (kg·m/s²)·m = kg·(m/s)².] So if we divide it by ħ, whose dimension is joule-second (J·s), we get some value expressed per second, i.e. a (temporal) frequency. That’s what we want, as we’re multiplying it with t in the argument of our wavefunction!

Momentum is expressed in newton-second (N·s). Now, 1 J = 1 N·m, so 1 N = 1 J/m. Hence, if we divide the momentum value by ħ, we get some value expressed per meter: (N·s)/(J·s) = N/J = N/(N·m) = 1/m. So we get a spatial frequency here. That’s what we want, as we’re multiplying it with x!

So the answer is yes: we can re-scale energy and momentum and we get a temporal and spatial frequency respectively, which we can multiply with t and x respectively: we do not need to change our time and distance units when re-scaling E and p by dividing by ħ!
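To make the dimensional argument tangible, here is a sketch using the CODATA values for ħ, c and the electron mass (the velocity v is an arbitrary choice): E/ħ comes out as a temporal frequency, p/ħ as a spatial frequency, and their ratio equals c²/v, as the E = m·c² and p = m·v formulas imply.

```python
# Dimensional sanity check with real constants: E/ħ is a value per
# second (temporal frequency), p/ħ a value per meter (spatial frequency).
hbar = 1.054571817e-34   # J·s (reduced Planck constant)
c = 299792458.0          # m/s
m_e = 9.1093837015e-31   # kg (electron rest mass)
v = 0.5 * c              # arbitrary velocity, chosen for illustration

E = m_e * c**2           # joule: dividing by hbar gives 1/s
p = m_e * v              # N·s: dividing by hbar gives 1/m

omega = E / hbar         # temporal (angular) frequency, rad/s
k = p / hbar             # spatial (angular) frequency, rad/m

# For the electron, omega is of the order of 7.76e20 rad/s.
assert 7.7e20 < omega < 7.9e20
# The ratio omega/k equals E/p = c²/v, as E = m·c² and p = m·v imply.
assert abs(omega / k - c**2 / v) < 1e-6 * (c**2 / v)
```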

The next question is: if we express energy and momentum as temporal and spatial frequencies, do our E = m·c² and p = m·v formulas still apply? They should: both c and v are expressed in meter per second (m/s) and, as mentioned above, the re-scaling does not affect our time and distance units. Hence, the energy-mass equivalence relation, and the definition of p (p = m·v), imply that we can re-write the argument (φ) of our ‘new’ wavefunction – i.e. Φ(φ) – as:

φ = E·t−p∙x = m·c²∙t − m∙v·x = m·c²·[t – (v/c²)∙x] = m·c²·[t – (v/c)∙(x/c)]
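A quick numerical check of that re-arrangement, using a hypothetical mass and arbitrary t and x values:

```python
# Check that E·t − p·x = m·c²·[t − (v/c)·(x/c)] for arbitrary numbers.
c = 299792458.0          # m/s
m = 2.5e-27              # hypothetical mass, kg
v = 0.6 * c              # arbitrary velocity
t, x = 1.2e-6, 150.0     # arbitrary time (s) and position (m)

E = m * c**2             # energy-mass equivalence
p = m * v                # definition of momentum

phi_1 = E * t - p * x
phi_2 = m * c**2 * (t - (v / c) * (x / c))

# The two expressions are algebraically identical.
assert abs(phi_1 - phi_2) < 1e-9 * abs(phi_1)
```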

In effect, when re-scaling our energy and momentum values, we’ve also re-scaled our unit of inertia, i.e. the unit in which we measure the mass m, which is directly related to both energy as well as momentum. To be precise, from a math point of view, m is nothing but a proportionality constant in both the E = m·c² and p = m·v formulas.

The next step is to fiddle with the time and distance units. If we

1. measure x and t in equivalent units (so c = 1);
2. denote v/c by β; and
3. re-use the x symbol to denote x/c (that’s just to simplify by saving symbols);

we get:

φ = m·(t–β∙x)

This argument is the product of two factors: (1) m and (2) t–β∙x.

1. The first factor – i.e. the mass m – is an inherent property of the particle that we’re looking at: it measures its inertia, i.e. the key variable in any dynamical model (i.e. any model – classical or quantum-mechanical – representing the motion of the particle).
2. The second factor – i.e. t–β∙x – reminds one of the argument of the wavefunction that’s used in classical mechanics, i.e. x–vt, with v the velocity of the wave. Of course, we should note two major differences between the t–β∙x and x–vt expressions:
1. β is a relative velocity (i.e. a ratio between 0 and 1), while v is an absolute velocity (i.e. a number between 0 and c = 299,792,458 m/s).
2. The t–β∙x expression switches the roles of the time and distance variables as compared to the x–vt expression.

Both differences are important, but let’s focus on the second one. From a math point of view, the t–β∙x and x–vt expressions are equivalent. However, time is time, and distance is distance—in physics, that is. So what can we conclude here? To answer that question, let’s re-analyze the x–vt expression. Remember its origin: if we have some wave function F(x–vt), and we add some time Δt to its argument – so we’re looking at F[x−v(t+Δt)] now, instead of F(x−vt) – then we can restore it to its former value by also adding some distance Δx = v∙Δt to the argument: indeed, if we do so, we get F[x+Δx−v(t+Δt)] = F(x+vΔt–vt−vΔt) = F(x–vt). Of course, we can do the same analysis the other way around, so we add some Δx and then… Well… You get the idea.

Can we do that for the F(t–β∙x) expression too? Sure. If we add some Δt to its argument, then we can restore it to its former value by also adding some distance Δx = Δt/β. Just check it: F[(t+Δt)–β(x+Δx)] = F(t+Δt–βx−βΔx) = F(t+Δt–βx−βΔt/β) = F(t–β∙x).
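Both restoration arguments are easy to check numerically. The waveform F below is just an arbitrary (Gaussian) example:

```python
import math

def F(u):
    # Any waveform will do; a Gaussian pulse serves as an example.
    return math.exp(-u**2)

v, beta = 3.0, 0.6       # arbitrary wave velocity and relative velocity
x, t, dt = 1.0, 0.5, 0.01

# Classical argument x − v·t: adding Δt is undone by adding Δx = v·Δt.
dx = v * dt
assert abs(F((x + dx) - v * (t + dt)) - F(x - v * t)) < 1e-12

# New argument t − β·x: adding Δt is undone by adding Δx = Δt/β.
dx = dt / beta
assert abs(F((t + dt) - beta * (x + dx)) - F(t - beta * x)) < 1e-12
```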

So the mathematical equivalence between the t–β∙x and x–vt expressions is surely meaningful. The F(x–vt) function uniquely determines the waveform and, as part of that determination (or definition, if you want), it also defines its velocity v. Likewise, we can say that the Φ(φ) = Φ[m·(t–β∙x)] function defines the (relative) velocity (β) of the particle that we’re looking at—quantum-mechanically, that is.

You’ll say: we’ve got two variables here: m and β. Well… Yes and no. We can look at m as an independent variable here. In fact, if you want, we could define yet another variable – χ = φ/m = t–β∙x – and, hence, yet another wavefunction here:

Ψ(θ) = e^(i·θ) = e^(i·φ/ħ) = Φ(φ) = Χ(χ) = e^(i·m·χ/ħ) = [e^(i·χ)]^(m/ħ)
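We can again check the chain numerically with toy values (the ħ and m below are arbitrary numbers, not physical constants), minding the principal branch of the complex power once more:

```python
import cmath

# Toy (non-physical) values: hbar and m are arbitrary numbers here.
hbar, m = 0.5, 1.5
beta, t, x = 0.6, 0.5, 0.25

chi = t - beta * x           # χ = t − β·x
phi = m * chi                # φ = m·(t − β·x)
theta = phi / hbar           # θ = φ/ħ

Psi = cmath.exp(1j * theta)                 # Ψ(θ) = e^(i·θ)
Chi = cmath.exp(1j * chi) ** (m / hbar)     # [e^(i·χ)]^(m/ħ), principal branch

# Same number, as long as chi lies in (−π, π].
assert abs(Psi - Chi) < 1e-12
```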

Does that make sense? Maybe. Think of it: the spatial dimension of the wave pulse F(x–vt) – if you don’t know what I am talking about: just think of its ‘position’ – is defined by its velocity v = x/t, which – from a math point of view – is equivalent to stating: x – v∙t = 0. Likewise, if we look at our wavefunction as some pulse in space, then its spatial dimension would also be defined by its (relative) velocity, which corresponds to the classical (relative) velocity of the particle we’re looking at. So… Well… As I said, I’ll let you think of all this.

Post Scriptum:

1. You may wonder what that m/ħ exponent in that Χ(χ) = [e^(i·χ)]^(m/ħ) = [e^(i·(t–β∙x))]^(m/ħ) function actually stands for. Well… If we measure time and distance in equivalent units (so c = 1 and, therefore, E = m), then m/ħ is just the energy measured in units of ħ: it’s our old energy value E, expressed in joule, divided by ħ. So… Well… I don’t think we can say much more about it.
2. Another thing you may want to think about is the relativistic transformation of the wavefunction. You know that we should correct Newton’s Law of Motion for velocities approaching c. We do so by integrating the Lorentz factor. In light of the fact that we’re using the relative velocity (β) in our wave function, do you think we still need to apply such corrections for the wavefunction? What’s your guess? 🙂

# The Hamiltonian revisited

I want to come back to something I mentioned in a previous post: when looking at that formula for those Uij amplitudes—which I’ll jot down once more:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt ⇔ Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt

—I noted that it resembles the general y(t + Δt) = y(t) + Δy = y(t) + (dy/dt)·Δt formula. So we can look at our Kij(t) function as being equal to the time derivative of the Uij(t + Δt, t) function. I want to re-visit that here, as it triggers a whole range of questions, which may or may not help us understand quantum math more intuitively. Let’s quickly sum up what we’ve learned so far: it’s basically all about quantum-mechanical stuff that does not move in space. Hence, the x in our wavefunction ψ(x, t) is some fixed point in space and, therefore, our elementary wavefunction—which we wrote as:

ψ(x, t) = a·e^(i·θ) = a·e^(i·(ω·t − k∙x)) = a·e^(i·[(E/ħ)·t − (p/ħ)∙x])

—reduces to ψ(t) = a·e^(i·ω·t) = a·e^(i·(E/ħ)·t).
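A quick check of what such a wavefunction implies: the associated probability |ψ(t)|² = a² does not vary in time, as only the phase turns around the circle. The a, E and ħ values below are toy values:

```python
import cmath

# A state that does not move in space: psi(t) = a·e^(i·(E/ħ)·t).
# Toy values for a, E and hbar (not the real constants).
a, E, hbar = 0.8, 2.0, 0.5

# Sample |psi|² at a few arbitrary times.
probs = [abs(a * cmath.exp(1j * (E / hbar) * t))**2 for t in (0.0, 0.3, 1.7)]

# The probability |psi|² = a² is constant: only the phase changes,
# which is why such a state is called 'stationary'.
assert all(abs(P - a**2) < 1e-12 for P in probs)
```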

Contrary to what you might think, we’re not equating x with zero here. No. It’s the p = m·v factor that becomes zero, because our reference frame is that of the system that we’re looking at, so its velocity is zero: it doesn’t move in our reference frame. That immediately answers an obvious question: does our wavefunction look any different when choosing another reference frame? The answer is, obviously: yes! It surely matters whether or not the system moves, and it also matters how fast it moves, because that changes the energy and momentum values from E and p to some E′ and p′. However, we’ll not consider such complications here: that’s the realm of relativistic quantum mechanics. Let’s start with the simplest of situations.

#### A simple two-state system

One of the simplest examples of a quantum-mechanical system that does not move in space is the textbook example of the ammonia molecule. The picture was as simple as the one below: an ammonia molecule consists of one nitrogen atom and three hydrogen atoms, and the nitrogen atom can be ‘up’ or ‘down’ with regard to the motion of the NH3 molecule around its axis of symmetry, as shown below. It’s important to note that this ‘up’ or ‘down’ direction is, once again, defined with respect to the reference frame of the system itself.

The motion of the molecule around its axis of symmetry is referred to as its spin—a term that’s used in a variety of contexts and, therefore, is annoyingly ambiguous. When we use the term ‘spin’ (up or down) to describe an electron state, for example, we’d associate it with the direction of its magnetic moment. Such a magnetic moment arises from the fact that, for all practical purposes, we can think of an electron as a spinning electric charge. Now, while our ammonia molecule is electrically neutral as a whole, the two states are actually associated with opposite electric dipole moments, as illustrated below. Hence, when we apply an electric field (denoted as ε), the two states are effectively associated with different energy levels, which we wrote as E0 ± εμ.

But we’re getting ahead of ourselves here. Let’s revert to the system in free space, i.e. without an electromagnetic force field—or, what amounts to saying the same, without potential. Now, the ammonia molecule is a quantum-mechanical system, and so there is some amplitude for the nitrogen atom to tunnel through the plane of the hydrogens. I told you before that this is really the key to understanding quantum mechanics: there is an energy barrier there and, classically, the nitrogen atom should not be able to sneak across. But it does. It’s like it can borrow some energy – which we denote by A – to penetrate the energy barrier.

In quantum mechanics, the dynamics of this system are modeled using a set of two differential equations. These differential equations are really the quantum-mechanical equivalent of Newton’s classical Law of Motion (I am referring to the F = m·(dv/dt) = m·a equation here), so I’ll have to explain them—which is not as easy as explaining Newton’s Law, because we’re talking complex-valued functions, but… Well… Let me first insert the solution of that set of differential equations.

This graph shows how the probability of the nitrogen atom (or the ammonia molecule itself) being in state 1 (i.e. ‘up’) or, else, in state 2 (i.e. ‘down’) varies sinusoidally in time. Let me also give you the equations for the amplitudes to be in state 1 or 2 respectively:

1. C1(t) = 〈 1 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) + (1/2)·e^(−(i/ħ)·(E0 + A)·t) = e^(−(i/ħ)·E0·t)·cos[(A/ħ)·t]
2. C2(t) = 〈 2 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) − (1/2)·e^(−(i/ħ)·(E0 + A)·t) = i·e^(−(i/ħ)·E0·t)·sin[(A/ħ)·t]

So the P1(t) and P2(t) probabilities above are just the absolute square of these C1(t) and C2(t) functions. To help you understand what’s going on here, let me quickly insert the following technical remarks:

• In case you wonder how we go from those exponentials to a simple sine and cosine factor, remember that the sum of a complex exponential and its conjugate, i.e. e^(iθ) + e^(−iθ), reduces to 2·cosθ, while the difference e^(iθ) − e^(−iθ) reduces to 2·i·sinθ.
• As for how to take the absolute square… Well… I shouldn’t be explaining that here, but you should be able to work that out remembering that (i) |a·b·c|² = |a|²·|b|²·|c|²; (ii) |e^(iθ)|² = |e^(−iθ)|² = 1² = 1 (for any value of θ); and (iii) |i|² = 1.
• As for the periodicity of both probability functions, note that the period of the squared sine and cosine functions is equal to π. Hence, the argument of our sine and cosine function will be equal to 0, π, 2π, 3π etcetera if (A/ħ)·t = 0, π, 2π, 3π etcetera, i.e. if t = 0·ħ/A, π·ħ/A, 2π·ħ/A, 3π·ħ/A etcetera. So that’s why we measure time in units of ħ/A above.
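The remarks above are easy to verify numerically, using toy units (ħ = 1) and arbitrary E0 and A values:

```python
import cmath, math

# Toy units: hbar = 1, and some arbitrary E0 and A values.
hbar, E0, A = 1.0, 2.0, 0.5

def C1(t):
    # Sum of the two energy eigen-components.
    return 0.5 * cmath.exp(-1j*(E0 - A)*t/hbar) + 0.5 * cmath.exp(-1j*(E0 + A)*t/hbar)

def C2(t):
    # Difference of the two energy eigen-components.
    return 0.5 * cmath.exp(-1j*(E0 - A)*t/hbar) - 0.5 * cmath.exp(-1j*(E0 + A)*t/hbar)

t = 0.8  # arbitrary time
P1, P2 = abs(C1(t))**2, abs(C2(t))**2

# The probabilities are cos² and sin² of (A/ħ)·t, and they add up to one.
assert abs(P1 - math.cos(A*t/hbar)**2) < 1e-12
assert abs(P2 - math.sin(A*t/hbar)**2) < 1e-12
assert abs(P1 + P2 - 1.0) < 1e-12
```

Note that E0 drops out of the probabilities entirely: it only contributes a common phase factor, whose absolute square is one.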

The graph above is actually tricky to interpret, as it assumes that we know what state the molecule starts out in at t = 0. This assumption is tricky because we usually do not know that: we have to make some observation which, curiously enough, will always yield one of the two states—nothing in-between. Or, else, we can use a state selector—an inhomogeneous electric field which will separate the ammonia molecules according to their state. It’s a weird thing really, and it summarizes all of the ‘craziness’ of quantum mechanics: as long as we don’t measure anything – by applying that force field – our molecule is in some kind of abstract state, which mixes the two base states. But when we do make the measurement, always along some specific direction (which we usually take to be the z-direction in our reference frame), we’ll always find the molecule is either ‘up’ or, else, ‘down’. We never measure it as something in-between.

Personally, I like to think the measurement apparatus – I am talking about the electric field here – causes the nitrogen atom to sort of ‘snap into place’. However, physicists use more precise language here: they would say that the electric field results in the two positions having very different energy levels (E0 + εμ and E0 − εμ, to be precise) and that, as a result, the amplitude for the nitrogen atom to flip back and forth has little effect. Now how do we model that?

#### The Hamiltonian equations

I shouldn’t be using the term above, as it usually refers to a set of differential equations describing classical systems. However, I’ll also use it for the quantum-mechanical analog, which amounts to the following for our simple two-state example above: Don’t panic. We’ll explain. The equations above are all the same but use different formats: the first block writes them as a set of equations, while the second uses the matrix notation, which involves the use of that rather infamous Hamiltonian matrix, which we denote by H = [Hij]. Now, we’ve postponed a lot of technical stuff, so… Well… We can’t avoid it any longer. Let’s look at those Hamiltonian coefficients Hij first. Where do they come from?

You’ll remember we thought of time as some kind of apparatus, with particles entering in some initial state φ and coming out in some final state χ. Both are to be described in terms of our base states. To be precise, we associated the (complex) coefficients C1 and C2 with |φ〉 and D1 and D2 with |χ〉. However, the χ state is a final state, so we have to write it as 〈χ| = |χ〉† (read: chi dagger). The dagger symbol tells us we need to take the conjugate transpose of |χ〉, so the column vector becomes a row vector, and its coefficients are the complex conjugate of D1 and D2, which we denote as D1* and D2*. We combined this with Dirac’s bra-ket notation for the amplitude to go from one base state to another, as a function in time (or a function of time, I should say):

Uij(t + Δt, t) = 〈i|U(t + Δt, t)|j〉

This allowed us to write the amplitude 〈χ|U|φ〉 as a matrix equation. To see what it means, you should write it all out:

〈χ|U(t + Δt, t)|φ〉 = D1*·(U11(t + Δt, t)·C1 + U12(t + Δt, t)·C2) + D2*·(U21(t + Δt, t)·C1 + U22(t + Δt, t)·C2)

= D1*·U11(t + Δt, t)·C1 + D1*·U12(t + Δt, t)·C2 + D2*·U21(t + Δt, t)·C1 + D2*·U22(t + Δt, t)·C2
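The expansion can be verified with a quick numerical sketch, using hypothetical values for the C, D and Uij amplitudes:

```python
# Hypothetical amplitude values, just to check the expansion of 〈χ|U|φ〉.
C = [0.6 + 0.2j, 0.3 - 0.7j]             # coefficients of |φ〉
D = [0.1 + 0.5j, 0.8 - 0.1j]             # coefficients of |χ〉
U = [[0.9 + 0.1j, 0.2j],
     [0.1j, 0.9 - 0.1j]]                 # some 2×2 matrix of amplitudes

# Compact (double-sum) form: Σi Σj Di*·Uij·Cj
amp_sum = sum(D[i].conjugate() * U[i][j] * C[j]
              for i in range(2) for j in range(2))

# Written out term by term, as in the text:
amp_long = (D[0].conjugate() * (U[0][0]*C[0] + U[0][1]*C[1])
            + D[1].conjugate() * (U[1][0]*C[0] + U[1][1]*C[1]))

# Both give the same single complex number.
assert abs(amp_sum - amp_long) < 1e-12
```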

It’s a horrendous expression, but it’s a complex-valued amplitude or, quite simply, a complex number. So this is not nonsensical. We can now take the next step, and that’s to go from those Uij amplitudes to the Hij amplitudes of the Hamiltonian matrix. The key is to consider the following: if Δt goes to zero, nothing happens, so we write: Uij = 〈i|U|j〉 → 〈i|j〉 = δij for Δt → 0, with δij = 1 if i = j, and δij = 0 if i ≠ j. We then assume that, for small Δt, those Uij amplitudes should differ from δij (i.e. from 1 or 0) by amounts that are proportional to Δt. So we write:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt

We then equated those Kij(t) factors with −(i/ħ)·Hij(t), and we were done: Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt. […] Well… I’ll show you how we get those differential equations in a moment. Let’s pause here for a while to see what’s going on really.

You’ll probably remember how one can mathematically ‘construct’ the complex exponential e^(i·θ) by using the linear approximation e^(i·ε) ≈ 1 + i·ε for infinitesimally small values of ε. In case you forgot: we basically used the definition of the derivative of the real exponential e^ε for ε going to zero. So we’ve got something similar here for U11(t + Δt, t) = 1 − i·[H11(t)/ħ]·Δt and U22(t + Δt, t) = 1 − i·[H22(t)/ħ]·Δt. Just replace the ε in e^(i·ε) ≈ 1 + i·ε by ε = −(E0/ħ)·Δt. Indeed, we know that H11 = H22 = E0, and E0/ħ is, of course, just the energy measured in (reduced) Planck units, i.e. in its natural unit. Hence, if our ammonia molecule is in one of the two base states, we start at θ = 0 and then we just start moving along the unit circle, clockwise, because of the minus sign in e^(−i·θ). Let’s write it out:

U11(t + Δt, t) = 1 − i·[H11(t)/ħ]·Δt = 1 − i·[E0/ħ]·Δt and

U22(t + Δt, t) = 1 − i·[H22(t)/ħ]·Δt = 1 − i·[E0/ħ]·Δt

But what about U12 and U21? Is there a similar interpretation? Let’s write those equations down and think about them:

U12(t + Δt, t) = 0 − i·[H12(t)/ħ]·Δt = 0 + i·[A/ħ]·Δt and

U21(t + Δt, t) = 0 − i·[H21(t)/ħ]·Δt = 0 + i·[A/ħ]·Δt

We can visualize this as follows. Let’s remind ourselves of the definition of the derivative of a function by looking at the illustration below: the f(x0) value in this illustration corresponds to Uij(t, t), obviously. So now things make somewhat more sense: U11(t, t) = U22(t, t) = 1, obviously, and U12(t, t) = U21(t, t) = 0. We then add the ΔUij(t + Δt, t) to Uij(t, t). Hence, we can, and probably should, think of those Kij(t) coefficients as the derivative of the Uij functions with respect to time, so we can write something like this: Kij(t) = dUij/dt. These derivatives are pure imaginary numbers. That does not mean that the Uij(t + Δt, t) functions are purely imaginary: U11(t + Δt, t) and U22(t + Δt, t) can be approximated by 1 − i·[E0/ħ]·Δt for small Δt, so they do have a real part. In contrast, U12(t + Δt, t) and U21(t + Δt, t) are, effectively, purely imaginary (for small Δt, that is).
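To see that the Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt approximation effectively generates those C1(t) and C2(t) functions, we can iterate it numerically with a small Δt, using toy units (ħ = 1) and arbitrary E0 and A values, and compare the result with the exact cos²[(A/ħ)·t] probability:

```python
import math

# Toy units (hbar = 1). Hamiltonian of the two-state system:
# H11 = H22 = E0 and H12 = H21 = −A, as in the text.
hbar, E0, A = 1.0, 2.0, 0.5
T = 1.0
steps = 10000
dt = T / steps

C = [1.0 + 0j, 0.0 + 0j]      # start in state 1 at t = 0
for _ in range(steps):
    # One Euler step: Cij(t + Δt) = Σj [δij − (i/ħ)·Hij·Δt]·Cj(t)
    C1_new = C[0] - (1j / hbar) * (E0 * C[0] - A * C[1]) * dt
    C2_new = C[1] - (1j / hbar) * (-A * C[0] + E0 * C[1]) * dt
    C = [C1_new, C2_new]

P1 = abs(C[0])**2
# Exact solution: P1(t) = cos²((A/ħ)·t)
assert abs(P1 - math.cos(A * T / hbar)**2) < 1e-2
```

The small residual error is the Euler-approximation error, which vanishes as Δt goes to zero.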

I can’t help thinking these formulas reflect a deep and beautiful geometry, but its meaning escapes me so far. 😦 When everything is said and done, none of the reflections above makes things much more intuitive: these wavefunctions remain as mysterious as ever.

I keep staring at those P1(t) and P2(t) functions, and the C1(t) and C2(t) functions that ‘generate’ them, so to speak. They’re not independent, obviously. In fact, they’re exactly the same, except for a phase difference, which corresponds to the phase difference between the sine and the cosine. So it’s all one reality, really: it can all be described in one single functional form. I hope things become more obvious as I move forward.

Post scriptum: I promised I’d show you how to get those differential equations but… Well… I’ve done that in other posts, so I’ll refer you to one of those. Sorry for not repeating myself. 🙂