N-state systems

On the 10th of December, last year, I wrote that my next post would generalize the results we got for two-state systems. That didn’t happen: I didn’t write the ‘next post’—not till now, that is. No. Instead, I started digging—as you can see from all the posts in-between this one and the 10 December piece. And you may also want to take a look at my new Essentials page. 🙂 In any case, it is now time to get back to Feynman’s Lectures on quantum mechanics. Remember where we are: halfway, really. The first half was all about stuff that doesn’t move in space. The second half, i.e. all that we’re going to study now, is about… Well… You guessed it. 🙂 That’s going to be about stuff that does move in space. To see how that works, we first need to generalize the two-state model to an N-state model. Let’s do it.

You’ll remember that, in quantum mechanics, we describe stuff by saying it’s in some state which, as long as we don’t measure in what state exactly, is written as some linear combination of a set of base states. [And please do think about what I highlight here: some state, measure, exactly. It all matters. Think about it!] The coefficients in that linear combination are complex-valued functions, which we referred to as wavefunctions, or (probability) amplitudes. To make a long story short, we wrote:

|ψ〉 = Σi |i〉·Ci = Σi |i〉·〈i|ψ〉

These Ci coefficients are a shorthand for 〈 i | ψ(t) 〉 amplitudes. As such, they give us the amplitude of the system to be in state i as a function of time. Their dynamics (i.e. the way they evolve in time) are governed by the Hamiltonian equations, i.e.:

iħ·d[Ci(t)]/dt = Σj Hij·Cj(t)   (i, j = 1, 2,…, N)

The Hij coefficients in this set of equations are organized in the Hamiltonian matrix, which Feynman refers to as the energy matrix, because these coefficients do represent energies indeed. So we applied all of this to two-state systems and, hence, things should not be too hard now, because it’s all the same, except that we have N base states now, instead of just two.

So we have an N×N matrix whose diagonal elements Hii are real numbers. The non-diagonal elements may be complex numbers but, if they are, the following rule applies: Hij* = Hji. [In case you wonder: that’s got to do with the fact that we can write any final 〈χ| or 〈φ| state as the conjugate transpose of the initial |χ〉 or |φ〉 state, so we can write: 〈χ| = |χ〉†, or 〈φ| = |φ〉†.]

As usual, the trick is to find those N Ci(t) functions: we do so by solving that set of N equations, assuming we know those Hamiltonian coefficients. [As you may suspect, the real challenge is to determine the Hamiltonian, which we assume to be given here. But… Well… You first need to learn how to model stuff. Once you get your degree, you’ll be paid to actually solve problems using those models. 🙂 ] We know the complex exponential is a functional form that usually does that trick. Hence, generalizing the results from our analysis of two-state systems once more, the following general solution is suggested:

Ci(t) = ai·e−i·(E/ħ)·t

Note that we introduce only one E variable here, but N ai coefficients, which may be real- or complex-valued. Indeed, my examples – see my previous posts – often involved real coefficients, but that’s not necessarily the case. Think of the C2(t) = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t] function describing one of the two base state amplitudes for the ammonia molecule—for example. 🙂

Now, that proposed general solution allows us to calculate the derivatives in our Hamiltonian equations (i.e. the d[Ci(t)]/dt functions) as follows:

d[Ci(t)]/dt = −i·(E/ħ)·ai·e−i·(E/ħ)·t

You can now double-check that the set of equations reduces to the following:

E·ai = Σj Hij·aj

Please do write it out: because we have one E only, the e−i·(E/ħ)·t factor is common to all terms, and so we can cancel it. The other stuff is plain arithmetic: i·(−i) = −i2 = 1, and the ħ constants cancel out too. So there we are: we’ve got a very simple set of N equations here, with N unknowns (i.e. the a1, a2,…, aN coefficients, to be specific). We can re-write this system as:

Σj [Hij − δij·E]·aj = 0

The δij here is the Kronecker delta, of course (it’s one for i = j and zero for i ≠ j), and we are now looking at a homogeneous system of equations here, i.e. a set of linear equations in which all the constant terms are zero. You should remember it from your high school math course. To be specific, you’d write it as Ax = 0, with A the coefficient matrix. The trivial solution is the zero solution, of course: all a1, a2,…, aN coefficients are zero. But we don’t want the trivial solution. Now, as Feynman points out – tongue-in-cheek, really – we actually have to be lucky to have a non-trivial solution. Indeed, you may or may not remember that the zero solution is the only solution if the determinant of the coefficient matrix is not equal to zero. So we only have a non-trivial solution if the determinant of A is equal to zero, i.e. if Det[A] = 0, which means A has to be a so-called singular matrix. You’ll also remember that, in that case, we get an infinite number of solutions, to which we can apply the so-called superposition principle: if x and y are two solutions to the homogeneous set of equations Ax = 0, then any linear combination of x and y is also a solution. I wrote an addendum to this post (just scroll down and you’ll find it), which explains what systems of linear equations are all about, so I’ll refer you to that in case you’d need more detail here. I need to continue our story here. The bottom line is: the [Hij–δijE] matrix needs to be singular for the system to have meaningful solutions, so we will only have a non-trivial solution for those values of E for which

Det[Hij–δijE] = 0

Let’s spell it out. The condition above is the same as writing:

| H11 − E      H12       …      H1N      |
| H21       H22 − E      …      H2N      |
| …            …         …      …        | = 0
| HN1          HN2       …      HNN − E  |

So far, so good. What’s next? Well… The formula for the determinant is the following:

Det[A] = Σσ sgn(σ)·a1σ(1)·a2σ(2)·…·aNσ(N), with the sum running over all permutations σ of {1, 2,…, N}, and sgn(σ) equal to +1 or −1 depending on whether the permutation is even or odd

That looks like a monster, and it is, but, in essence, what we’ve got here is an expression for the determinant in terms of the permutations of the matrix elements. This is not a math course so I’ll just refer you to Wikipedia for a detailed explanation of this formula for the determinant. The bottom line is: if we write it all out, then Det[Hij–δijE] is just an Nth order polynomial in E. In other words: it’s just a sum of products with powers of E up to EN, and so our Det[Hij–δijE] = 0 condition amounts to equating that polynomial with zero.
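
Of course, nobody expands that determinant by hand for a large N. Just to make the idea tangible, here’s a small numerical sketch (my addition, with made-up matrix values): numpy gives us the coefficients of that Nth order polynomial, and its roots indeed coincide with the eigenvalues we’ll encounter in a moment.

```python
import numpy as np

# A 3×3 Hermitian 'energy matrix': real on the diagonal, Hij* = Hji off it
H = np.array([[2.0, 1.0 - 1.0j, 0.0],
              [1.0 + 1.0j, 3.0, 0.5j],
              [0.0, -0.5j, 1.0]])

coeffs = np.poly(H)            # coefficients of the characteristic polynomial Det[H − E·I]
roots = np.roots(coeffs)       # its N roots: the energies E_I, E_II, ..., E_N

print(np.sort(roots.real))     # the roots are real, because H is Hermitian
print(np.linalg.eigvalsh(H))   # same numbers, computed directly
```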

In general, we’ll have N roots, but – sorry you need to remember so much from your high school math classes here – some of them may be multiple roots (i.e. two or more roots may be equal). We’ll call those roots—you guessed it:

EI, EII,…, En,…, EN

Note I am following Feynman’s exposé, and so he uses n, rather than k, to denote the nth Roman numeral (as opposed to the Arabic numerals we use for the base states). Now, I know your brain is near the melting point… But… Well… We’re not done yet. Just hang on. For each of these values E = EI, EII,…, En,…, EN, we have an associated set of solutions ai. As Feynman puts it: you get a set which belongs to En. In order to not forget that, for each En, we’re talking a set of N coefficients ai (i = 1, 2,…, N), we denote that set not by ai but by ai(n). So that’s why we use boldface for our index n: it’s special—and not only because it denotes a Roman numeral! It’s just one of Feynman’s many meaningful conventions.

Now remember that Ci(t) = ai·e−i·(E/ħ)·t formula. For each set of ai(n) coefficients, we’ll have a set of Ci(n) functions which, naturally, we can write as:

Ci(n) = ai(n)·e−i·(En/ħ)·t

So far, so good. We have N ai(n) coefficients and N Ci(n) functions. That’s easy enough to understand. Now we’ll also define a set of N new vectors, which we’ll write as |n〉, and which we’ll refer to as the state vectors that describe the configuration of the definite energy states En (n = I, II,…, N). [Just breathe right now: I’ll (try to) explain this in a moment.] Moreover, we’ll write our set of coefficients ai(n) as 〈i|n〉. Again, the boldface n reminds us we’re talking a set of N complex numbers here. So we re-write that set of N Ci(n) functions as follows:

Ci(n) = 〈i|n〉·e−i·(En/ħ)·t

We can expand this as follows:

Ci(n) = 〈 i | ψn(t) 〉 = 〈 i | n 〉·e−i·(En/ħ)·t

which, of course, implies that:

| ψn(t) 〉 = |n〉·e−i·(En/ħ)·t

So now you may understand Feynman’s description of those |n〉 vectors somewhat better. As he puts it:

“The |n〉 vectors – of which there are N – are the state vectors that describe the configuration of the definite energy states En (n = I, II,… N), but have the time dependence factored out.”

Hmm… I know. This stuff is hard to swallow, but we’re not done yet: if your brain hasn’t melted yet, it may do so now. You’ll remember we talked about eigenvalues and eigenvectors in our post on the math behind the quantum-mechanical model of our ammonia molecule. Well… We can generalize the results we got there:

  1. The energies EI, EII,…, En,…, EN are the eigenvalues of the Hamiltonian matrix H.
  2. The state vectors |n〉 that are associated with each energy En, i.e. the set of vectors |n〉, are the corresponding eigenstates.
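
These two statements are easy to verify numerically. Here’s a minimal sketch (my addition, not Feynman’s), using the two-state ammonia Hamiltonian that we’ll re-derive below (H11 = H22 = E0, H12 = H21 = −A), with made-up values for E0 and A:

```python
import numpy as np

E0, A = 1.0, 0.2                      # arbitrary illustrative values
H = np.array([[E0, -A],
              [-A, E0]])

energies, states = np.linalg.eigh(H)  # eigenvalues E_n, eigenstates |n> as columns
print(energies)                       # -> [0.8, 1.2], i.e. E0 − A and E0 + A
print(states.conj().T @ states)       # -> the identity matrix: <n|m> = delta_nm
```

Note how the eigenstates come out orthonormal automatically: that’s the 〈n|m〉 = δnm property we’ll get back to in a moment.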

So… Well… That’s it! We’re done! This is all there is to it. I know it’s a lot but… Well… We’ve got a general description of N-state systems here, and so that’s great!

Let me make some concluding remarks though.

First, note the following property: if we let the Hamiltonian matrix act on one of those state vectors |n〉, the result is just En times the same state. We write:

H|n〉 = En·|n〉

We’re writing nothing new here really: it’s just a consequence of the definition of eigenstates and eigenvalues. The more interesting thing is the following. When describing our two-state systems, we saw we could use the states that we associated with the EI and EII energy levels as a new base set. The same is true for N-state systems: the state vectors |n〉 can also be used as a base set. Of course, for that to be the case, all of the states must be orthogonal, meaning that for any two of them, say |n〉 and |m〉, the following equation must hold:

〈n|m〉 = 0

Feynman shows this will be true automatically if all the energies are different. If they’re not – i.e. if our polynomial in E would accidentally have two (or more) roots with the same energy – then things are more complicated. However, as Feynman points out, this problem can be solved by ‘cooking up’ two new states that do have the same energy but are also orthogonal. I’ll refer you to him for the details, as well as for the proof of that 〈n|m〉 = 0 equation.

Finally, you should also note that – because of the homogeneity principle – it’s possible to multiply the N ai(n) coefficients by a suitable factor so that all the states are normalized, by which we mean:

〈n|n〉 = 1

Well… We’re done! For today, at least! 🙂

Addendum on Systems of Linear Equations

It’s probably good to briefly remind you of your high school math class on systems of linear equations. First note the difference between homogeneous and non-homogeneous equations. Non-homogeneous equations have a non-zero constant term. The following three equations are an example of a non-homogeneous set of equations:

  • 3x + 2y − z = 1
  • 2x − 2y + 4z = −2
  • −x + y/2 − z = 0

We have a point solution here: (x, y, z) = (1, −2, −2). The geometry of the situation is something like this:

[Figure: three planes intersecting in a single point]
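
By the way, you can verify that point solution with any linear solver. A one-line check with numpy (my addition):

```python
import numpy as np

A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

print(np.linalg.solve(A, b))   # -> [ 1. -2. -2.], i.e. (x, y, z) = (1, −2, −2)
```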

One of the equations may be a linear combination of the two others. In that case, that equation can be removed without affecting the solution set. For the three-dimensional case, we get a line solution, as illustrated below.

[Figure: two planes intersecting in a line]

Homogeneous and non-homogeneous sets of linear equations are closely related. If we write a homogeneous set as Ax = 0, then a non-homogeneous set of equations can be written as Ax = b. More in particular, the solution set for Ax = b is going to be a translation of the solution set for Ax = 0. We can write that more formally as follows:

If p is any specific solution to the linear system Ax = b, then the entire solution set can be described as {p + v | v is any solution to Ax = 0}.

The solution set for a homogeneous system is a linear subspace. In the example above, which had three variables and, hence, for which the vector space was three-dimensional, there were three possibilities: a point, line or plane solution. All are (linear) subspaces—although you’d want to drop the term ‘linear’ for the point solution, of course. 🙂 Formally, a subspace is defined as follows: if V is a vector space, then W is a subspace if and only if:

  1. The zero vector (i.e. 0) is in W.
  2. If x is an element of W, then any scalar multiple ax will be an element of W too (this is often referred to as the property of homogeneity).
  3. If x and y are elements of W, then the sum of x and y (i.e. x + y) will be an element of W too (this is referred to as the property of additivity).

As you can see, the superposition principle actually combines the properties of homogeneity and additivity: if x and y are solutions, then any linear combination of them will be a solution too.

The solution set for a non-homogeneous system of equations is referred to as a flat. It’s a subset too, so it’s like a subspace, except that it need not pass through the origin. Again, the flats in two-dimensional space are points and lines, while in three-dimensional space we have points, lines and planes. In general, we’ll have flats, and subspaces, of every dimension from 0 to n−1 in n-dimensional space.

OK. That’s clear enough, but what is all that talk about eigenstates and eigenvalues about? Mathematically, we define eigenvectors, aka characteristic vectors, as follows:

  • The non-zero vector v is an eigenvector of a square matrix A if Av is a scalar multiple of v, i.e. Av = λv.
  • The associated scalar λ is known as the eigenvalue (or characteristic value) associated with the eigenvector v.
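
The definition translates into a one-line numerical check (my addition, with a made-up symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):   # the eigenvectors are the columns
    print(np.allclose(A @ v, lam * v))            # A·v = λ·v: True for each pair
```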

Now, in physics, we talk states, rather than vectors—although our states are vectors, of course. So we’ll call them eigenstates, rather than eigenvectors. But the principle is the same, really. Now, I won’t copy what you can find elsewhere—especially not in an addendum to a post, like this one. So let me just refer you elsewhere. Paul’s Online Math Notes, for example, are quite good on this—especially in the context of solving a set of differential equations, which is what we are doing here. And you can also find a more general treatment in the Wikipedia article on eigenvalues and eigenvectors which, while being general, highlights their particular use in quantum math.


The Hamiltonian revisited

I want to come back to something I mentioned in a previous post: when looking at that formula for those Uij amplitudes—which I’ll jot down once more:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt ⇔ Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt

—I noted that it resembles the general y(t + Δt) = y(t) + Δy = y(t) + (dy/dt)·Δt formula. So we can look at our Kij(t) function as being equal to the time derivative of the Uij(t + Δt, t) function. I want to revisit that here, as it triggers a whole range of questions, which may or may not help us understand quantum math somewhat more intuitively. Let’s quickly sum up what we’ve learned so far: it’s basically all about quantum-mechanical stuff that does not move in space. Hence, the x in our wavefunction ψ(x, t) is some fixed point in space and, therefore, our elementary wavefunction—which we wrote as:

ψ(x, t) = a·e−i·θ = a·e−i·(ω·t − k∙x) = a·e−i·[(E/ħ)·t − (p/ħ)∙x]

—reduces to ψ(t) = a·e−i·ω·t = a·e−i·(E/ħ)·t.

Unlike what you might think, we’re not equating x with zero here. No. It’s the p = m·v factor that becomes zero, because our reference frame is that of the system that we’re looking at, so its velocity is zero: it doesn’t move in our reference frame. That immediately answers an obvious question: does our wavefunction look any different when choosing another reference frame? The answer is obviously: yes! It surely matters if the system moves or not, and it also matters how fast it moves, because it changes the energy and momentum values from E and p to some E’ and p’. However, we’ll not consider such complications here: that’s the realm of relativistic quantum mechanics. Let’s start with the simplest of situations.

A simple two-state system

One of the simplest examples of a quantum-mechanical system that does not move in space, is the textbook example of the ammonia molecule. The picture was as simple as the one below: an ammonia molecule consists of one nitrogen atom and three hydrogen atoms, and the nitrogen atom could be ‘up’ or ‘down’ with regard to the motion of the NH3 molecule around its axis of symmetry, as shown below.

[Figure: the two states of the ammonia molecule: the nitrogen atom ‘up’ or ‘down’ with respect to the plane of the hydrogen atoms]

It’s important to note that this ‘up’ or ‘down’ direction is, once again, defined with respect to the reference frame of the system itself. The motion of the molecule around its axis of symmetry is referred to as its spin—a term that’s used in a variety of contexts and, therefore, is annoyingly ambiguous. When we use the term ‘spin’ (up or down) to describe an electron state, for example, we’d associate it with the direction of its magnetic moment. Such magnetic moment arises from the fact that, for all practical purposes, we can think of an electron as a spinning electric charge. Now, while our ammonia molecule is electrically neutral, as a whole, the two states are actually associated with opposite electric dipole moments, as illustrated below. Hence, when we’d apply an electric field (denoted as ε) below, the two states are effectively associated with different energy levels, which we wrote as E0 ± εμ.

[Figure: the two states of the ammonia molecule in an electric field ε, with opposite electric dipole moments and energy levels E0 ± εμ]

But we’re getting ahead of ourselves here. Let’s revert to the system in free space, i.e. without an electromagnetic force field—or, what amounts to saying the same, without potential. Now, the ammonia molecule is a quantum-mechanical system, and so there is some amplitude for the nitrogen atom to tunnel through the plane of hydrogens. I told you before that this is the key to understanding quantum mechanics really: there is an energy barrier there and, classically, the nitrogen atom should not sneak across. But it does. It’s like it can borrow some energy – which we denote by A – to penetrate the energy barrier.

In quantum mechanics, the dynamics of this system are modeled using a set of two differential equations. These differential equations are really the equivalent of Newton’s classical Law of Motion (I am referring to the F = m·(dv/dt) = m·a equation here) in quantum mechanics, so I’ll have to explain them—which is not so easy as explaining Newton’s Law, because we’re talking complex-valued functions, but… Well… Let me first insert the solution of that set of differential equations:

[Graph: the probabilities P1(t) and P2(t), oscillating sinusoidally between 0 and 1, with time measured in units of ħ/A]

This graph shows how the probability of the nitrogen atom (or the ammonia molecule itself) being in state 1 (i.e. ‘up’) or, else, in state 2 (i.e. ‘down’), varies sinusoidally in time. Let me also give you the equations for the amplitudes to be in state 1 or 2 respectively:

  1. C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t + (1/2)·e−(i/ħ)·(E0 + A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
  2. C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t − (1/2)·e−(i/ħ)·(E0 + A)·t = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

So the P1(t) and P2(t) probabilities above are just the absolute square of these C1(t) and C2(t) functions. So as to help you understand what’s going on here, let me quickly insert the following technical remarks:

  • In case you wonder how we go from those exponentials to a simple sine and cosine factor, remember that the sum of complex conjugates eiθ + e−iθ reduces to 2·cosθ, while eiθ − e−iθ reduces to 2·i·sinθ.
  • As for how to take the absolute square… Well… I shouldn’t be explaining that here, but you should be able to work that out remembering that (i) |a·b·c|2 = |a|2·|b|2·|c|2; (ii) |eiθ|2 = |e−iθ|2 = 12 = 1 (for any value of θ); and (iii) |i|2 = 1.
  • As for the periodicity of both probability functions, note that the period of the squared sine and cosine functions is equal to π. Hence, the argument of our sine and cosine function will be equal to 0, π, 2π, 3π etcetera if (A/ħ)·t = 0, π, 2π, 3π etcetera, i.e. if t = 0·ħ/A, π·ħ/A, 2π·ħ/A, 3π·ħ/A etcetera. So that’s why we measure time in units of ħ/A above.
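
If you’d like to verify those remarks numerically, here’s a small sketch (my addition), with units chosen so that ħ = 1 and an arbitrary value for A:

```python
import numpy as np

hbar, E0, A = 1.0, 1.0, 0.3
t = np.linspace(0.0, 3 * np.pi * hbar / A, 7)   # 0, π/2, π, ... in units of ħ/A

C1 = np.exp(-1j * E0 * t / hbar) * np.cos(A * t / hbar)
C2 = 1j * np.exp(-1j * E0 * t / hbar) * np.sin(A * t / hbar)

P1, P2 = np.abs(C1)**2, np.abs(C2)**2
print(np.allclose(P1 + P2, 1.0))   # True: the probabilities add up to one
print(P1.round(3))                 # -> [1. 0. 1. 0. 1. 0. 1.]: period π·ħ/A
```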

The graph above is actually tricky to interpret, as it assumes that we know what state the molecule starts out in at t = 0. This assumption is tricky because we usually do not know that: we have to make some observation which, curiously enough, will always yield one of the two states—nothing in-between. Or, else, we can use a state selector—an inhomogeneous electric field which will separate the ammonia molecules according to their state. It’s a weird thing really, and it summarizes all of the ‘craziness’ of quantum-mechanics: as long as we don’t measure anything – by applying that force field – our molecule is in some kind of abstract state, which mixes the two base states. But when we do make the measurement, always along some specific direction (which we usually take to be the z-direction in our reference frame), we’ll always find the molecule is either ‘up’ or, else, ‘down’. We never measure it as something in-between. Personally, I like to think the measurement apparatus – I am talking the electric field here – causes the nitrogen atom to sort of ‘snap into place’. However, physicists use more precise language here: they would say that the electric field does result in the two positions having very different energy levels (E0 + εμ and E0 – εμ, to be precise) and that, as a result, the amplitude for the nitrogen atom to flip back and forth has little effect. Now how do we model that?

The Hamiltonian equations

I shouldn’t be using the term above, as it usually refers to a set of differential equations describing classical systems. However, I’ll also use it for the quantum-mechanical analog, which amounts to the following for our simple two-state example above:

iħ·[dC1/dt] = E0·C1 − A·C2
iħ·[dC2/dt] = −A·C1 + E0·C2

⇔ iħ·dC/dt = H·C, with C = (C1, C2) and H = [Hij] = [E0 −A; −A E0]

Don’t panic. We’ll explain. The equations above are all the same but use different formats: the first block writes them as a set of equations, while the second uses the matrix notation, which involves the use of that rather infamous Hamiltonian matrix, which we denote by H = [Hij]. Now, we’ve postponed a lot of technical stuff, so… Well… We can’t avoid it any longer. Let’s look at those Hamiltonian coefficients Hij first. Where do they come from?

You’ll remember we thought of the U(t + Δt, t) operator as some kind of apparatus, with particles entering in some initial state φ and coming out in some final state χ. Both are to be described in terms of our base states. To be precise, we associated the (complex) coefficients C1 and C2 with |φ〉 and D1 and D2 with |χ〉. However, the χ state is a final state, so we have to write it as 〈χ| = |χ〉† (read: chi dagger). The dagger symbol tells us we need to take the conjugate transpose of |χ〉, so the column vector becomes a row vector, and its coefficients are the complex conjugates of D1 and D2, which we denote as D1* and D2*. We combined this with Dirac’s bra-ket notation for the amplitude to go from one base state to another, as a function of time:

Uij(t + Δt, t) = 〈i|U(t + Δt, t)|j〉

This allowed us to write the following matrix equation:

〈χ|U(t + Δt, t)|φ〉 = [D1* D2*]·[U11 U12; U21 U22]·[C1; C2]

To see what it means, you should write it all out:

〈χ|U(t + Δt, t)|φ〉 = D1*·(U11(t + Δt, t)·C1 + U12(t + Δt, t)·C2) + D2*·(U21(t + Δt, t)·C1 + U22(t + Δt, t)·C2)

= D1*·U11(t + Δt, t)·C1 + D1*·U12(t + Δt, t)·C2 + D2*·U21(t + Δt, t)·C1 + D2*·U22(t + Δt, t)·C2
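
In matrix language, that double sum is nothing but a row vector times a matrix times a column vector. A quick numpy sketch (my addition, with made-up and not necessarily physical numbers):

```python
import numpy as np

C = np.array([0.6, 0.8j])               # coefficients of the initial state |φ>
D = np.array([1.0, 1.0j]) / np.sqrt(2)  # coefficients of the final state |χ>
U = np.array([[0.9, 0.1j],
              [0.1j, 0.9]])             # some U(t + Δt, t), for illustration only

amplitude = D.conj() @ U @ C            # <χ|U(t + Δt, t)|φ>: one complex number
print(amplitude)
```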

That written-out sum is a horrendous expression, but it’s a complex-valued amplitude or, quite simply, a complex number. So this is not nonsensical. We can now take the next step, and that’s to go from those Uij amplitudes to the Hij amplitudes of the Hamiltonian matrix. The key is to consider the following: if Δt goes to zero, nothing happens, so we write: Uij = 〈i|U|j〉 → 〈i|j〉 = δij for Δt → 0, with δij = 1 if i = j, and δij = 0 if i ≠ j. We then assume that, for small Δt, those Uij amplitudes should differ from δij (i.e. from 1 or 0) by amounts that are proportional to Δt. So we write:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt

We then equated those Kij(t) factors with −(i/ħ)·Hij(t), and we were done: Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt. […] Well… I’ll show you how we get those differential equations in a moment. Let’s pause here for a while to see what’s really going on. You’ll probably remember how one can mathematically ‘construct’ the complex exponential eiθ by using the linear approximation eiε ≈ 1 + iε for infinitesimally small values of ε. In case you forgot, we basically used the definition of the derivative of the real exponential eε for ε going to zero:

d[eε]/dε = limΔε→0 (eε+Δε − eε)/Δε = eε, so that, near ε = 0, we have eε ≈ 1 + ε

So we’ve got something similar here for U11(t + Δt, t) = 1 − i·[H11(t)/ħ]·Δt and U22(t + Δt, t) = 1 − i·[H22(t)/ħ]·Δt. Just replace the ε in eiε ≈ 1 + iε by ε = −(E0/ħ)·Δt. Indeed, we know that H11 = H22 = E0, and E0/ħ is, of course, just the energy measured in (reduced) Planck units, i.e. in its natural unit. Hence, if our ammonia molecule is in one of the two base states, we start at θ = 0 and then we just start moving on the unit circle, clockwise, because of the minus sign in e−iθ. Let’s write it out:

U11(t + Δt, t) = 1 − i·[H11(t)/ħ]·Δt = 1 − i·[E0/ħ]·Δt and

U22(t + Δt, t) = 1 − i·[H22(t)/ħ]·Δt = 1 − i·[E0/ħ]·Δt

But what about U12 and U21? Is there a similar interpretation? Let’s write those equations down and think about them:

U12(t + Δt, t) = 0 − i·[H12(t)/ħ]·Δt = 0 + i·[A/ħ]·Δt and

U21(t + Δt, t) = 0 − i·[H21(t)/ħ]·Δt = 0 + i·[A/ħ]·Δt

We can visualize this as follows:

[Figure: the Uij amplitudes for small Δt, visualized on the unit circle]

Let’s remind ourselves of the definition of the derivative of a function by looking at the illustration below:

[Figure: the derivative as the slope of the tangent line at the point (x0, f(x0))]

The f(x0) value in this illustration corresponds to the Uij(t, t), obviously. So now things make somewhat more sense: U11(t, t) = U22(t, t) = 1, obviously, and U12(t, t) = U21(t, t) = 0. We then add the ΔUij(t + Δt, t) to Uij(t, t). Hence, we can, and probably should, think of those Kij(t) coefficients as the derivative of the Uij functions with respect to time. So we can write something like this:

Kij(t) = dUij/dt = −(i/ħ)·Hij(t) ⇔ Hij(t) = iħ·dUij/dt

These derivatives are pure imaginary numbers. That does not mean that the Uij(t + Δt, t) functions are purely imaginary: U11(t + Δt, t) and U22(t + Δt, t) can be approximated by 1 − i·[E0/ħ]·Δt for small Δt, so they do have a real part. In contrast, U12(t + Δt, t) and U21(t + Δt, t) are, effectively, purely imaginary (for small Δt, that is).
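
We can make that ‘derivative’ picture very concrete. The sketch below (my addition, assuming scipy is available, with ħ = 1 and made-up values for E0 and A) composes a large number of Uij(t + Δt, t) = δij − (i/ħ)·Hij·Δt steps and compares the result with the exact evolution exp(−i·H·t/ħ):

```python
import numpy as np
from scipy.linalg import expm

hbar, E0, A = 1.0, 1.0, 0.3
H = np.array([[E0, -A],
              [-A, E0]])

t, N = 5.0, 1_000_000
dt = t / N
U_step = np.eye(2) - 1j * (H / hbar) * dt     # Uij(t + Δt, t) = δij − (i/ħ)·Hij·Δt
U_total = np.linalg.matrix_power(U_step, N)   # compose N small steps

print(np.round(U_total, 4))                   # ... and compare with the exact
print(np.round(expm(-1j * H * t / hbar), 4))  # evolution: virtually identical
```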

I can’t help thinking these formulas reflect a deep and beautiful geometry, but its meaning escapes me so far. 😦 When everything is said and done, none of the reflections above make things much more intuitive: these wavefunctions remain as mysterious as ever.

I keep staring at those P1(t) and P2(t) functions, and the C1(t) and C2(t) functions that ‘generate’ them, so to speak. They’re not independent, obviously. In fact, they’re exactly the same, except for a phase difference, which corresponds to the phase difference between the sine and cosine. So it’s all one reality, really: all of it can be described in one single functional form. I hope things become more obvious as I move forward. :-/

Post scriptum: I promised I’d show you how to get those differential equations but… Well… I’ve done that in other posts, so I’ll refer you to one of those. Sorry for not repeating myself. 🙂

The hydrogen molecule as a two-state system

My posts on the state transitions of an ammonia molecule weren’t easy, were they? So let’s try another two-state system. The illustration below shows an ionized hydrogen molecule in two possible states which, as usual, we’ll denote as |1〉 and |2〉. An ionized hydrogen molecule is an H2 molecule which lost an electron, so it’s two protons with one electron only, so we denote it as H2+. The difference between the two states is obvious: the electron is either with the first proton or with the second.

[Figure: the two base states of the H2+ ion: the electron near the first proton (state |1〉) or near the second proton (state |2〉)]

It’s an example taken from Feynman’s Lecture on two-state systems. The illustration itself raises a lot of questions, of course. The most obvious question is: how do we know which proton is which? We’re talking identical particles, right? Right. We should think of the proton spins! However, protons are fermions and, hence, they can’t be in the same state, so they must have opposite spins. Of course, now you’ll say: they’re not in the same state because they’re at different locations. Well… Now you’ve answered your own question. 🙂 However you want to look at this, the point is: we can distinguish both protons. Having said that, the reflections above raise other questions: what reference frame are we using? The answer is: it’s the reference frame of the system. We can mirror or rotate this image however we want – as I am doing below – but state |1〉 is state |1〉, and state |2〉 is state |2〉.

[Figure: the same two states, mirrored]

The other obvious question is more difficult. If you’ve read anything at all about quantum mechanics, you’ll ask: what about the in-between states? The electron is actually being shared by the two protons, isn’t it? That’s what chemical bonds are all about, no? Molecular orbitals rather than atomic orbitals, right? Right. That’s actually what this post is all about. We know that, in quantum mechanics, the actual state – or what we think is the actual state – is always expressed as some linear combination of so-called base states. We wrote:

|ψ〉 = |1〉C1 + |2〉C2 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉

In terms of representing what’s actually going on, we only have these probability functions: they say that, if we would take a measurement, the probability of finding the electron near the first or the second proton varies as shown below:

[Graph: the probabilities of finding the electron near the first or the second proton, oscillating between 0 and 1]

If the |1〉 and |2〉 states were actually representing two dual physical realities, the actual state of our H2+ molecule would be represented by some square or some pulse wave, as illustrated below. [We should be calling it a square function, but that term has been reserved for a function like y = x2.]

[Figure: a square wave and a pulse wave, with pulse duration τ and period T]

Of course, the symmetry of the situation implies that the average pulse duration τ would be one-half of the (average) period T, so we’d be talking a square wavefunction indeed. The two wavefunctions both qualify as probability density functions: the system is always in one state or the other, and the probabilities add up to one. But you’ll agree we prefer the smooth squared sine and cosine functions. To be precise, these smooth functions are:

  • P1(t) = |C1(t)|2 = cos2[(A/ħ)·t]
  • P2(t) = |C2(t)|2 = sin2[(A/ħ)·t]

So now we only need to explain A here (you know ħ already). But… Well… Why would we actually prefer those smooth functions? An irregular pulse function would seem to be doing a better job when it comes to modeling reality, wouldn’t it? The electron should be either here, or there. Isn’t that so?

Well… No. At least that’s what I am slowly starting to understand. These pure base states |1〉 and |2〉 are real and not real at the same time. They’re real, because it’s what we’ll get when we verify, or measure, the state, so our measurement will tell us that it’s here or there. There’s no in-between. [I still need to study weak measurement theory.] But then they are not real, because our molecule will never ever be in those two states, except for those ephemeral moments when (A/ħ)·t = n·π/2 (n = 0, 1, 2,…). So we’re really modeling uncertainty here and, while I am still exploring what that actually means, you should think of the electron as being everywhere really, but with an unequal density in space—sort of. 🙂

Now, we’ve learned we can describe the state of a system in terms of an alternative set of base states. We wrote: |ψ〉 = |I〉CI + |II〉CII = |I〉〈I|ψ〉 + |II〉〈II|ψ〉, with the CI, CII and C1, C2 coefficients being related to each other in exactly the same way as the associated base states, i.e. through a transformation matrix, which we summarized as:

[CI; CII] = [〈I|1〉 〈I|2〉; 〈II|1〉 〈II|2〉]·[C1; C2]

To be specific, the two sets of base states we’ve been working with so far were related as follows:

[〈I|1〉 〈I|2〉; 〈II|1〉 〈II|2〉] = [1/√2 −1/√2; 1/√2 1/√2]

So we’d write: |ψ〉 = |I〉CI + |II〉CII = |I〉〈I|ψ〉 + |II〉〈II|ψ〉 = |1〉C1 + |2〉C2 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉, and the CI, CII and C1, C2 coefficients would be related in exactly the same way as the base states:

CI = 〈I|ψ〉 = (1/√2)·(C1 − C2) and CII = 〈II|ψ〉 = (1/√2)·(C1 + C2)

[In case you’d want to review how that works, see my post on the Hamiltonian and base states.] Now, we cautioned that it’s difficult to try to interpret such base transformations – often referred to as a change in the representation or a different projection – geometrically. Indeed, we acknowledged that (base) states were very much like (base) vectors – from a mathematical point of view, that is – but, at the same time, we said that they were ‘objects’, really: elements in some Hilbert space, which means you can do the operations we’re doing here, i.e. adding and multiplying. Something like |I〉CI doesn’t mean all that much: CI is a complex number – and so we can work with numbers, of course, because we can visualize them – but |I〉 is a ‘base state’, and so what’s the meaning of that, and what’s the meaning of the |I〉CI or CI|I〉 product? I could babble about that, but it’s no use: a base state is a base state. It’s some state of the system that makes sense to us. In fact, it may be some state that does not make sense to us—in terms of the physics of the situation, that is – but then there will always be some mathematical sense to it because of that transformation matrix, which establishes a one-to-one relationship between all sets of base states.

You’ll say: why don’t you try to give it some kind of geometrical or whatever meaning? OK. Let’s try. State |1〉 is obviously like minus state |2〉 in space, so let’s see what happens when we equate |1〉 to 1 on the real axis, and |2〉 to −1. Geometrically, that corresponds to the (1, 0) and (−1, 0) points on the unit circle. So let’s multiply those points with (1/√2, −1/√2) and (1/√2, 1/√2) respectively. What do we get? Well… What product should we take? The dot product, the cross product, or the ordinary complex-number product? The dot product gives us a number, so we don’t want that. [If we’re going to represent base states by vectors, we want all states to be vectors.] A cross product will give us a vector that’s orthogonal to both vectors, so it’s a vector in ‘outer space’, so to say. We don’t want that, I must assume, and so we’re left with the complex-number product, which maps our (1, 0) and (−1, 0) vectors to (1/√2, −1/√2)·(1, 0) = (1/√2 − i/√2)·(1 + 0·i) = 1/√2 − i/√2 = (1/√2, −1/√2) and (1/√2, 1/√2)·(−1, 0) = (1/√2 + i/√2)·(−1 + 0·i) = −1/√2 − i/√2 = (−1/√2, −1/√2) respectively.

[Figure: the two original base states and the two transformed states on the unit circle]

What does this say? Nothing. Stuff like this only causes confusion. We had two base states that were ‘180 degrees’ apart, and now our new base states are only ’90 degrees’ apart. If we’d ‘transform’ the two new base states once more, they collapse into each other: (1/√2, −1/√2)·(1/√2, −1/√2) = (1/√2 − i/√2)2 = −i = (0, −1), and (1/√2, 1/√2)·(−1/√2, −1/√2) = −i as well. This is nonsense, of course. It’s got nothing to do with the angle we picked for our original set of base states: we could have separated our original set of base states by 90 degrees, or 45 degrees. It doesn’t matter. It’s the transformation itself: multiplying by (1/√2, −1/√2) amounts to a clockwise rotation by 45 degrees, while multiplying by (1/√2, 1/√2) amounts to the same, but counter-clockwise. So… Well… We should not try to think of our base vectors in any geometric way, because it just doesn’t make any sense. So let’s not waste time on this: the ‘base states’ are a bit of a mystery, in the sense that they just are what they are: we can’t ‘reduce’ them any further, and trying to interpret them geometrically leads to contradictions, as evidenced by what I tried to do above. Base states are ‘vectors’ in a so-called Hilbert space, and… Well… That’s not your standard vector space. [If you think you can make more sense of it, please do let me know!]

Onwards!

Let’s take our transformation again:

  • |I〉 = (1/√2)|1〉 − (1/√2)|2〉 = (1/√2)[|1〉 − |2〉]
  • |II〉 = (1/√2)|1〉 + (1/√2)|2〉 = (1/√2)[|1〉 + |2〉]

Again, trying to geometrically interpret what it means to add or subtract two base states is not what you should be trying to do. In a way, the two expressions above only make sense when combining them with a final state, so when writing:

  • 〈ψ|I〉 = (1/√2)〈ψ|1〉 − (1/√2)〈ψ|2〉 = (1/√2)[〈ψ|1〉 − 〈ψ|2〉]
  • 〈ψ|II〉 = (1/√2)〈ψ|1〉 + (1/√2)〈ψ|2〉 = (1/√2)[〈ψ|1〉 + 〈ψ|2〉]

Taking the complex conjugate of this gives us the amplitudes of the system to be in state I or state II:

  • 〈I|ψ〉 = 〈ψ|I〉* = (1/√2)[〈ψ|1〉* − 〈ψ|2〉*] = (1/√2)[〈1|ψ〉 − 〈2|ψ〉]
  • 〈II|ψ〉 = 〈ψ|II〉* = (1/√2)[〈ψ|1〉* + 〈ψ|2〉*] = (1/√2)[〈1|ψ〉 + 〈2|ψ〉]

That still doesn’t tell us much, because we’d need to know the 〈1|ψ〉 and 〈2|ψ〉 functions, i.e. the amplitudes of the system to be in state 1 and state 2 respectively. What we do know, however, is that the 〈1|ψ〉 and 〈2|ψ〉 functions will have some rather special amplitudes. We wrote:

  • CI = 〈 I | ψ 〉 = e−(i/ħ)·EI·t
  • CII = 〈 II | ψ 〉 = e−(i/ħ)·EII·t

These are amplitudes of so-called stationary states: the associated probabilities – i.e. the absolute square of these functions – do not vary in time: |e−(i/ħ)·EI·t|2 = |e−(i/ħ)·EII·t|2 = 1. For our ionized hydrogen molecule, it means that, if it would happen to be in state I, it will stay in state I, and the same goes for state II. We write:

〈 I | I 〉 = 〈 II | II 〉 = 1 and 〈 I | II 〉 = 〈 II | I 〉 = 0
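
A quick numerical check (my addition, with ħ = 1 and an arbitrary A) that these stationary amplitudes are consistent with the C1 and C2 amplitudes we found earlier: if the system starts out in state 1, then CI(0) = CII(0) = 1/√2, and transforming back gives the familiar cosine and sine functions.

```python
import numpy as np

hbar, E0, A = 1.0, 1.0, 0.3
t = np.linspace(0.0, 10.0, 101)

CI  = np.exp(-1j * (E0 + A) * t / hbar) / np.sqrt(2)   # E_I  = E0 + A
CII = np.exp(-1j * (E0 - A) * t / hbar) / np.sqrt(2)   # E_II = E0 - A

C1 = (CI + CII) / np.sqrt(2)   # inverting C_I = (C1 - C2)/√2, C_II = (C1 + C2)/√2
C2 = (CII - CI) / np.sqrt(2)

print(np.allclose(np.abs(CI)**2, 0.5))   # True: the probability does not vary in time
print(np.allclose(C1, np.exp(-1j * E0 * t / hbar) * np.cos(A * t / hbar)))        # True
print(np.allclose(C2, 1j * np.exp(-1j * E0 * t / hbar) * np.sin(A * t / hbar)))   # True
```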

Those 〈I|I〉 = 〈II|II〉 = 1 and 〈I|II〉 = 〈II|I〉 = 0 equations are actually just the so-called ‘orthogonality’ condition for base states, which we wrote as 〈i|j〉 = 〈j|i〉 = δij, but, in light of the fact that we can’t interpret them geometrically, perhaps we shouldn’t be calling it that. The point is: we had those differential equations describing a system like this. If the amplitude to go from state 1 to state 2 is equal to some real- or complex-valued constant A, then we can write those equations either in terms of C1 and C2, or in terms of CI and CII:

iħ·[dC1/dt] = E0·C1 − A·C2 and iħ·[dC2/dt] = −A·C1 + E0·C2

⇔ iħ·[dCI/dt] = (E0 + A)·CI and iħ·[dCII/dt] = (E0 − A)·CII

So the two sets of equations are equivalent. However, what we want to do here is look at it in terms of CI and CII. Let’s first analyze those two energy levels EI = E0 + A and EII = E0 − A. Feynman graphs them as follows:

[Figures: EI = E0 + A and EII = E0 − A as a function of the interproton distance, first without and then with the electrostatic repulsion between the protons included]

Let me explain. In the first graph, we have EI = E0 + A and EII = E0 − A, and they are depicted as being symmetric, with A depending on the distance between the two protons. As for E0, that’s the energy of a hydrogen atom, i.e. a proton with a bound electron, and a separate proton. So it’s the energy of a system consisting of a hydrogen atom and a proton, which is obviously not the same as that of an ionized hydrogen molecule. The concept of a molecule assumes the protons are closely together. We assume E0 = 0 if the interproton distance is relatively large but, of course, as the protons come closer, we shouldn’t forget the repulsive electrostatic force between the two protons, which is represented by the dashed line in the first graph. Indeed, unlike the electron and the proton, the two protons will want to push apart, rather than pull together, so the potential energy of the system increases as the interproton distance decreases. So E0 is not constant either: it also depends on the interproton distance. But let’s forget about E0 for a while. Let’s look at the two curves for A now.

A is not varying in time, but its value does depend on the distance between the two protons. We’ll use this in a moment to get at the approximate size of the ionized hydrogen molecule, in a calculation that closely resembles Feynman’s calculation of the size of a hydrogen atom. That A should be some function of the interproton distance makes sense: the transition probability, and therefore A, will exponentially decrease with distance. There are a few things to reflect on here:

1. In the mentioned calculation of the size of a hydrogen atom, which is based on the Uncertainty Principle, Feynman shows that the energy of the system decreases when an electron is bound to the proton. The reasoning is that, if the potential energy of the electron is zero when it is not bound, then its potential energy will be negative when bound. Think of it: the electron and the proton attract each other, so it requires force to separate them, and force over a distance is energy. From our course in electromagnetics, we know that the potential energy, when bound, should be equal to −e2/a0, with e2 the squared charge of the electron divided by 4πε0, and a0 the so-called Bohr radius of the atom. Of course, the electron also has kinetic energy. It can’t just sit on top of the proton, because that would violate the Uncertainty Principle: we’d know where it was. Combining the two, Feynman calculates both a0 and the so-called Rydberg energy, i.e. the total energy of the bound electron, which is equal to −13.6 eV. So, yes, the bound state has less energy, so the electron will want to be bound, i.e. it will want to be close to one of the two protons.

2. Now, while that’s not what’s depicted above, it’s clear the magnitude of A will be related to that Rydberg energy which − please note − is quite high. Just compare it with the A for the ammonia molecule, which we calculated in our post on the maser: we found an A of about 0.5×10−4 eV there, so that’s like 270,000 times less! Nevertheless, the possibility is there, and what happens when the electron flips over amounts to tunneling: it penetrates and crosses a potential barrier. We did a post on that, and so you may want to look at how that works. One of the weird things we had to consider when a particle crosses such a potential barrier, is that the momentum factor p in its wavefunction was some pure imaginary number, which we wrote as p = i·p’. We then re-wrote that wavefunction as a·e−iθ = a·e−i·[(E/ħ)∙t − (i·p’/ħ)∙x] = a·e−i·(E/ħ)∙t·ei·i·p’·x/ħ = a·e−i·(E/ħ)∙t·e−p’·x/ħ. Now, it’s easy to see that the e−p’·x/ħ factor in this formula is a real-valued exponential function, with the same shape as the general e−x function, which I depict below.

[Graph: the decaying real exponential e−x]

This e−p’·x/ħ basically ‘kills’ our wavefunction as we move in the positive x-direction, across the potential barrier, which is what is illustrated below: if the distance is too large, then the amplitude for tunneling goes to zero.

[Figure: the amplitude dying out as it crosses the potential barrier]

So that’s what’s depicted in those graphs of EI = E0 + A and EII = E0 − A: A goes to zero when the interproton distance becomes too large. We also recognize the exponential shape for A in those graphs, which can also be derived from the same tunneling story.

Now we can calculate E0 + A and E0 − A taking into account that both terms vary with the interproton distance as explained, and so that gives us the final curves on the right-hand side, which tell us that the equilibrium configuration of the ionized hydrogen molecule is state II, i.e. the lowest energy state, and the interproton distance there is approximately one Ångstrom, i.e. 1×10−10 m. [You can compare this with the Bohr radius, which we calculated as a0 = 0.528×10−10 m, so that all makes sense.] Also note the energy scale: ΔE is the excess energy over a proton plus a hydrogen atom, so that’s the energy when the two protons are far apart. Because it’s the excess energy, we have a zero point. That zero point is, obviously, the energy of a hydrogen atom and a proton. [Read this carefully, and please refer back to what I wrote above. The energy of a system consisting of a hydrogen atom and a proton is not the same as that of an ionized hydrogen molecule: the concept of a molecule assumes the protons are closely together.] We then re-scale by dividing by the Rydberg energy EH = 13.6 eV. So ΔE/EH ≈ −0.2 ⇔ ΔE ≈ −0.2×13.6 ≈ −2.72 eV. That basically says that the energy of our ionized hydrogen molecule is 2.72 eV lower than the energy of a hydrogen atom and a proton.

Why is it lower? We need to think about our model of the hydrogen atom once more: the energy of the electron was minimized by striking a balance between (1) being close to the proton and, therefore, having a low potential energy (or a low coulomb energy, as Feynman calls it) and (2) being further away from the proton and, therefore, lowering its kinetic energy according to the Uncertainty Principle ΔxΔp ≥ ħ/2, which Feynman boldly re-wrote as p = ħ/a0. Now, a molecular orbital, i.e. the electron being around two protons, results in “more space where the electron can have a low potential energy”, as Feynman puts it, so “the electron can spread out—lowering its kinetic energy—without increasing its potential energy.”

The whole discussion here actually amounts to an explanation for the mechanism by which an electron shared by two protons provides, in effect, an attractive force between the two protons. So we’ve got a single electron actually holding two protons together, which chemists refer to as a “one-electron bond.”

So… Well… That explains why the energy EII = E0 − A is what it is, so that’s smaller than E0 indeed, with the difference equal to the value of A for an interproton distance of 1 Å. But how should we interpret EI = E0 + A? What is that higher energy level? What does it mean?

That’s a rather tricky question. There’s no easy interpretation here, like we had for our ammonia molecule: there, the higher energy level had an obvious physical meaning in an electromagnetic field, as it was related to the electric dipole moment of the molecule. That’s not the case here: we have no magnetic or electric dipole moment here. So, once again, what’s the physical meaning of EI = E0 + A? Let me quote Feynman’s enigmatic answer here:

“Notice that this state is the difference of the states |1⟩ and |2⟩. Because of the symmetry of |1⟩ and |2⟩, the difference must have zero amplitude to find the electron half-way between the two protons. This means that the electron is somewhat more confined, which leads to a larger energy.”

What does he mean with that? It seems he’s actually trying to do what I said we shouldn’t try to do, and that is to interpret what adding versus subtracting states actually means. But let’s give it a fair look. We said that the |I〉 = (1/√2)[|1〉 − |2〉] expression didn’t mean much: we should add a final state and write: 〈ψ|I〉 = (1/√2)[〈ψ|1〉 − 〈ψ|2〉], which is equivalent to 〈I|ψ〉 = (1/√2)[〈1|ψ〉 − 〈2|ψ〉]. That still doesn’t tell us anything: we’re still adding amplitudes, and so we should allow for interference, and saying that |1⟩ and |2⟩ are symmetric simply means that 〈1|ψ〉 − 〈2|ψ〉 = 〈2|ψ〉 − 〈1|ψ〉 ⇔ 2·〈1|ψ〉 = 2·〈2|ψ〉 ⇔ 〈1|ψ〉 = 〈2|ψ〉. Wait a moment! That’s an interesting reflection. Following the same reasoning for |II〉 = (1/√2)[|1〉 + |2〉], we get 〈1|ψ〉 + 〈2|ψ〉 = 〈2|ψ〉 + 〈1|ψ〉 ⇔ … Huh? No, that’s trivial: 0 = 0.

Hmm… What to say? I must admit I don’t quite ‘get’ Feynman here: state I, with energy EI = E0 + A, seems to be both meaningless as well as impossible. The only energy levels that would seem to make sense here are the energy of a hydrogen atom and a proton and the (lower) energy of an ionized hydrogen molecule, which you get when you bring a hydrogen atom and a proton together. 🙂

But let’s move to the next thing: we’ve added only one electron to the two protons, and that was it, and so we had an ionized hydrogen molecule, i.e. an H2+ molecule. Why don’t we do a full-blown H2 molecule now? Two protons. Two electrons. It’s easy to do. The set of base states is quite predictable, and illustrated below: electron a can be with either one of the two protons, and the same goes for electron b.

[Figure: the base states of the H2 molecule: electron a and electron b distributed over the two protons]

We can then go through the same as for the ion: the molecule’s stability is shown in the graph below, which is very similar to the graph of the energy levels of the ionized hydrogen molecule, i.e. the H2+ molecule. The shape is the same, but the values are different: the equilibrium state is at an interproton distance of 0.74 Å, and the energy of the equilibrium state is like 5 eV (ΔE/EH ≈ −0.375) lower than the energy of two separate hydrogen atoms.

[Graph: the energy of the H2 molecule as a function of the interproton distance, with the minimum at 0.74 Å]

The explanation for the lower energy is the same: state II is associated with some kind of molecular orbital for both electrons, resulting in “more space where the electron can have a low potential energy”, as Feynman puts it, so “the electron can spread out—lowering its kinetic energy—without increasing its potential energy.”

However, there’s one extra thing here: the two electrons must have opposite spins. That’s the only way to actually distinguish the two electrons. But there is more to it: if the two electrons would not have opposite spin, we’d violate Fermi’s rule: when identical fermions are involved, and we’re adding amplitudes, then we should do so with a negative sign for the exchanged case. So our transformation would be problematic:

〈II|ψ〉 = (1/√2)[〈1|ψ〉 + 〈2|ψ〉] = (1/√2)[〈2|ψ〉 + 〈1|ψ〉]

When we switch the electrons, we should get a minus sign. The weird thing is: we do get that minus sign for state I:

〈I|ψ〉 = (1/√2)[〈1|ψ〉 − 〈2|ψ〉] = −(1/√2)[〈2|ψ〉 − 〈1|ψ〉]

So… Well… We’ve got a bit of an answer there as to what the ‘other’ (upper) energy level EI = E0 + A actually means, in physical terms, that is. It models two hydrogens coming together with parallel electron spins. Applying Fermi’s rules – i.e. the exclusion principle, basically – we find that state II is, quite simply, not allowed for parallel electron spins: state I is, and it’s the only one. There’s something deep here, so let me quote the Master himself on it:

“We find that the lowest energy state—the only bound state—of the H2 molecule has the two electrons with spins opposite. The total spin angular momentum of the electrons is zero. On the other hand, two nearby hydrogen atoms with spins parallel—and so with a total angular momentum ħ—must be in a higher (unbound) energy state; the atoms repel each other. There is an interesting correlation between the spins and the energies. It gives another illustration of something we mentioned before, which is that there appears to be an “interaction” energy between two spins because the case of parallel spins has a higher energy than the opposite case. In a certain sense you could say that the spins try to reach an antiparallel condition and, in doing so, have the potential to liberate energy—not because there is a large magnetic force, but because of the exclusion principle.”

You should read this a couple of times. It’s an important principle. We’ll discuss it again in the next posts, when we’ll be talking spin in much more detail once again. 🙂 The bottom line is: if the electrons are parallel, then they won’t ‘share’ any space at all and, hence, they are really much more confined in space, and the associated energy level is, therefore, much higher.

Post scriptum: I said we’d ‘calculate’ the equilibrium interproton distance. We didn’t do that: we just read the distances off the graphs, which are based on the results of a ‘detailed quantum-mechanical calculation’—or that’s what Feynman claims, at least. I am not sure if they correspond to experimentally determined values, or what calculations are behind them, exactly. Feynman notes that “this approximate treatment of the H2 molecule as a two-state system breaks down pretty badly once the protons get as close together as they are at the minimum in the curve and, therefore, it will not give a good value for the actual binding energy. For small separations, the energies of the two “states” we imagined are not really equal to E0, and a more refined quantum mechanical treatment is needed.”

So… Well… That says it all, I guess.

The math behind the maser

As I skipped the mathematical arguments in my previous post so as to focus on the essential results only, I thought it would be good to complement that post by looking at the math once again, so as to ensure we understand what it is that we’re doing. So let’s do that now. We start with the easy situation: free space.

The two-state system in free space

We started with an ammonia molecule in free space, i.e. we assumed there were no external force fields, like a gravitational or an electromagnetic force field. Hence, the picture was as simple as the one below: the nitrogen atom could be ‘up’ or ‘down’ with regard to its spin around its axis of symmetry.

[Figure: the two states of the ammonia molecule: the nitrogen atom ‘up’ or ‘down’ with respect to the plane of the hydrogen atoms]

It’s important to note that this ‘up’ or ‘down’ direction is defined in regard to the molecule itself, i.e. not in regard to some external reference frame. In other words, the reference frame is that of the molecule itself. For example, if I flip the illustration above – like below – then we’re still talking the same states, i.e. the molecule is still in state 1 in the image on the left-hand side and it’s still in state 2 in the image on the right-hand side. 

[Figure: the same two states, flipped]

We then modeled the uncertainty about its state by associating two different energy levels with the molecule: E0 + A and E0 − A. The idea is that the nitrogen atom needs to tunnel through a potential barrier to get to the other side of the plane of the hydrogens, and that requires energy. At the same time, we’ll show the two energy levels are effectively associated with an ‘up’ or ‘down’ direction of the electric dipole moment of the molecule. So that resembles the two spin states of an electron, which we associated with the +ħ/2 and −ħ/2 energies respectively. So if E0 would be zero (we can always take another reference point, remember?), then we’ve got the same thing: two energy levels that are separated by some definite amount: that amount is 2A for the ammonia molecule, and ħ when we’re talking quantum-mechanical spin. I should make a last note here, before I move on: note that these energies only make sense in the presence of some external field, because the + and − signs in the E0 + A and E0 − A and +ħ/2 and −ħ/2 expressions make sense only with regard to some external direction defining what’s ‘up’ and what’s ‘down’ really. But I am getting ahead of myself here. Let’s go back to free space: no external fields, so what’s ‘up’ or ‘down’ is completely random here. 🙂

Now, we also know an energy level can be associated with a complex-valued wavefunction, or an amplitude as we call it. To be precise, we can associate it with the generic a·e−(i/ħ)·(E·t − p∙x) expression which you know so well by now. Of course, as the reference frame is that of the molecule itself, its momentum is zero, so the p∙x term in the a·e−(i/ħ)·(E·t − p∙x) expression vanishes and the wavefunction reduces to a·e−i·ω·t = a·e−(i/ħ)·E·t, with ω = E/ħ. In other words, the energy level determines the temporal frequency, or the temporal variation (as opposed to the spatial frequency or variation), of the amplitude.

We then had to find the amplitudes C1(t) = 〈 1 | ψ 〉 and C2(t) =〈 2 | ψ 〉, so that’s the amplitude to be in state 1 or state 2 respectively. In my post on the Hamiltonian, I explained why the dynamics of a situation like this can be represented by the following set of differential equations:

iħ·[dC1/dt] = H11·C1 + H12·C2
iħ·[dC2/dt] = H21·C1 + H22·C2

As mentioned, the C1 and C2 functions evolve in time, and so we should write them as C1 = C1(t) and C2 = C2(t) respectively. In fact, our Hamiltonian coefficients may also evolve in time, which is why it may be very difficult to solve those differential equations! However, as I’ll show below, one usually assumes they are constant, and then one makes informed guesses about them so as to find a solution that makes sense.

Now, I should remind you here of something you surely know: if the pair (C1, C2) is a solution to this set of differential equations, and so is some other pair (C1′, C2′), then the superposition principle tells us that any linear combination a·(C1, C2) + b·(C1′, C2′) will also be a solution. So we need one or more extra conditions, usually some starting condition, which we can combine with a normalization condition, so we can get some unique solution that makes sense.

The Hij coefficients are referred to as Hamiltonian coefficients and, as shown in the mentioned post, the H11 and H22 coefficients are related to the amplitude of the molecule staying in state 1 and state 2 respectively, while the H12 and H21 coefficients are related to the amplitude of the molecule going from state 1 to state 2 and vice versa. Because of the perfect symmetry of the situation here, it’s easy to see that H11 should equal H22, and that H12 and H21 should also be equal to each other. Indeed, Nature doesn’t care what we call state 1 or 2 here: as mentioned above, we did not define the ‘up’ and ‘down’ direction with respect to some external direction in space, so the molecule can have any orientation and, hence, switching the i and j indices should not make any difference. So that’s one clue, at least, that we can use to solve those equations: the perfect symmetry of the situation and, hence, the perfect symmetry of the Hamiltonian coefficients—in this case, at least!

The other clue is to think about the solution if we’d not have two states but one state only. In that case, we’d need to solve iħ·[dC1(t)/dt] = H11·C1(t). That’s simple enough, because you’ll remember that the exponential function is its own derivative. To be precise, we write: d(a·eiωt)/dt = a·d(eiωt)/dt = a·iω·eiωt, and please note that a can be any complex number: we’re not necessarily talking a real number here! In fact, we’re likely to talk complex coefficients, and we multiply with some other complex number (iω) anyway here! So if we write iħ·[dC1/dt] = H11·C1 as dC1/dt = −(i/ħ)·H11·C1 (remember: i−1 = 1/i = −i), then it’s easy to see that C1 = a·e–(i/ħ)·H11·t is the general solution for this differential equation. Let me write it out for you, just to make sure:

dC1/dt = d[a·e–(i/ħ)H11t]/dt = a·d[e–(i/ħ)H11t]/dt = –a·(i/ħ)·H11·e–(i/ħ)H11t

= –(i/ħ)·H11·a·e–(i/ħ)H11t = −(i/ħ)·H11·C1

Of course, that reminds us of our generic a·e−(i/ħ)·E0·t wavefunction: we only need to equate H11 with E0 and we’re done! Hence, in a one-state system, the Hamiltonian coefficient is, quite simply, equal to the energy of the system. In fact, that’s a result that can be generalized, as we’ll see below, and so that’s why Feynman says the Hamiltonian ought to be called the energy matrix.

In fact, we actually may have two states that are entirely uncoupled, i.e. a system in which there is no dependence of C1 on C2 and vice versa. In that case, the two equations reduce to:

iħ·[dC1/dt] = H11·C1 and iħ·[dC2/dt] = H22·C2

These do not form a coupled system and, hence, their solutions are independent:

C1(t) = a·e–(i/ħ)·H11·t and C2(t) = b·e–(i/ħ)·H22·t 

The symmetry of the situation suggests we should equate a and b, and then the normalization condition says that the probabilities have to add up to one, so |C1(t)|² + |C2(t)|² = 1, so we’ll find that a = b = 1/√2.

OK. That’s simple enough, and this story has become quite long, so we should wrap it up. The two ‘clues’ – about symmetry and about the Hamiltonian coefficients being energy levels – lead Feynman to suggest that the Hamiltonian matrix for this particular case should be equal to:

H = | E0   −A |
    | −A   E0 |

Why? Well… It’s just one of Feynman’s clever guesses, and it yields probability functions that make sense, i.e. they actually describe something real. That’s all. 🙂 I am only half-joking, because it’s a trial-and-error process indeed and, as I’ll explain in a separate section in this post, one needs to be aware of the various approximations involved when doing this stuff. So let’s be explicit about the reasoning here:

  1. We know that H11 = H22 = E0 if the two states would be identical. In other words, if we’d have only one state, rather than two – i.e. if H12 and H21 were zero – then we’d just plug that in. So that’s what Feynman does. So that’s what we do here too! 🙂
  2. However, H12 and H21 are not zero, of course, and so we assume there’s some amplitude to go from one position to the other by tunneling through the energy barrier and flipping to the other side. Now, we need to assign some value to that amplitude, and so we’ll just assume that the energy that’s needed for the nitrogen atom to tunnel through the energy barrier and flip to the other side is equal to A. So we equate H12 and H21 with −A.

Of course, you’ll wonder: why minus A? Why wouldn’t we try H12 = H21 = A? Well… I could say that a particle usually loses potential energy as it moves from one place to another, but… Well… Think about it. Once it’s through, it’s through, isn’t it? And so then the energy is just E0 again. Indeed, if there’s no external field, the + or − sign is quite arbitrary. So what do we choose? The answer is: when considering our molecule in free space, it doesn’t matter. Using +A or −A yields the same probabilities. Indeed, let me give you the amplitudes we get for H11 = H22 = E0 and H12 = H21 = −A:

  1. C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0−A)·t + (1/2)·e−(i/ħ)·(E0+A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
  2. C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0−A)·t − (1/2)·e−(i/ħ)·(E0+A)·t = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

[In case you wonder how we go from those exponentials to a simple sine and cosine factor, remember that the sum of complex conjugates, i.e. eiθ + e−iθ, reduces to 2·cosθ, while eiθ − e−iθ reduces to 2·i·sinθ.]
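If you want to check this quickly rather than work it out on paper, here is a minimal numerical sketch (Python, with illustrative values E0 = 1 and A = 0.1, and ħ set to 1, so these are not the physical numbers). It verifies both identities, and also that the associated probabilities add up to one at all times:

```python
import numpy as np

hbar, E0, A = 1.0, 1.0, 0.1      # illustrative values, not the physical ones
t = np.linspace(0, 50, 501)

# the two sums of exponentials for C1(t) and C2(t) given above...
C1 = 0.5*np.exp(-1j*(E0 - A)*t/hbar) + 0.5*np.exp(-1j*(E0 + A)*t/hbar)
C2 = 0.5*np.exp(-1j*(E0 - A)*t/hbar) - 0.5*np.exp(-1j*(E0 + A)*t/hbar)

# ...should equal the cosine and sine forms...
assert np.allclose(C1, np.exp(-1j*E0*t/hbar)*np.cos(A*t/hbar))
assert np.allclose(C2, 1j*np.exp(-1j*E0*t/hbar)*np.sin(A*t/hbar))
# ...and the probabilities should add up to one at all times
assert np.allclose(np.abs(C1)**2 + np.abs(C2)**2, 1.0)
```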

Now, it’s easy to see that, if we’d have used +A rather than −A, we would have gotten something very similar:

  • C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0+A)·t + (1/2)·e−(i/ħ)·(E0−A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
  • C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0+A)·t − (1/2)·e−(i/ħ)·(E0−A)·t = −i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

So we get a minus sign in front of our C2(t) function, because cos(α) = cos(−α) but sin(−α) = −sin(α). However, the associated probabilities are exactly the same. For both, we get the same P1(t) and P2(t) functions:

  • P1(t) = |C1(t)|² = cos²[(A/ħ)·t]
  • P2(t) = |C2(t)|² = sin²[(A/ħ)·t]

[Remember: the absolute squares of i and −i are |i|² = 1 and |−i|² = |−1|²·|i|² = 1 respectively, so the i and −i in the two C2(t) formulas disappear.]

You’ll remember the graph:

[Graph: P1(t) = cos²[(A/ħ)·t] and P2(t) = sin²[(A/ħ)·t], sloshing back and forth between 0 and 1]

Of course, you’ll say: that plus or minus sign in front of C2(t) should matter somehow, doesn’t it? Well… Think about it. Taking the absolute square of some complex number – or some complex function, in this case! – amounts to multiplying it with its complex conjugate. Because the complex conjugate of a product is the product of the complex conjugates, it’s easy to see what happens: the e−(i/ħ)·E0·t factor in C1(t) = e−(i/ħ)·E0·t·cos[(A/ħ)·t] and C2(t) = ±i·e−(i/ħ)·E0·t·sin[(A/ħ)·t] gets multiplied by e+(i/ħ)·E0·t and, hence, doesn’t matter: e−(i/ħ)·E0·t·e+(i/ħ)·E0·t = e0 = 1. The cosine factor in C1(t) = e−(i/ħ)·E0·t·cos[(A/ħ)·t] is real, and so its complex conjugate is the same. Now, the ±i·sin[(A/ħ)·t] factor in C2(t) = ±i·e−(i/ħ)·E0·t·sin[(A/ħ)·t] is a pure imaginary number, and so its complex conjugate is its opposite. For some reason, we’ll find similar solutions for all of the situations we’ll describe below: the factor determining the probability will either be real or, else, a pure imaginary number. Hence, from a math point of view, it really doesn’t matter if we take +A or −A for those H12 and H21 coefficients. We just need to be consistent in our choice, and I must assume that, in order to be consistent, Feynman likes to think of our nitrogen atom borrowing some energy from the system and, hence, temporarily reducing its energy by an amount equal to A. If you have a better interpretation, please do let me know! 🙂

OK. We’re done with this section… Except… Well… I have to show you how we got those C1(t) and C2(t) functions, no? Let me copy Feynman here:

[Feynman’s solution of the coupled differential equations]

Note that the ‘trick’ involving the addition and subtraction of the differential equations is a trick we’ll use quite often, so please do have a look at it. As for the value of the a and b coefficients – which, as you can see, we’ve equated to 1 in our solutions for C1(t) and C2(t) – we get those because of the following starting condition: we assume that at t = 0, the molecule will be in state 1. Hence, we assume C1(0) = 1 and C2(0) = 0. In other words: we assume that we start out on that P1(t) curve in that graph with the probability functions above, so the C1(0) = 1 and C2(0) = 0 starting condition is equivalent to P1(0) = 1 and P2(0) = 0. Plugging that in gives us a/2 + b/2 = 1 and a/2 − b/2 = 0, which is possible only if a = b = 1.
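For those who prefer to check the result numerically rather than algebraically: a minimal sketch, assuming illustrative values for E0 and A (and ħ = 1), which solves iħ·dC/dt = H·C directly as C(t) = e−i·H·t/ħ·C(0) and compares the outcome with the cos² and sin² probabilities above:

```python
import numpy as np
from scipy.linalg import expm

hbar, E0, A = 1.0, 1.0, 0.1                       # illustrative values
H = np.array([[E0, -A], [-A, E0]], dtype=complex)
C = lambda t: expm(-1j*H*t/hbar) @ np.array([1.0, 0.0])  # C1(0) = 1, C2(0) = 0

for t in [0.0, 2.5, 7.0, 15.0]:
    P1, P2 = np.abs(C(t))**2
    assert np.isclose(P1, np.cos(A*t/hbar)**2)    # matches P1(t) = cos²[(A/ħ)·t]
    assert np.isclose(P2, np.sin(A*t/hbar)**2)    # matches P2(t) = sin²[(A/ħ)·t]
```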

Of course, you’ll say: what if we’d choose to start out with state 2, so our starting condition is P1(0) = 0 and P2(0) = 1? Then a = 1 and b = −1, and we get the solution we got when equating H12 and H21 with +A, rather than with −A. So you can think about that symmetry once again: when we’re in free space, then it’s quite arbitrary what we call ‘up’ or ‘down’.

So… Well… That’s all great. I should, perhaps, just add one more note, and that’s on that A/ħ value. We calculated it in the previous post, because we wanted to actually calculate the period of those P1(t) and P2(t) functions. Because we’re talking the square of a cosine and a sine respectively, the period is equal to π, rather than 2π, so we wrote: (A/ħ)·T = π ⇔ T = π·ħ/A. Now, the separation between the two energy levels E0 + A and E0 − A, so that’s 2A, has been measured as being equal, more or less, to 2A ≈ 10⁻⁴ eV.

How does one measure that? As mentioned above, I’ll show you, in a moment, that, when applying some external field, the plus and minus signs do matter, and the separation between those two energy levels E0 + A and E0 − A will effectively represent something physical. More in particular, we’ll have transitions from one energy level to another and that corresponds to electromagnetic radiation being emitted or absorbed, and so there’s a relation between the energy and the frequency of that radiation. To be precise, we can write 2A = h·f0. The frequency of the radiation that’s being absorbed or emitted is 23.79 GHz, which corresponds to microwave radiation with a wavelength of λ = c/f0 = 1.26 cm. Hence, 2·A ≈ 24×10⁹ Hz times 4×10⁻¹⁵ eV·s ≈ 10⁻⁴ eV, indeed, and, therefore, we can write: T = π·ħ/A ≈ 3.14 times 6.6×10⁻¹⁶ eV·s divided by 0.5×10⁻⁴ eV, so that’s about 40×10⁻¹² seconds = 40 picoseconds. That’s 40 trillionths of a second. So that’s very short, and surely much shorter than the time that’s associated with, say, a freely emitting sodium atom, which is of the order of 3.2×10⁻⁸ seconds. You may think that makes sense, because the photon energy is so much lower: a sodium light photon is associated with an energy equal to E = h·f = 500×10¹² Hz times 4×10⁻¹⁵ eV·s = 2 eV, so that’s 20,000 times 10⁻⁴ eV.
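Here’s the same arithmetic in a few lines of Python, using approximate values for h and c, so you can play with the numbers yourself:

```python
h, c = 4.1357e-15, 3e8    # Planck's constant (eV·s) and speed of light (m/s)
f0 = 23.79e9              # ammonia transition frequency (Hz)

two_A = h * f0            # 2A = h·f0 ≈ 1e-4 eV
lam = c / f0              # wavelength ≈ 1.26e-2 m, i.e. 1.26 cm
T = h / two_A             # T = π·ħ/A = h/(2A) ≈ 4.2e-11 s, i.e. ~40 ps
print(two_A, lam, T)
```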

There’s a funny thing, however. An oscillation with a frequency of 500 terahertz that lasts 3.2×10⁻⁸ seconds is equivalent to 500×10¹² Hz times 3.2×10⁻⁸ s ≈ 16 million cycles. However, an oscillation with a frequency of 23.79 gigahertz that only lasts 40×10⁻¹² seconds is equivalent to 23.79×10⁹ Hz times 40×10⁻¹² s ≈ 1 cycle. One cycle only? We’re surely not talking resonance here!

So… Well… I am just flagging it here. We’ll have to do some more thinking about that later. [I’ve added an addendum that may or may not help us in this regard. :-)]

The two-state system in a field

As mentioned above, when there is no external force field, the ‘up’ or ‘down’ direction of the nitrogen atom was defined with regard to its spin around its axis of symmetry, so with regard to the molecule itself. However, when we apply an external electromagnetic field, as shown below, we do have some external reference frame.

Now, the external reference frame – i.e. the physics of the situation, really – may make it more convenient to define the whole system using another set of base states, which we’ll refer to as I and II, rather than 1 and 2. Indeed, you’ve seen the picture below: it shows a state selector, or a filter as we called it. In this case, there’s a filtering according to whether our ammonia molecule is in state I or, alternatively, state II. It’s like a Stern-Gerlach apparatus splitting an electron beam according to the spin state of the electrons, which is ‘up’ or ‘down’ too, but in a totally different way than our ammonia molecule. Indeed, the ‘up’ and ‘down’ spin of an electron has to do with its magnetic moment and its angular momentum. However, there are a lot of similarities here, and so you may want to compare the two situations indeed, i.e. the electron beam in an inhomogeneous magnetic field versus the ammonia beam in an inhomogeneous electric field.

[Illustration: the ammonia state selector, i.e. an inhomogeneous electric field splitting the beam according to state I or state II]

Now, when reading Feynman, as he walks us through the relevant Lecture on all of this, you get the impression that it’s the I and II states only that have some kind of physical or geometric interpretation. That’s not the case. Of course, the diagram of the state selector above makes it very obvious that these new I and II base states make very much sense in regard to the orientation of the field, i.e. with regard to external space, rather than with respect to the position of our nitrogen atom vis-à-vis the hydrogens. But… Well… Look at the image below: the direction of the field (which we denote by ε because we’ve been using the E for energy) obviously matters when defining the old ‘up’ and ‘down’ states of our nitrogen atom too!

In other words, our previous | 1 〉 and | 2 〉 base states acquire a new meaning too: it obviously matters whether or not the electric dipole moment of the molecule is in the same or, conversely, in the opposite direction of the field. To be precise, the presence of the electromagnetic field suddenly gives the energy levels that we’d associate with these two states a very different physical interpretation.

[Illustration: the ammonia molecule in an electric field ε, with the electric dipole moment μ opposite to the field in state 1 and along the field in state 2]

Indeed, from the illustration above, it’s easy to see that the electric dipole moment of this particular molecule in state 1 is in the opposite direction of the field and, therefore, temporarily ignoring the amplitude to flip over (so we do not think of A for just a brief moment), the energy that we’d associate with state 1 would be equal to E0 + με. Likewise, the energy we’d associate with state 2 is equal to E0 − με. Indeed, you’ll remember that the (potential) energy of an electric dipole is equal to the vector dot product of the electric dipole moment μ and the field vector ε, but with a minus sign in front so as to get the sign for the energy right. So the energy is equal to −μ·ε = −|μ|·|ε|·cosθ, with θ the angle between both vectors. Now, the illustration above makes it clear that state 1 and 2 are defined for θ = π and θ = 0 respectively. [And, yes! Please do note that state 1 is the highest energy level, because it’s associated with the highest potential energy: the electric dipole moment μ of our ammonia molecule will – obviously! – want to align itself with the electric field ε! Just think of what it would imply to turn the molecule in the field!]

Therefore, using the same hunches as the ones we used in the free space example, Feynman suggests that, when some external electric field is involved, we should use the following Hamiltonian matrix:

H = | E0 + με   −A       |
    | −A        E0 − με  |

So we’ll need to solve a similar set of differential equations with this Hamiltonian now. We’ll do that later and, as mentioned above, it will be more convenient to switch to another set of base states, or another ‘representation’ as it’s referred to. But… Well… Let’s not get too much ahead of ourselves: I’ll say something about that before we’ll start solving the thing, but let’s first look at that Hamiltonian once more.

When I say that Feynman uses the same clues here, then… Well… That’s true and not true. You should note that the diagonal elements in the Hamiltonian above are not the same: E0 + με ≠ E0 − με. So we’ve lost that symmetry of free space which, from a math point of view, was reflected in those identical H11 = H22 = E0 coefficients.

That should be obvious from what I write above: state 1 and state 2 are no longer those 1 and 2 states we described when looking at the molecule in free space. Indeed, the | 1 〉 and | 2 〉 states are still ‘up’ or ‘down’, but the illustration above also makes it clear we’re defining state 1 and state 2 not only with respect to the molecule’s spin around its own axis of symmetry but also vis-à-vis some direction in space. To be precise, we’re defining state 1 and state 2 here with respect to the direction of the electric field ε. Now that makes a really big difference in terms of interpreting what’s going on.

In fact, the ‘splitting’ of the energy levels because of that amplitude A is now something physical too, i.e. something that goes beyond just modeling the uncertainty involved. In fact, we’ll find it convenient to distinguish two new energy levels, which we’ll write as EI = E0 + A and EII = E0 − A respectively. They are, of course, related to those new base states | I 〉 and | II 〉 that we’ll want to use. So the E0 + A and E0 − A energy levels themselves will acquire some physical meaning, and especially the separation between them, i.e. the value of 2A. Indeed, EI = E0 + A and EII = E0 − A will effectively represent an ‘upper’ and a ‘lower’ energy level respectively.

But, again, I am getting ahead of myself. Let’s first, as part of working towards a solution for our equations, look at what happens if and when we’d switch to another representation indeed.

Switching to another representation

Let me remind you of what I wrote in my post on quantum math in this regard. The actual state of our ammonia molecule – or any quantum-mechanical system really – is always to be described in terms of a set of base states. For example, if we have two possible base states only, we’ll write:

| φ 〉 = | 1 〉 C1 + | 2 〉 C2

You’ll say: why? Our molecule is obviously always in either state 1 or state 2, isn’t it? Well… Yes and no. That’s the mystery of quantum mechanics: it is and it isn’t. As long as we don’t measure it, there is an amplitude for it to be in state 1 and an amplitude for it to be in state 2. So we can only make sense of its state by actually calculating 〈 1 | φ 〉 and 〈 2 | φ 〉 which, unsurprisingly, are equal to 〈 1 | φ 〉 = 〈 1 | 1 〉 C1 + 〈 1 | 2 〉 C2 = C1(t) and 〈 2 | φ 〉 = 〈 2 | 1 〉 C1 + 〈 2 | 2 〉 C2 = C2(t) respectively, and so these two functions give us the probabilities P1(t) and P2(t) respectively. So that’s Schrödinger’s cat really: the cat is dead or alive, but we don’t know until we open the box, and we only have a probability function – so we can say that it’s probably dead or probably alive, depending on the odds – as long as we do not open the box. It’s as simple as that.

Now, the ‘dead’ and ‘alive’ conditions are, obviously, the ‘base states’ in Schrödinger’s rather famous example, and we can write them as | DEAD 〉 and | ALIVE 〉. You’d agree it would be difficult to find another representation. For example, it doesn’t make much sense to say that we’ve rotated the two base states over 90 degrees and we now have two new states equal to (1/√2)·| DEAD 〉 − (1/√2)·| ALIVE 〉 and (1/√2)·| DEAD 〉 + (1/√2)·| ALIVE 〉 respectively. There’s no direction in space in regard to which we’re defining those two base states: dead is dead, and alive is alive.

The situation really resembles our ammonia molecule in free space: there’s no external reference against which to define the base states. However, as soon as some external field is involved, we do have a direction in space and, as mentioned above, our base states are now defined with respect to a particular orientation in space. That implies two things. The first is that we should no longer say that our molecule will always be in either state 1 or state 2. There’s no reason for it to be perfectly aligned with or against the field. Its orientation can be anything really, and so its state is likely to be some combination of those two pure base states | 1 〉 and | 2 〉.

The second thing is that we may choose another set of base states, and specify the very same state in terms of the new base states. So, assuming we choose some other set of base states | I 〉 and | II 〉, we can write the very same state | φ 〉 = | 1 〉 C1 + | 2 〉 C2 as:

| φ 〉 = | I 〉 CI + | II 〉 CII

It’s really like what you learned about vectors in high school: one can go from one set of base vectors to another by a transformation, such as, for example, a rotation, or a translation. It’s just that, just like in high school, we need some direction in regard to which we define our rotation or our translation.

For state vectors, I showed how a rotation of base states worked in one of my posts on two-state systems. To be specific, we had the following relation between the two representations:

| CI  |           | 1  −1 |   | C1 |
| CII |  = (1/√2)·| 1   1 | · | C2 |

The (1/√2) factor is there because of the normalization condition, and the two-by-two matrix equals the transformation matrix for a rotation of a state filtering apparatus about the y-axis, over an angle equal to (minus) 90 degrees, which we wrote as:

[Table: the transformation amplitudes for a rotation of the state filtering apparatus about the y-axis over an angle φ]

The y-axis? What y-axis? What state filtering apparatus? Just relax. Think about what you’ve learned already. The orientations are shown below: the S apparatus separates ‘up’ and ‘down’ states along the z-axis, while the T-apparatus does so along an axis that is tilted, about the y-axis, over an angle equal to α, or φ, as it’s written in the table above.

[Illustration: the S apparatus separating states along the z-axis, and the T apparatus tilted about the y-axis over an angle α]

Of course, we don’t really introduce an apparatus at this or that angle. We just introduced an electromagnetic field, which re-defined our | 1 〉 and | 2 〉 base states and, therefore, through the rotational transformation matrix, also defines our | I 〉 and | II 〉 base states.

[…] You may have lost me by now, and so then you’ll want to skip to the next section. That’s fine. Just remember that the representations in terms of | I 〉 and | II 〉 base states or in terms of | 1 〉 and | 2 〉 base states are mathematically equivalent. Having said that, if you’re reading this post, and you want to understand it, truly (because you want to truly understand quantum mechanics), then you should try to stick with me here. 🙂 Indeed, there’s a zillion things you could think about right now, but you should stick to the math now. Using that transformation matrix, we can relate the CI and CII coefficients in the | φ 〉 = | I 〉 CI + | II 〉 CII expression to the C1 and C2 coefficients in the | φ 〉 = | 1 〉 C1 + | 2 〉 C2 expression. Indeed, we wrote:

  • CI = 〈 I | ψ 〉 = (1/√2)·(C1 − C2)
  • CII = 〈 II | ψ 〉 = (1/√2)·(C1 + C2)

That’s exactly the same as writing:

| CI  |           | 1  −1 |   | C1 |
| CII |  = (1/√2)·| 1   1 | · | C2 |
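In fact, you can verify, with a few lines of Python (illustrative values again), that this transformation matrix diagonalizes the free-space Hamiltonian: in the new representation, the I and II states come with the definite energies E0 + A and E0 − A on the diagonal, and nothing off it.

```python
import numpy as np

E0, A = 1.0, 0.1                               # illustrative values
H = np.array([[E0, -A], [-A, E0]])
U = np.array([[1, -1], [1, 1]]) / np.sqrt(2)   # CI = (C1 − C2)/√2, CII = (C1 + C2)/√2

H_new = U @ H @ U.T                            # U is real, so its conjugate transpose is U.T
assert np.allclose(H_new, np.diag([E0 + A, E0 - A]))
```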

OK. […] Wow! You just took a huge leap, because we can now compare the two sets of differential equations:

iħ·(dC1/dt) = E0·C1 − A·C2        iħ·(dCI/dt) = (E0 + A)·CI
iħ·(dC2/dt) = −A·C1 + E0·C2       iħ·(dCII/dt) = (E0 − A)·CII

They’re mathematically equivalent, but the mathematical behavior of the functions involved is very different. Indeed, unlike the C1(t) and C2(t) amplitudes, we find that the CI(t) and CII(t) amplitudes are stationary, i.e. the associated probabilities – which we find by taking the absolute square of the amplitudes, as usual – do not vary in time. To be precise, if you write it all out and simplify, you’ll find that the CI(t) and CII(t) amplitudes are equal to:

  • CI(t) = 〈 I | ψ 〉 = (1/√2)·(C1 − C2) = (1/√2)·e−(i/ħ)·(E0+A)·t = (1/√2)·e−(i/ħ)·EI·t
  • CII(t) = 〈 II | ψ 〉 = (1/√2)·(C1 + C2) = (1/√2)·e−(i/ħ)·(E0−A)·t = (1/√2)·e−(i/ħ)·EII·t

As the absolute square of the exponential is equal to one, the associated probabilities, i.e. |CI(t)|² and |CII(t)|², are, quite simply, equal to |1/√2|² = 1/2. Now, it is very tempting to say that this means that our ammonia molecule has an equal chance to be in state I or state II. In fact, while I may have said something like that in my previous posts, that’s not how one should interpret this. The chance of our molecule being exactly in state 1 or state 2 is varying with time, with the probability being ‘dumped’ from one state to the other all of the time.

I mean… The electric dipole moment can point in any direction, really. So saying that our molecule has a 50/50 chance of being in state 1 or state 2 makes no sense. Likewise, saying that our molecule has a 50/50 chance of being in state I or state II makes no sense either. Indeed, the state of our molecule is specified by the | φ 〉 = | I 〉 CI + | II 〉 CII = | 1 〉 C1 + | 2 〉 C2 equations, and neither of these two expressions is a stationary state. They mix two frequencies, because they mix two energy levels.

Having said that, we’re talking quantum mechanics here and, therefore, an external inhomogeneous electric field will effectively split the ammonia molecules according to their state. The situation is really like what a Stern-Gerlach apparatus does to a beam of electrons: it will split the beam according to the electron’s spin, which is either ‘up’ or, else, ‘down’, as shown in the graph below:

[Diagram: the energy of the two spin states of an electron splitting in a magnetic field]

The graph for our ammonia molecule, shown below, is very similar. The vertical axis measures the same: energy. And the horizontal axis measures με, which increases with the strength of the electric field ε. So we see a similar ‘splitting’ of the energy of the molecule in an external electric field.

[Graph: the energy levels EI and EII of the ammonia molecule as a function of με, showing the splitting in an external electric field]

How should we explain this? It is very tempting to think that the presence of an external force field causes the electrons, or the ammonia molecules, to ‘snap into’ one of the two possible states, which are referred to as state I and state II respectively in the illustration of the ammonia state selector below. But… Well… Here we’re entering the murky waters of actually interpreting quantum mechanics, for which (a) we have no time, and (b) we are not qualified. So you should just believe, or take for granted, what’s being shown here: an inhomogeneous electric field will split our ammonia beam according to the state of the molecules, which we define as I and II respectively, and which are associated with the energies E0 + A and E0 − A respectively.

[Illustration: the ammonia state selector once more]

As mentioned above, you should note that these two states are stationary. The Hamiltonian equations which, as they always do, describe the dynamics of this system, imply that the amplitude to go from state I to state II, or vice versa, is zero. To make sure you ‘get’ that, I reproduce the associated Hamiltonian matrix once again:

H = | E0 + A   0       |
    | 0        E0 − A  |

Of course, that will change when we start our analysis of what’s happening in the maser. Indeed, we will have some non-zero HI,II and HII,I amplitudes in the resonant cavity of our ammonia maser, in which we’ll have an oscillating electric field and, as a result, induced transitions from state I to II and vice versa. However, that’s for later. While I’ll quickly insert the full picture diagram below, you should, for the moment, just think about those two stationary states and those two zeroes. 🙂

[Diagram: the full maser set-up, i.e. the state selector followed by the resonant cavity]

Capito? If not… Well… Start reading this post again, I’d say. 🙂

Intermezzo: on approximations

At this point, I need to say a few things about all of the approximations involved, because it can be quite confusing indeed. So let’s take a closer look at those energy levels and the related Hamiltonian coefficients. In fact, in his Lectures, Feynman shows us that we can always have a general solution for the Hamiltonian equations describing a two-state system whenever we have constant Hamiltonian coefficients. That general solution – which, mind you, is derived assuming Hamiltonian coefficients that do not depend on time – can always be written in terms of two stationary base states, i.e. states with a definite energy and, hence, a constant probability. The equations, and the two definite energy levels are:

[The Hamiltonian equations for a two-state system with constant coefficients]

[Feynman’s general solution, written in terms of the two stationary states I and II]

That yields the following values for the energy levels for the stationary states:

EI = (H11 + H22)/2 + √[(H11 − H22)²/4 + H12·H21] = E0 + √(A² + μ²ε²)
EII = (H11 + H22)/2 − √[(H11 − H22)²/4 + H12·H21] = E0 − √(A² + μ²ε²)

Now, that’s very different from the EI = E0 + A and EII = E0 − A energy levels for those stationary states we had defined in the previous section: those stationary states had no square root, and no μ²ε², in their energy. In fact, that sort of answers the question: if there’s no external field, then that μ²ε² term is zero, and the square root in the expression becomes ±√(A²) = ±A. So then we’re back to our EI = E0 + A and EII = E0 − A formulas. The whole point, however, is that we will actually have an electric field in that cavity. Moreover, it’s going to be a field that varies in time, which we’ll write as:

ε = 2ε0·cos(ω·t) = ε0·(ei·ω·t + e−i·ω·t)

Now, part of the confusion in Feynman’s approach is that he constantly switches between representing the system in terms of the I and II base states and the 1 and 2 base states respectively. For a good understanding, we should compare with our original representation of the dynamics in free space, for which the Hamiltonian was the following one:

H = | E0   −A |
    | −A   E0 |

That matrix can easily be related to the new one we’re going to have to solve, which is equal to:

H = | E0 + με   −A       |
    | −A        E0 − με  |

The interpretation is easy if we look at that illustration again:

[Illustration: the ammonia molecule in the electric field once more]

If the direction of the electric dipole moment is opposite to the direction of ε, then the associated energy is equal to −μ·ε = −|μ|·|ε|·cosθ = −μ·ε·cos(π) = +με. Conversely, for state 2, we find −μ·ε·cos(0) = −με for the energy that’s associated with the dipole moment. You can and should think about the physics involved here, because it makes sense! Thinking of amplitudes, you should note that the +με and −με terms effectively change the H11 and H22 coefficients, so they change the amplitude to stay in state 1 or state 2 respectively. That, of course, will have an impact on the associated probabilities, and so that’s why we’re talking of induced transitions now.

Having said that, the Hamiltonian matrix above keeps the −A for H12 and H21, so the matrix captures spontaneous transitions too!

Still… You may wonder why Feynman doesn’t use those EI and EII formulas with the square root, because that would give us some exact solution, wouldn’t it? The answer to that question is: maybe it would, but would you know how to solve those equations? We’ll have a varying field, remember? So our Hamiltonian H11 and H22 coefficients will no longer be constant, but time-dependent. As you’re going to see, it takes Feynman three pages to solve the whole thing using the +με and −με approximation. So just imagine how complicated it would be using that square root expression! [By the way, do have a look at those asymptotic curves in that illustration showing the splitting of energy levels above, so you see what that approximation looks like.]

So that’s the real answer: we need to simplify somehow, so as to get any solutions at all!

Of course, it’s all quite confusing because, after Feynman first notes that, for strong fields, the A² in that square root is small as compared to μ²ε², thereby justifying the use of the simplified EI = E0 + με = H11 and EII = E0 − με = H22 coefficients, he continues and bluntly uses the very same square root expression to explain how that state selector works, saying that the electric field in the state selector will be rather weak and, hence, that με will be much smaller than A, so one can use the following approximation for the square root in the expressions above:

√(A² + μ²ε²) = A·√(1 + μ²ε²/A²) ≈ A·(1 + μ²ε²/2A²) = A + μ²ε²/2A

The energy expressions then reduce to:

EI ≈ E0 + A + μ²ε²/2A and EII ≈ E0 − A − μ²ε²/2A

And then we can calculate the force on the molecules as:

F = ∓(μ²/2A)·∇ε²

[The force is minus the gradient of the energy, and only the μ²ε²/2A term varies in space: the minus sign applies to state I, the plus sign to state II.]
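The sketch below (Python, illustrative numbers, with E0 set to zero and με measured in units of A) compares the exact √(A² + μ²ε²) with the weak-field expansion A + μ²ε²/2A, so you can see for yourself where the approximation holds and where it breaks down:

```python
import numpy as np

A = 1.0                                   # measure με in units of A
mu_eps = np.array([0.01, 0.1, 0.5, 2.0])

exact  = np.sqrt(A**2 + mu_eps**2)        # E_I − E0, exact
approx = A + mu_eps**2/(2*A)              # E_I − E0, weak-field expansion

print(np.round(exact, 5))   # [1.00005 1.00499 1.11803 2.23607]
print(np.round(approx, 5))  # [1.00005 1.005   1.125   3.     ]
# excellent for με ≪ A (the state selector), useless for με ≳ A
```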

So the electric field in the state selector is weak, but the electric field in the cavity is supposed to be strong, and so… Well… That’s it, really. The bottom line is that we’ve a beam of ammonia molecules that are all in state I, and it’s what happens with that beam then, that is being described by our new set of differential equations:

iħ·(dCI/dt) = (E0 + A)·CI + με·CII
iħ·(dCII/dt) = με·CI + (E0 − A)·CII

Solving the equations

As all molecules in our ammonia beam are described in terms of the | I 〉 and | II 〉 base states – as evidenced by the fact that we say all molecules that enter the cavity are in state I – we need to switch to that representation. We do that by using that transformation above, so we write:

  • CI = 〈 I | ψ 〉 = (1/√2)·(C1 − C2)
  • CII = 〈 II | ψ 〉 = (1/√2)·(C1 + C2)

Keeping these ‘definitions’ of CI and CII in mind, you should then add the two differential equations, divide the result by the square root of 2, and you should get the following new equation:

iħ·(dCII/dt) = (E0 − A)·CII + με·CI

Please! Do it and verify the result! You want to learn something here, no? 🙂

Likewise, subtracting the two differential equations, we get:

iħ·(dCI/dt) = (E0 + A)·CI + με·CII

We can re-write this as the following set:

iħ·(dCI/dt) = (E0 + A)·CI + με·CII
iħ·(dCII/dt) = με·CI + (E0 − A)·CII
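If you don’t trust your own algebra, here’s a symbolic check with sympy. It assumes the 1/2-representation equations with E0 ± με on the diagonal and −A off it, and verifies that the combinations CI = (C1 − C2)/√2 and CII = (C1 + C2)/√2 obey the new set above:

```python
import sympy as sp

t = sp.symbols('t')
E0, A, me = sp.symbols('E0 A me')          # me stands for με here
C1, C2 = sp.Function('C1')(t), sp.Function('C2')(t)

dC1 = (E0 + me)*C1 - A*C2                  # iħ·dC1/dt
dC2 = -A*C1 + (E0 - me)*C2                 # iħ·dC2/dt

CI, CII = (C1 - C2)/sp.sqrt(2), (C1 + C2)/sp.sqrt(2)
# subtracting/adding and dividing by √2 should reproduce the I/II equations:
assert sp.simplify((dC1 - dC2)/sp.sqrt(2) - ((E0 + A)*CI + me*CII)) == 0
assert sp.simplify((dC1 + dC2)/sp.sqrt(2) - ((E0 - A)*CII + me*CI)) == 0
```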

Now, the problem is that the Hamiltonian constants here are not constant. To be precise, the electric field ε varies in time. We wrote:

ε = 2ε0·cos(ω·t)

So HI,II  and HII,I, which are equal to με, are not constant: we’ve got Hamiltonian coefficients that are a function of time themselves. […] So… Well… We just need to get on with it and try to finally solve this thing. Let me just copy Feynman as he grinds through this:

[Feynman’s derivation, first step: the trial solutions for CI and CII]

This is only the first step in the process. Feynman just takes two trial functions, which are really similar to the very general C1 = a·e–(i/ħ)·H11·t function we presented when only one equation was involved, or – if you prefer a set of two equations – those CI(t) = a·e−(i/ħ)·EI·t and CII(t) = b·e−(i/ħ)·EII·t equations above. The difference is that the coefficients in front, i.e. γI and γII, are not some (complex) constant, but functions of time themselves. The next step in the derivation is as follows:

[Feynman’s derivation, second step: the differential equations for γI and γII]

One needs to do a bit of gymnastics here as well to follow what’s going on, but please do check and you’ll see it works. Feynman derives another set of differential equations here, and they specify these γI = γI(t) and γII = γII(t) functions. These equations are written in terms of the frequency of the field, i.e. ω, and the resonant frequency ω0, which we mentioned above when calculating that 23.79 GHz frequency from the 2A = h·f0 equation. So ω0 is the same molecular resonance frequency but expressed as an angular frequency, so ω0 = 2π·f0 = 2A/ħ. He then proceeds to simplify, using assumptions one should check. He then continues:

[Feynman’s derivation, third step: the simplified equations near resonance]

That gives us what we presented in the previous post:

[Feynman’s result: the amplitudes and probabilities for the induced transitions]

So… Well… What to say? I explained those probability functions in my previous post, indeed. We’ve got two probabilities here:

  • PI = cos²[(με0/ħ)·t]
  • PII = sin²[(με0/ħ)·t]

So that’s just like the P1 = cos²[(A/ħ)·t] and P2 = sin²[(A/ħ)·t] probabilities we found for spontaneous transitions. But so here we are talking induced transitions.

As you can see, the frequency and, hence, the period, depend on the strength, or magnitude, of the electric field, i.e. the ε0 constant in the ε = 2ε0·cos(ω·t) expression. The natural unit for measuring time would be the period once again, which we can easily calculate as (με0/ħ)·T = π ⇔ T = π·ħ/με0.
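As a sanity check on Feynman’s approximations, one can integrate the full time-dependent equations numerically and compare with the cos² envelope. The sketch below does that at resonance (ω = ω0 = 2A/ħ), with illustrative values, ħ = 1, and με0 ≪ A, stepping the state forward with the instantaneous Hamiltonian:

```python
import numpy as np
from scipy.linalg import expm

hbar, E0, A, me0 = 1.0, 0.0, 1.0, 0.02   # illustrative; me0 = με0 ≪ A
w0 = 2*A/hbar                             # resonant angular frequency
dt, steps = 0.01, 10000                   # integrate up to t = 100
C = np.array([1.0, 0.0], dtype=complex)   # all molecules enter in state I

for n in range(steps):
    me = 2*me0*np.cos(w0*(n + 0.5)*dt)    # field at the midpoint of the step
    H = np.array([[E0 + A, me], [me, E0 - A]])
    C = expm(-1j*H*dt/hbar) @ C

t = steps*dt
print(abs(C[0])**2, np.cos(me0*t/hbar)**2)  # both ≈ 0.17: PI ≈ cos²[(με0/ħ)·t]
```

The two numbers should agree to within a percent or so; the small residue comes from the terms Feynman’s simplification throws away.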

Now, we had that T = π·ħ/A expression above, which allowed us to calculate the period of the spontaneous transition, which we found was like 40 picoseconds, i.e. 40×10⁻¹² seconds. Now, the T = π·ħ/(με0) expression is very similar: it allows us to calculate the expected, average, or mean time for an induced transition. In fact, if we write Tinduced = π·ħ/(με0) and Tspontaneous = π·ħ/A, then we can take the ratio to find:

Tinduced/Tspontaneous = [π·ħ/(με0)]/[π·ħ/A] = A/(με0)

This A/με0 ratio is greater than one, so Tinduced/Tspontaneous is greater than one, which, in turn, means that the presence of our electric field – which, let me remind you, dances to the beat of the resonant frequency – causes a slower transition than we would have had if the oscillating electric field were not present.

But – Hey! – that’s the wrong comparison! Remember all molecules enter in a stationary state, as they’ve been selected so as to ensure they’re in state I. So there is no such thing as a spontaneous transition frequency here! They’re all polarized, so to speak, and they would remain that way if there was no field in the cavity. So if there was no oscillating electric field, they would never transition. Nothing would happen! Well… In terms of our particular set of base states, of course! Why? Well… Look at the Hamiltonian coefficients HI,II = HII,I = με: these coefficients are zero if ε is zero. So… Well… That says it all.

So that‘s what it’s all about: induced emission and, as I explained in my previous post, because all molecules enter in state I, i.e. the upper energy state, literally, they all ‘dump’ a net amount of energy equal to 2A into the cavity at the occasion of their first transition. The molecules then keep dancing, of course, and so they absorb and emit the same amount as they go through the cavity, but… Well… We’ve got a net contribution here, which is not only enough to maintain the cavity oscillations, but actually also provides a small excess of power that can be drawn from the cavity as microwave radiation of the same frequency.

As Feynman notes, an exact description of what actually happens requires an understanding of the quantum mechanics of the field in the cavity, i.e. quantum field theory, which I haven’t studied yet. But… Well… That’s for later, I guess. 🙂

Post scriptum: The sheer length of this post shows we’re not doing something that’s easy here. Frankly, I feel the whole analysis is still quite obscure, in the sense that – despite looking at this thing again and again – it’s hard to sort of interpret what’s going on, in a physical sense that is. But perhaps one shouldn’t try that. I’ve quoted Feynman’s view on how easy or how difficult it is to ‘understand’ quantum mechanics a couple of times already, so let me do it once more:

“Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to, and it appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct, human experience and human intuition applies to large objects.”

So… Well… I’ll grind through the remaining Lectures now – I am halfway through Volume III now – and then re-visit all of this. Despite Feynman’s warning, I want to understand it the way I like to, even if I don’t quite know what way that is right now. 🙂

Addendum: As for those cycles and periods, I noted a couple of times already that the Planck-Einstein equation E = h·f can usefully be re-written as E/f = h, as it gives a physical interpretation to the value of the Planck constant. In fact, I said h is the energy that’s associated with one cycle, regardless of the frequency of the radiation involved. Indeed, the energy of a photon divided by the number of cycles per second should give us the energy per cycle, no?

Well… Yes and no. Planck’s constant h and the frequency are both expressed referencing the time unit. However, if we say that a sodium atom emits one photon only as its electron transitions from a higher energy level to a lower one, and if we say that involves a decay time of the order of 3.2×10⁻⁸ seconds, then what we’re saying really is that a sodium light photon will ‘pack’ like 16 million cycles, which is what we get when we multiply the number of cycles per second (i.e. the mentioned frequency of 500×10¹² Hz) by the decay time (i.e. 3.2×10⁻⁸ seconds): (500×10¹² Hz)·(3.2×10⁻⁸ s) = 16×10⁶ cycles, indeed. So the energy per cycle is 2.068 eV (i.e. the photon energy) divided by 16×10⁶, so that’s 0.129×10⁻⁶ eV. Unsurprisingly, that’s what we get when we divide h by 3.2×10⁻⁸ s: (4.13567×10⁻¹⁵ eV·s)/(3.2×10⁻⁸ s) = 1.29×10⁻⁷ eV. We’re just putting some values into the E/(f·T) = h/T equation here.
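The same numbers in Python, for those who want to verify (approximate constants, as above):

```python
h = 4.13567e-15              # Planck's constant in eV·s
f, tau = 500e12, 3.2e-8      # sodium light: frequency (Hz) and decay time (s)

cycles = f * tau             # ≈ 1.6e7 cycles
E_photon = h * f             # ≈ 2.07 eV
print(cycles, E_photon/cycles, h/tau)   # energy per cycle two ways: ≈ 1.29e-7 eV
```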

The logic for that 2A = h·f0 is the same. The frequency of the radiation that’s being absorbed or emitted is 23.79 GHz, so the photon energy is (23.79×10⁹ Hz)·(4.13567×10⁻¹⁵ eV·s) ≈ 1×10⁻⁴ eV. Now, we calculated the transition period T as T = π·ħ/A ≈ (π·6.58×10⁻¹⁶ eV·s)/(0.5×10⁻⁴ eV) ≈ 41×10⁻¹² seconds. Now, an oscillation with a frequency of 23.79 gigahertz that only lasts 41×10⁻¹² seconds is an oscillation of one cycle only. The consequence is that, when we continue this style of reasoning, we’d have a photon that packs all of its energy into one cycle!

Let’s think about what this implies in terms of the density in space. The wavelength of our microwave radiation is 1.26×10⁻² m, so we’ve got a ‘density’ of 1×10⁻⁴ eV/1.26×10⁻² m ≈ 0.8×10⁻² eV/m = 0.008 eV/m. The wavelength of our sodium light is 0.6×10⁻⁶ m, so we get a ‘density’ of 1.29×10⁻⁷ eV/0.6×10⁻⁶ m = 2.15×10⁻¹ eV/m = 0.215 eV/m. So the energy ‘density’ of our sodium light is 26.875 times that of our microwave radiation. 🙂

Frankly, I am not quite sure if calculations like this make much sense. In fact, when talking about energy densities, I should review my posts on the Poynting vector. However, they may help you think things through. 🙂

The Hamiltonian for a two-state system: the ammonia example

Ammonia, i.e. NH3, is a colorless gas with a strong smell. It serves as a precursor in the production of fertilizer, but we also know it as a cleaning product, ammonium hydroxide, which is NH3 dissolved in water. It has a lot of other uses too. For example, its use in this post is to illustrate a two-state system. 🙂 We’ll apply everything we learned in our previous posts and, as I mentioned when finishing the last of those rather mathematical pieces, I think the example really feels like a reward after all of the tough work on all of those abstract concepts – like that Hamiltonian matrix indeed – so I hope you enjoy it. So… Here we go!

The geometry of the NH3 molecule can be described by thinking of it as a trigonal pyramid, with the nitrogen atom (N) at its apex, and the three hydrogen atoms (H) at the base, as illustrated below. [Feynman’s illustration is slightly misleading, though, because it may give the impression that the hydrogen atoms are bonded together somehow. That’s not the case: the hydrogen atoms share their electron with the nitrogen, thereby completing the outer shell of both atoms. This is referred to as a covalent bond. You may want to look it up, but it is of no particular relevance to what follows here.]

[Illustration: the NH3 molecule as a trigonal pyramid, with the nitrogen atom at the apex and the three hydrogen atoms at the base]

Here, we will only worry about the spin of the molecule about its axis of symmetry, as shown above, which is either in one direction or in the other, obviously. So we’ll discuss the molecule as a two-state system. So we don’t care about its translational (i.e. linear) momentum, its internal vibrations, or whatever else might be going on. It is one of those situations illustrating that the spin vector, i.e. the vector representing angular momentum, is an axial vector: the first state, which is denoted by | 1 〉, is not the mirror image of state | 2 〉. In fact, there is a more sophisticated version of the illustration above, which usefully reminds us of the physics involved.

[Illustration: the two states of the NH3 molecule, with the electric dipole moment μ reversing as the nitrogen atom flips]

It should be noted, however, that we don’t need to specify what the energy barrier really consists of: moving the center of mass obviously requires some energy, but it is likely that a ‘flip’ also involves overcoming some electrostatic forces, as shown by the reversal of the electric dipole moment in the illustration above. In fact, the illustration may confuse you, because we’re usually thinking about some net electric charge that’s spinning, and so the angular momentum results in a magnetic dipole moment, that’s either ‘up’ or ‘down’, and it’s usually also denoted by the very same μ symbol. As I explained in my post on angular momentum and the magnetic moment, it’s related to the angular momentum J through the so-called g-number. In the illustration above, however, the μ symbol is used to denote an electric dipole moment, so that’s different. Don’t rack your brain over it: just accept there’s an energy barrier, and it requires energy to get through it. Don’t worry about its details!

Indeed, in quantum mechanics, we abstract away from such nitty-gritty, and so we just say that we have base states | i 〉 here, with i equal to 1 or 2. One or the other. Now, in our post on quantum math, we introduced what Feynman only half-jokingly refers to as the Great Law of Quantum Physics: | = ∑ | i 〉〈 i | over all base states i. It basically means that we should always describe our initial and end states in terms of base states. Applying that principle to the state of our ammonia molecule, which we’ll denote by | ψ 〉, we can write:

| ψ 〉 = | 1 〉〈 1 | ψ 〉 + | 2 〉〈 2 | ψ 〉 = | 1 〉 C1 + | 2 〉 C2, with C1 = 〈 1 | ψ 〉 and C2 = 〈 2 | ψ 〉

You may – in fact, you should – mechanically apply that | = ∑ | i 〉〈 i | substitution to | ψ 〉 to get what you get here, but you should also think about what you’re writing. It’s not an easy thing to interpret, but it may help you to think of the similarity of the formula above with the description of a vector in terms of its base vectors, which we write as A = Ax·e1 + Ay·e2 + Az·e3. Just substitute the Ai coefficients for Ci and the ei base vectors for the | i 〉 base states, and you may understand this formula somewhat better. It also explains why the | ψ 〉 state is often referred to as the | ψ 〉 state vector: unlike our A = ∑ Ai·ei sum of base vectors, our | 1 〉 C1 + | 2 〉 C2 sum does not have any geometrical interpretation but… Well… Not all ‘vectors’ in math have a geometric interpretation, and so this is a case in point.

It may also help you to think of the time-dependency. Indeed, this formula makes a lot more sense when realizing that the state of our ammonia molecule, and those coefficients Ci, depend on time, so we write: ψ = ψ(t) and Ci = Ci(t). Hence, if we would know, for sure, that our molecule is always in state | 1 〉, then C1 = 1 and C2 = 0, and we’d write: | ψ 〉 = | 1 〉 = | 1 〉 1 + | 2 〉 0. [I am always tempted to insert a little dot (·), and change the order of the factors, so as to show we’re talking some kind of product indeed – so I am tempted to write | ψ 〉 = C1·| 1 〉 + C2·| 2 〉 – but I note that’s not done conventionally, so I won’t do it either.]

Why this time dependency? It’s because we’ll allow for the possibility of the nitrogen to push its way through the pyramid – through the three hydrogens, really – and flip to the other side. It’s unlikely, because it requires a lot of energy to get half-way through (we’ve got what we referred to as an energy barrier here), but it may happen and, as we’ll see shortly, it results in us having to think of the ammonia molecule as having two separate energy levels, rather than just one. We’ll denote those energy levels as E0 ± A. However, I am getting ahead of myself here, so let me get back to the main story.

To fully understand the story, you should really read my previous post on the Hamiltonian, which explains how those Ci coefficients, as a function of time, can be determined. They’re determined by a set of differential equations (i.e. equations involving a function and the derivative of that function) which we wrote as:

iħ·(dCi/dt) = ∑j Hij·Cj

 If we have two base states only – which is the case here – then this set of equations is:

iħ·(dC1/dt) = H11·C1 + H12·C2
iħ·(dC2/dt) = H21·C1 + H22·C2

Two equations and two functions – C1 = C1(t) and C2 = C2(t) – so we should be able to solve this thing, right? Well… No. We don’t know those Hij coefficients. As I explained in my previous post, they also evolve in time, so we should write them as Hij(t) instead of Hij tout court, and so it messes the whole thing up. We have two equations and six functions really. There is no way we can solve this! So how do we get out of this mess?

Well… By trial and error, I guess. 🙂 Let us just assume the molecule would behave nicely—which we know it doesn’t, but so let’s push the ‘classical’ analysis as far as we can, so we might get some clues as to how to solve this problem. In fact, our analysis isn’t ‘classical’ at all, because we’re still talking amplitudes here! However, you’ll agree the ‘simple’ solution would be that our ammonia molecule doesn’t ‘tunnel’. It just stays in the same spin direction forever. Then H12 and H21 must be zero (think of the U12(t + Δt, t) and U21(t + Δt, t) functions) and H11 and H22 are equal to… Well… I’d love to say they’re equal to 1 but… Well… You should go through my previous posts: these Hamiltonian coefficients are related to probabilities but… Well… Same-same but different, as they say in Asia. 🙂 They’re amplitudes, which are things you use to calculate probabilities. But calculating probabilities involves normalization and other stuff, like allowing for interference of amplitudes, and so… Well… To make a long story short, if our ammonia molecule would stay in the same spin direction forever, then H11 and H22 are not one but some constant. In any case, the point is that they would not change in time (so H11(t) = H11 and H22(t) = H22), and, therefore, our two equations would reduce to:

iħ·(dC1/dt) = H11·C1 and iħ·(dC2/dt) = H22·C2

So the coefficients are now proper coefficients, in the sense that they’ve got some definite value, and so we have two equations and two functions only now, and so we can solve this. Indeed, remembering all of the stuff we wrote on the magic of exponential functions (more in particular, remembering that d[ex]/dx = ex), we can understand the proposed solution:

C1(t) = a·e−(i/ħ)·H11·t and C2(t) = b·e−(i/ħ)·H22·t

As Feynman notes: “These are just the amplitudes for stationary states with the energies E1 = H11 and E2 = H22.” Now let’s think about that. Indeed, I find the term ‘stationary’ state quite confusing, as it’s ill-defined. In this context, it basically means that we have a wavefunction that is determined by (i) a definite (i.e. unambiguous, or precise) energy level and (ii) that there is no spatial variation. Let me refer you to my post on the basics of quantum math here. We often use a sort of ‘Platonic’ example of the wavefunction indeed:

a·e−i·(ω·t − k∙x) = a·e−(i/ħ)·(E·t − p∙x)

So that’s a wavefunction assuming the particle we’re looking at has some well-defined energy E and some equally well-defined momentum p. Now, that’s kind of ‘Platonic’ indeed, because it’s more like an idea, rather than something real. Indeed, a wavefunction like that means that the particle is everywhere and nowhere, really—because its wavefunction is spread out all over space. Of course, we may think of the ‘space’ as some kind of confined space, like a box, and then we can think of this particle as being ‘somewhere’ in that box, and then we look at the temporal variation of this function only – which is what we’re doing now: we don’t consider the space variable x at all. So then the equation reduces to a·e–(i/ħ)·(E·t), and so… Well… Yes. We do find that our Hamiltonian coefficient Hii is like the energy of the | i 〉 state of our NH3 molecule, so we write: H11 = E1, and H22 = E2, and the ‘wavefunctions’ of our C1 and C2 coefficients can be written as:

  • C1 = a·e−(i/ħ)·H11·t = a·e−(i/ħ)·E1·t, with H11 = E1, and
  • C2 = b·e−(i/ħ)·H22·t = b·e−(i/ħ)·E2·t, with H22 = E2.

But can we interpret C1 and C2 as proper amplitudes? They are just coefficients in these equations, aren’t they? Well… Yes and no. From what we wrote in previous posts, you should remember that these Ci coefficients are equal to 〈 i | ψ 〉, so they are the amplitude to find our ammonia molecule in one state or the other.

Back to Feynman now. He adds, logically but brilliantly:

“We note, however, that for the ammonia molecule the two states |1〉 and |2〉 have a definite symmetry. If nature is at all reasonable, the matrix elements H11 and H22 must be equal. We’ll call them both E0, because they correspond to the energy the states would have if H12 and H21 were zero.”

So our C1 and C2 amplitudes then reduce to:

  • C1 = 〈 1 | ψ 〉 = a·e−(i/ħ)·E0·t
  • C2 = 〈 2 | ψ 〉 = a·e−(i/ħ)·E0·t

We can now take the absolute square of both to find the probability for the molecule to be in state 1 or in state 2:

  • |〈 1 | ψ 〉|² = |a·e−(i/ħ)·E0·t|² = a²
  • |〈 2 | ψ 〉|² = |a·e−(i/ħ)·E0·t|² = a²

Now, the probabilities have to add up to 1, so a² + a² = 1 and, therefore, the probability to be in either state 1 or state 2 is 0.5, which is what we’d expect.

Note: At this point, it is probably good to get back to our | ψ 〉 = | 1 〉 C1 + | 2 〉 C2 equation, so as to try to understand what it really says. Substituting the a·e−(i/ħ)·E0·t expression for C1 and C2 yields:

| ψ 〉 = | 1 〉 a·e−(i/ħ)·E0·t + | 2 〉 a·e−(i/ħ)·E0·t = [| 1 〉 + | 2 〉]·a·e−(i/ħ)·E0·t

Now, what is this saying, really? In our previous post, we explained this is an ‘open’ equation, so it actually doesn’t mean all that much: we need to ‘close’ or ‘complete’ it by adding a ‘bra’, i.e. a state like 〈 χ |, so we get a 〈 χ | ψ〉 type of amplitude that we can actually do something with. Now, in this case, our final 〈 χ | state is either 〈 1 | or 〈 2 |, so we write:

  • 〈 1 | ψ 〉 = [〈 1 | 1 〉 + 〈 1 | 2 〉]·a·e−(i/ħ)·E0·t = [1 + 0]·a·e−(i/ħ)·E0·t = a·e−(i/ħ)·E0·t
  • 〈 2 | ψ 〉 = [〈 2 | 1 〉 + 〈 2 | 2 〉]·a·e−(i/ħ)·E0·t = [0 + 1]·a·e−(i/ħ)·E0·t = a·e−(i/ħ)·E0·t

Note that I finally added the multiplication dot (·) because we’re talking proper amplitudes now and, therefore, we’ve got a proper product too: we multiply one complex number with another. We can now take the absolute square of both to find the probability for the molecule to be in state 1 or in state 2:

  • |〈 1 | ψ 〉|² = |a·e−(i/ħ)·E0·t|² = a²
  • |〈 2 | ψ 〉|² = |a·e−(i/ħ)·E0·t|² = a²

Unsurprisingly, we find the same thing: these probabilities have to add up to 1, so a² + a² = 1 and, therefore, the probability to be in state 1 or state 2 is 0.5. So the notation, and the logic behind it, makes perfect sense. But let me get back to the lesson now.

The point is: the true meaning of a ‘stationary’ state here, is that we have non-fluctuating probabilities. So they are and remain equal to some constant, i.e. 1/2 in this case. This implies that the state of the molecule does not change: there is no way to go from state 1 to state 2 and vice versa. Indeed, if we know the molecule is in state 1, it will stay in that state. [Think about what normalization of probabilities means when we’re looking at one state only.]

You should note that these non-varying probabilities are related to the fact that the amplitudes have a non-varying magnitude. The phase of these amplitudes varies in time, of course, but their magnitude is and remains equal to a, always. The amplitude is not being ‘enveloped’ by another curve, so to speak.

OK. That should be clear enough. Sorry I spent so much time on this, but this stuff on ‘stationary’ states comes back again and again and so I just wanted to clear that up as much as I can. Let’s get back to the story.

So we know that, what we’re describing above, is not what ammonia does really. As Feynman puts it: “The equations [i.e. the C1 and C2 equations above] don’t tell us what ammonia really does. It turns out that it is possible for the nitrogen to push its way through the three hydrogens and flip to the other side. It is quite difficult; to get half-way through requires a lot of energy. How can it get through if it hasn’t got enough energy? There is some amplitude that it will penetrate the energy barrier. It is possible in quantum mechanics to sneak quickly across a region which is illegal energetically. There is, therefore, some [small] amplitude that a molecule which starts in |1〉 will get to the state |2〉. The coefficients H12 and H21 are not really zero.”

He adds: “Again, by symmetry, they should both be the same—at least in magnitude. In fact, we already know that, in general, Hij must be equal to the complex conjugate of Hji.”

His next step, then, is to be interpreted as either a stroke of genius or, else, as unexplained. 🙂 He invokes the symmetry of the situation to boldly state that H12 is some real negative number, which he denotes as −A, and which – because it’s a real number (so the imaginary part is zero) – must be equal to its complex conjugate H21. So then Feynman does this fantastic jump in logic. First, he keeps using the E0 value for H11 and H22, motivating that as follows: “If nature is at all reasonable, the matrix elements H11 and H22 must be equal, and we’ll call them both E0, because they correspond to the energy the states would have if H12 and H21 were zero.” Second, he uses that minus A value for H12 and H21. In short, the two equations and six functions are now reduced to:

iħ·(dC1/dt) = E0·C1 − A·C2
iħ·(dC2/dt) = −A·C1 + E0·C2

Solving these equations is rather boring. Feynman does it as follows:

C1(t) = (a/2)·e−(i/ħ)·(E0−A)·t + (b/2)·e−(i/ħ)·(E0+A)·t
C2(t) = (a/2)·e−(i/ħ)·(E0−A)·t − (b/2)·e−(i/ħ)·(E0+A)·t

Now, what do these equations actually mean? It depends on those a and b coefficients. Looking at the solutions, the most obvious question to ask is: what if a or b are zero? If b is zero, then the second terms in both equations are zero, and so C1 and C2 are exactly the same: two amplitudes with the same temporal frequency ω = (E0 − A)/ħ. If a is zero, then C1 and C2 are the same too, but with opposite sign: two amplitudes with the same temporal frequency ω = (E0 + A)/ħ. Squaring them – in both cases (i.e. for a = 0 or b = 0) – yields, once again, an equal and constant probability for the spin of the ammonia molecule to be ‘up’ or ‘down’. To be precise, we can take the absolute square of both to find the probability for the molecule to be in state 1 or in state 2:

  • For b = 0: |〈 1 | ψ 〉|² = |(a/2)·e−(i/ħ)·(E0−A)·t|² = a²/4 = |〈 2 | ψ 〉|²
  • For a = 0: |〈 1 | ψ 〉|² = |(b/2)·e−(i/ħ)·(E0+A)·t|² = b²/4 = |〈 2 | ψ 〉|² (the minus sign in front of b/2 is squared away)

So we get two stationary states now. Why two instead of one? Well… You need to use your imagination a bit here. They actually reflect each other: they’re the same as the one stationary state we found when assuming our nitrogen atom could not ‘flip’ from one position to the other. It’s just that the introduction of that possibility now results in a sort of ‘doublet’ of energy levels. But so we shouldn’t waste our time on this, as we want to analyze the general case, for which the probabilities to be in state 1 or state 2 do vary in time. So that’s when a and b are non-zero.
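In matrix language, this ‘doublet’ is nothing but the eigenvalue problem for the Hamiltonian: the two stationary states are the eigenvectors of H, and E0 − A and E0 + A are its eigenvalues. A quick numerical confirmation (illustrative values once more):

```python
import numpy as np

E0, A = 1.0, 0.1
H = np.array([[E0, -A], [-A, E0]])

energies, states = np.linalg.eigh(H)
print(energies)    # [0.9 1.1], i.e. E0 − A and E0 + A
print(states.T)    # rows ≈ (1, 1)/√2 (the b = 0 case) and (1, −1)/√2 (the a = 0 case),
                   # up to an overall sign
```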

To analyze it all, we may want to start with equating t to zero. We then get:

C1(0) = a/2 + b/2 and C2(0) = a/2 − b/2, which, with the starting condition C1(0) = 1 and C2(0) = 0, become a/2 + b/2 = 1 and a/2 − b/2 = 0.

This leads us to conclude that a = b = 1, so our equations for C1(t) and C2(t) can now be written as:

C1(t) = (1/2)·e−(i/ħ)·(E0−A)·t + (1/2)·e−(i/ħ)·(E0+A)·t
C2(t) = (1/2)·e−(i/ħ)·(E0−A)·t − (1/2)·e−(i/ħ)·(E0+A)·t

Remembering our rules for adding and subtracting complex conjugates (eiθ + e–iθ = 2·cosθ and eiθ − e–iθ = 2i·sinθ), we can re-write this as:

C1(t) = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
C2(t) = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

Now these amplitudes are much more interesting. Their temporal variation is defined by E0 but, on top of that, we have an envelope here: the cos[(A/ħ)·t] and sin[(A/ħ)·t] factor respectively. So their magnitude is no longer time-independent: both the phase as well as the amplitude now vary with time. What’s going on here becomes quite obvious when calculating and plotting the associated probabilities, which are

  • |C1(t)|² = cos²[(A/ħ)·t], and
  • |C2(t)|² = sin²[(A/ħ)·t]

respectively (note that the absolute square of i is equal to 1, not −1). The graph of these functions is depicted below.

[Graph: the probabilities cos²[(A/ħ)·t] and sin²[(A/ħ)·t] sloshing back and forth]

As Feynman puts it: “The probability sloshes back and forth.” Indeed, the way to think about this is that, if our ammonia molecule is in state 1, then it will not stay in that state. In fact, one can be sure the nitrogen atom is going to flip at some point in time, with the probabilities being defined by those fluctuating probability functions above. Indeed, as time goes by, the probability to be in state 2 increases, until the molecule will effectively be in state 2. And then the cycle reverses.

Our | ψ 〉 = | 1 〉·C1 + | 2 〉·C2 equation is a lot more interesting now, as we have a proper mix of pure states: we never really know in what state our molecule will be, as we have these ‘oscillating’ probabilities, which we should interpret carefully.

The point to note is that the a = 0 and b = 0 solutions came with precise temporal frequencies: (E0 − A)/ħ and (E0 + A)/ħ respectively, which correspond to two separate energy levels: E0 − A and E0 + A respectively, with A = −H12 = −H21. So everything is related to everything once again: allowing the nitrogen atom to push its way through the three hydrogens, so as to flip to the other side, thereby breaking the energy barrier, is equivalent to associating two energy levels with the ammonia molecule as a whole, thereby introducing some uncertainty, or indefiniteness, as to its energy, and that, in turn, gives us the amplitudes and probabilities that we’ve just calculated.

Note that the probabilities “sloshing back and forth”, or “dumping into each other” – as Feynman puts it – is the result of the varying magnitudes of our amplitudes: they go up and down and, therefore, their absolute squares vary too.

So… Well… That’s it as an introduction to a two-state system. There’s more to come. Ammonia is used in the ammonia maser. Now that is something that’s interesting to analyze—both from a classical as well as from a quantum-mechanical perspective. Feynman devotes a full chapter to it, so I’d say… Well… Have a look. 🙂

Post scriptum: I must assume this analysis of the NH3 molecule, with the nitrogen ‘flipping’ across the hydrogens, triggers a lot of questions, so let me try to answer some. Let me first insert the illustration once more, so you don’t have to scroll up:

[Illustration: the two states of the NH3 molecule – the nitrogen atom sits above or below the plane of the three hydrogens, and the electric dipole moment μ flips accordingly]

The first thing that you should note is that the ‘flip’ involves a change in the position of the center of mass. So that requires energy, which is why we associate two different energy levels with the molecule: E0 + A and E0 − A. However, as mentioned above, we don’t care about the nitty-gritty here: the energy barrier is likely to combine a number of factors, including electrostatic forces, as evidenced by the flip in the electric dipole moment, which is what the μ symbol here represents! Just note that the two energy levels are separated by an amount that’s equal to 2·A, rather than A, and that, once again, it becomes obvious now why Feynman would prefer the Hamiltonian to be called the ‘energy matrix’, as its coefficients do represent specific energy levels, or differences between them! Now, that assumption yielded the following wavefunctions for C1 = 〈 1 | ψ 〉 and C2 = 〈 2 | ψ 〉:

  • C1 = 〈 1 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) + (1/2)·e^(−(i/ħ)·(E0 + A)·t)
  • C2 = 〈 2 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) − (1/2)·e^(−(i/ħ)·(E0 + A)·t)

Both are composite waves. To be precise, they are the sum of two component waves with a temporal frequency equal to ω1 = (E0 − A)/ħ and ω2 = (E0 + A)/ħ respectively. [As for the minus sign in front of the second term in the wavefunction for C2: −1 = e^(±iπ), so +(1/2)·e^(−(i/ħ)·(E0 + A)·t) and −(1/2)·e^(−(i/ħ)·(E0 + A)·t) are the same wavefunction: they only differ because their relative phase is shifted by ±π.]
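
In fact, you can verify numerically that this composite-wave form and the cosine/sine ‘envelope’ form we derived earlier are one and the same function. A quick sketch, with the same illustrative E0 and A values as before (and ħ = 1):

```python
import numpy as np

hbar, E0, A = 1.0, 1.0, 0.1       # illustrative values only
t = np.linspace(0, 100, 1000)

# composite-wave form: sum and difference of the two component waves
C1 = 0.5*np.exp(-1j*(E0 - A)*t/hbar) + 0.5*np.exp(-1j*(E0 + A)*t/hbar)
C2 = 0.5*np.exp(-1j*(E0 - A)*t/hbar) - 0.5*np.exp(-1j*(E0 + A)*t/hbar)

# envelope form: carrier at E0 times a cos or sin envelope
C1_env = np.exp(-1j*E0*t/hbar)*np.cos(A*t/hbar)
C2_env = 1j*np.exp(-1j*E0*t/hbar)*np.sin(A*t/hbar)

print(np.allclose(C1, C1_env), np.allclose(C2, C2_env))  # True True
```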

Now, writing things this way, rather than in terms of probabilities, makes it clear that the two base states of the molecule are each associated with both energy levels, so it is not like one state has more energy than the other. It’s just that the possibility of going from one state to the other requires an uncertainty about the energy, which is reflected by the energy doublet E0 ± A in the wavefunction of the base states. Now, if the wavefunction of the base states incorporates that energy doublet, then it is obvious that the state of the ammonia molecule, at any point in time, will also incorporate that energy doublet.

This triggers the following remark: what’s the uncertainty really? Is it an uncertainty in the energy, or is it an uncertainty in the wavefunction? I mean: we have a function relating the energy to a frequency. Introducing some uncertainty about the energy is mathematically equivalent to introducing uncertainty about the frequency. Think of it: two energy levels implies two frequencies, and vice versa. More generally, introducing n energy levels, or some continuous range of energy levels ΔE, amounts to saying that our wavefunction doesn’t have one specific frequency: it now has n frequencies, or a range of frequencies Δω = ΔE/ħ. Of course, the answer is: the uncertainty is in both, so it’s in the frequency and in the energy, and both are related through the wavefunction. So… In a way, we’re chasing our own tail.

Having said that, the energy may be uncertain, but it is real. It’s there, as evidenced by the fact that the ammonia molecule behaves like an atomic oscillator: we can excite it in exactly the same way as we can excite an electron inside an atom, i.e. by shining light on it. The only difference is the photon energies: to cause a transition in an atom, we use photons in the optical or ultraviolet range, and they give us the same radiation back. To cause a transition in an ammonia molecule, we only need photons with energies in the microwave range. Here, I should quickly remind you of the frequencies and energies involved. Visible light is radiation in the 400–800 terahertz range and, using the E = h·f equation, we can calculate the associated photon energies as 1.6 to 3.2 eV. Microwave radiation – as produced in your microwave oven – is typically in the range of 1 to 2.5 gigahertz, and the associated photon energy is 4 to 10 millionths of an eV. Having illustrated the difference in terms of the energies involved, I should add that masers and lasers are based on the same physical principle: LASER and MASER stand for Light/Micro-wave Amplification by Stimulated Emission of Radiation, respectively.
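
Those photon energies are easy to double-check with the E = h·f equation. A few lines suffice (the frequency values are the ones quoted above):

```python
h = 4.135667696e-15   # Planck constant in eV·s

# visible light (400-800 THz) and microwave-oven frequencies (1-2.5 GHz)
for f in [400e12, 800e12, 1e9, 2.5e9]:
    print(f"{f:.3e} Hz -> {h*f:.2e} eV")
# 400-800 THz gives ~1.65 to 3.31 eV; 1-2.5 GHz gives ~4.1e-06 to 1.0e-05 eV
```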

So… How shall I phrase this? There’s uncertainty, but the way we are modeling that uncertainty matters. So yes, the uncertainty in the frequency of our wavefunction and the uncertainty in the energy are mathematically equivalent, but the wavefunction has a meaning that goes much beyond that. [You may want to reflect on that yourself.]

Finally, another question you may have is why Feynman would take minus A (i.e. −A) for H12 and H21. Frankly, my first thought on this was that it should have something to do with the original equation for these Hamiltonian coefficients, which also has a minus sign: Uij(t + Δt, t) = δij + Kij(t)·Δt = δij − (i/ħ)·Hij(t)·Δt. For i ≠ j, this reduces to:

Uij(t + Δt, t) = + Kij(t)·Δt = − (i/ħ)·Hij(t)·Δt

However, the answer is: it really doesn’t matter. One could set H12 = H21 = +A, and we’d find the same equations: we’d just switch the indices 1 and 2, and the coefficients a and b, but we’d get the same solutions. You can figure that out yourself. Have fun with it!

Oh! And please do let me know if some of the stuff above triggers other questions. I am not sure if I’ll be able to answer them, but I’ll surely try, and good questions always help to ensure we sort of ‘get’ this stuff in a more intuitive way. Indeed, when everything is said and done, the goal of this blog is not to simply reproduce stuff, but to truly ‘get’ it, as well as we can. 🙂

Quantum math: the Hamiltonian

After all of the ‘rules’ and ‘laws’ we’ve introduced in our previous post, you might think we’re done but, of course, we aren’t. Things change. As Feynman puts it: “One convenient, delightful ‘apparatus’ to consider is merely a wait of a few minutes. During the delay, various things could be going on—external forces applied or other shenanigans—so that something is happening. At the end of the delay, the amplitude to find the thing in some state χ is no longer exactly the same as it would have been without the delay.”

In short, the picture we presented in the previous posts was a static one. Time was frozen. In reality, time passes, and so we now need to look at how amplitudes change over time. That’s where the Hamiltonian kicks in. So let’s have a look at that now.

[If you happen to understand the Hamiltonian already, you may want to have a look at how we apply it to a real situation: we’ll explain the basics involving state transitions of the ammonia molecule, which are a prerequisite to understanding how a maser works, which is not unlike a laser. But that’s for later. First we need to get the basics.]

Using Dirac’s bra-ket notation, which we introduced in the previous posts, we can write the amplitude to find a ‘thing’ – i.e. a particle, for example, or some system, of particles or other things – in some state χ at the time t = t2, when it was in some state φ at the time t = t1, as follows:

〈 χ | U(t2, t1) | φ 〉

Don’t be scared of this thing. If you’re unfamiliar with the notation, just check out my previous posts: we’re just replacing A by U, and the only thing that we’ve modified is that the amplitudes to go from φ to χ now depend on t1 and t2. Of course, we’ll describe all states in terms of base states, so we have to choose some representation and expand this expression, so we write: 

〈 χ | U(t2, t1) | φ 〉 = ∑ij 〈 χ | i 〉〈 i | U(t2, t1) | j 〉〈 j | φ 〉

I’ve explained the point a couple of times already, but let me note it once more: in quantum physics, we always measure some (vector) quantity – like angular momentum, or spin – in some direction, let’s say the z-direction, or the x-direction, or whatever direction really. Now we can do that in classical mechanics too, of course, and then we find the component of that vector quantity (vector quantities are defined by their magnitude and, importantly, their direction). However, in classical mechanics, we know the components in the x-, y- and z-direction will unambiguously determine that vector quantity. In quantum physics, it doesn’t work that way. The magnitude is never all in one direction only, so we can always find some of it in some other direction (see my post on transformations, or on quantum math in general). So there is an ambiguity in quantum physics that has no parallel in classical mechanics, and the concept of a component of a vector needs to be carefully interpreted. There’s nothing definite there, like in classical mechanics: all we have is amplitudes, and all we can do is calculate probabilities, i.e. expected values based on those amplitudes.

In any case, I can’t keep repeating this, so let me move on. In regard to that 〈 χ | U | φ 〉 expression, I should, perhaps, add a few remarks. First, why U instead of A? The answer: no special reason, but it’s true that the use of U reminds us of energy, like potential energy, for example. We might as well have used W. The point is: energy and momentum do appear in the argument of our wavefunctions, and so we might as well remind ourselves of that by choosing symbols like W or U here. Second, we may, of course, want to choose our time scale such that t1 = 0. However, it’s fine to develop the more general case. Third, it’s probably good to remind ourselves we can think of matrices to model it all. More in particular, if we have three base states, say ‘plus‘, ‘zero‘, or ‘minus‘, and denoting 〈 i | φ 〉 and 〈 i | χ 〉 as Ci and Di respectively (so 〈 χ | i 〉 = 〈 i | χ 〉* = Di*), then we can re-write the expanded expression above as:

〈 χ | U | φ 〉 = ∑ij Di*·Uij·Cj, i.e. the row vector [D+*  D0*  D−*] times the 3×3 matrix of the Uij coefficients times the column vector with the C+, C0 and C− components.

Fourth, you may have heard of the S-matrix, which is also known as the scattering matrix (which explains the S in front), but it’s actually a more general thing. Feynman defines the S-matrix as the U(t2, t1) matrix for t1 → −∞ and t2 → +∞, so as some kind of limiting case of U. That’s true in the sense that the S-matrix is used to relate initial and final states, indeed. However, the relation between the S-matrix and the so-called evolution operators U is slightly more complex than he suggests. I can’t say too much about this now, so I’ll just refer you to the Wikipedia article on that, as I have to move on.

The key to the analysis is to break things up once more. More in particular, one should appreciate that we could look at three successive points in time, t1, t2, t3, and write U(t3, t1) as:

U(t3, t1) = U(t3, t2)·U(t2, t1)

It’s just like adding another apparatus in series, so it’s just like what we did in our previous post, when we wrote:

〈 χ | B·A | φ 〉 = ∑ij 〈 χ | i 〉〈 i | B | j 〉〈 j | A | φ 〉

So we just put a | bar between B and A and wrote it all out. That | bar is really like a factor 1 in multiplication but – let me caution you – you really need to watch the order of the various factors in your product, and read symbols in the right order, which is often from right to left, like in Hebrew or Arabic, rather than from left to right. In that regard, you should note that we wrote U(t3, t1) rather than U(t1, t3): you need to keep your wits about you here! So as to make sure we can all appreciate that point, let me show you what that U(t3, t1) = U(t3, t2)·U(t2, t1) equation actually says by spelling it out for two base states only (like ‘up‘ or ‘down‘, which I’ll denote as ‘+’ and ‘−’ again):

Uij(t3, t1) = ∑k Uik(t3, t2)·Ukj(t2, t1), with i, j, k = + or −, i.e.:

U++(t3, t1) = U++(t3, t2)·U++(t2, t1) + U+−(t3, t2)·U−+(t2, t1)
U+−(t3, t1) = U++(t3, t2)·U+−(t2, t1) + U+−(t3, t2)·U−−(t2, t1)
U−+(t3, t1) = U−+(t3, t2)·U++(t2, t1) + U−−(t3, t2)·U−+(t2, t1)
U−−(t3, t1) = U−+(t3, t2)·U+−(t2, t1) + U−−(t3, t2)·U−−(t2, t1)
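
If you want to see the order-of-factors point in action, here’s a toy sketch with random 2×2 ‘evolution’ matrices. The matrices are made up, of course: the point is only that matrix multiplication does not commute, so U(t3, t2)·U(t2, t1) and U(t2, t1)·U(t3, t2) are different things:

```python
import numpy as np

rng = np.random.default_rng(0)
U21 = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))  # U(t2, t1)
U32 = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))  # U(t3, t2)

U31 = U32 @ U21                      # U(t3, t1) = U(t3, t2)·U(t2, t1): right to left!
print(np.allclose(U31, U21 @ U32))   # False: the order of the factors matters
```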

So now you appreciate why we try to simplify our notation as much as we can! But let me get back to the lesson. To explain the Hamiltonian, which we need to describe how states change over time, Feynman embarks on a rather spectacular differential analysis. Now, we’ve done such exercises before, so don’t be too afraid. He substitutes t1 for t tout court, and t2 for t + Δt, with Δt the infinitesimal you know from Δy = (dy/dx)·Δx, with the derivative dy/dx being defined as the Δy/Δx ratio for Δx → 0. So we write U(t2, t1) = U(t + Δt, t). Now, we also explained the idea of an operator in our previous post. It came up when we were being creative, and so we dropped the 〈 χ | state from the 〈 χ | A | φ 〉 expression and just wrote:

| ψ 〉 = A | φ 〉

If you ‘get’ that, you’ll also understand what I am writing now:

| ψ(t2) 〉 = U(t2, t1) | ψ(t1) 〉

This is quite abstract, however. It is an ‘open’ equation, really: one needs to ‘complete’ it with a ‘bra’, i.e. a state like 〈 χ |, so as to give a 〈 χ | ψ 〉 = 〈 χ | A | φ 〉 type of amplitude that actually means something. What we’re saying is that our operator (or our ‘apparatus’, if it helps you to think that way) does not mean all that much as long as we don’t measure what comes out, so we have to choose some set of base states, i.e. a representation, which allows us to describe the final state, which we write as 〈 χ |. In fact, what we’re interested in are the following amplitudes:

〈 i | ψ(t2) 〉 = 〈 i | U(t2, t1) | ψ(t1) 〉

So now we’re in business, really. 🙂 If we can find those amplitudes, for each of our base states i, we know what’s going on. Of course, we’ll want to express our ψ(t) state in terms of our base states too, so the expression we should be thinking of is:

〈 i | ψ(t + Δt) 〉 = ∑j 〈 i | U(t + Δt, t) | j 〉〈 j | ψ(t) 〉

Phew! That looks rather unwieldy, doesn’t it? You’re right. It does. So let’s simplify. We can do the following substitutions:

  • 〈 i | ψ(t + Δt)〉 = Ci(t + Δt) or, more generally, 〈 j | ψ(t)〉 = Cj(t)
  • 〈 i | U(t2, t1) | j〉 = Uij(t2, t1) or, more specifically, 〈 i | U(t + Δt, t) | j〉 = Uij(t + Δt, t)

Ci(t + Δt) = ∑j Uij(t + Δt, t)·Cj(t)

As Feynman notes, that’s what the dynamics of quantum mechanics really look like. But, of course, we do need something in terms of derivatives rather than in terms of differentials. That’s where the Δy = (dy/dx)·Δx equation comes in. The analysis looks kinda dicey because it’s like doing some kind of first-order linear approximation of things – rather than an exact kinda thing – but that’s how it is. Let me remind you of the following formula: if we write our function y as y = f(x), and we’re evaluating the function near some point a, then our Δy = (dy/dx)·Δx equation can be used to write:

y = f(x) ≈ f(a) + f'(a)·(x − a) = f(a) + (dy/dx)·Δx

To remind yourself of how this works, you can complete the drawing below with the actual y = f(x) as opposed to the f(a) + Δy approximation, remembering that the (dy/dx) derivative gives you the slope of the tangent to the curve, but it’s all kids’ stuff really and so we shouldn’t waste too much spacetime on this. 🙂

[Figure: the tangent-line approximation f(a) + f′(a)·(x − a) to a curve y = f(x)]

The point is: our Uij(t + Δt, t) is a function too: not only of time, but also of i and j. It’s just a rather special function, because we know that, for Δt → 0, Uij will be equal to 1 if i = j (in plain language: if Δt goes to zero, nothing happens and we just stay in state i), and equal to 0 if i ≠ j. That’s just as per the definition of our base states. Indeed, remember the first ‘rule’ of quantum math:

〈 i | j 〉 = 〈 j | i 〉 = δij, with δij = δji equal to 1 if i = j, and zero if i ≠ j

So we can write our f(x) ≈ f(a) + (dy/dx)·Δx expression for Uij as:

Uij(t + Δt, t) = δij + Kij·Δt

So Kij is also some kind of derivative, and the Kronecker delta, i.e. δij, serves as the reference point around which we’re evaluating Uij. However, that’s about as far as the comparison goes. We need to remind ourselves that we’re talking complex-valued amplitudes here. In that regard, it’s probably also good to remind ourselves once more that we need to watch the order of stuff: Uij = 〈 i | U | j 〉, so that’s the amplitude to go from base state j to base state i, rather than the other way around. Of course, we have the 〈 χ | φ 〉 = 〈 φ | χ 〉* rule, but we still need to see how that plays out with an expression like 〈 i | U(t + Δt, t) | j 〉. So, in short, we should be careful here!

Having said that, we can actually play a bit with that expression, and so that’s what we’re going to do now. The first thing we’ll do is to write Kij as a function of time indeed:

Kij = Kij(t)

So we don’t have that Δt in the argument. It’s just like dy/dx = f'(x): a derivative is a derivative, i.e. a function which we derive from some other function. However, we’ll do something weird now: just like any function, we can multiply or divide it by some constant c, so we can write something like G(x) = c·F(x), which is equivalent to saying that F(x) = G(x)/c. I know that sounds silly, but it is how it is, and we can also do it with complex-valued functions: we can define some other function by multiplying or dividing by some complex-valued constant, like a + b·i, or ξ, or whatever other constant. Just note that the i here is no longer a base state but the imaginary unit. So it’s all done so as to confuse you even more. 🙂

So let’s take −i/ħ as our constant and re-write our Kij(t) function as −i/ħ times some other function, which we’ll denote by Hij(t), so Kij(t) = −(i/ħ)·Hij(t). You guessed it, of course: Hij(t) is the infamous Hamiltonian, and it’s written the way it’s written both for historical as well as for practical reasons, which you’ll soon discover. Of course, we’re talking one coefficient only here: we’ll have nine of them if we have three base states i and j, or four if we have only two. So we’ve got an n-by-n matrix once more. As for its name… Well… As Feynman notes: “How Hamilton, who worked in the 1830s, got his name on a quantum mechanical matrix is a tale of history. It would be much better called the energy matrix, for reasons that will become apparent as we work with it.”

OK. So we’ll just have to acknowledge that and move on. Our Uij(t + Δt, t) = δij + Kij(t)·Δt expression becomes:

Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt
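
As a sanity check, we can apply that first-order step over and over again and see whether it reproduces the ‘sloshing’ probabilities we found for the ammonia molecule earlier in this document. A sketch, with illustrative E0 and A values and ħ = 1, so not the real NH3 numbers:

```python
import numpy as np

hbar, E0, A = 1.0, 1.0, 0.1                # illustrative values only
H = np.array([[E0, -A], [-A, E0]], dtype=complex)

dt, steps = 0.001, 5000
U_step = np.eye(2) - 1j*H*dt/hbar          # Uij(t + Δt, t) = δij − (i/ħ)·Hij·Δt

C = np.array([1.0, 0.0], dtype=complex)    # start in state 1
for _ in range(steps):
    C = U_step @ C                         # one small time step at a time

t = dt*steps
print(abs(C[0])**2, np.cos(A*t/hbar)**2)   # ≈ 0.774 vs 0.770: close
```

The small discrepancy is the price of the first-order approximation: the step matrix is not exactly unitary, so the norm drifts a little as we keep multiplying.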

[Isn’t it great you actually start to understand those Chinese-looking formulas? :-)] We’re not there yet, however. In fact, we’ve still got quite a bit of ground to cover. We now need to take that other monster:

Ci(t + Δt) = ∑j Uij(t + Δt, t)·Cj(t)

So let’s substitute now. We get:

Ci(t + Δt) = ∑j [δij − (i/ħ)·Hij(t)·Δt]·Cj(t) = Ci(t) − (i/ħ)·Δt·∑j Hij(t)·Cj(t)

We can get this in the form we want to get – so that’s the form you’ll find in textbooks 🙂 – by noting that the ∑ δij·Cj(t) sum, taken over all j, is quite simply equal to Ci(t). [Think about the indexes here: we’re looking at some given i, and so it’s only the j that takes on whatever value it can possibly have.] So we can move that term to the other side, which gives us Ci(t + Δt) − Ci(t). We can then divide both sides of our expression by Δt, which gives us an expression like [f(x + Δx) − f(x)]/Δx = Δy/Δx, which is actually the definition of the derivative for Δx going to zero. Now, that allows us to re-write the whole thing in terms of a proper derivative, rather than having to work with this rather unwieldy differential stuff. So, if we substitute d[Ci(t)]/dt for [Ci(t + Δt) − Ci(t)]/Δt, and then also move −(i/ħ) to the left-hand side, remembering that 1/i = −i (and, hence, [−(i/ħ)]⁻¹ = i·ħ), we get the formula in the shape we wanted it in:

iħ·(d[Ci(t)]/dt) = ∑j Hij(t)·Cj(t)
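
In matrix form, this set reads iħ·dC/dt = H·C and, for a time-independent H, the exact solution is C(t) = e^(−(i/ħ)·H·t)·C(0). Here’s a sketch using SciPy’s matrix exponential, with the same illustrative two-state numbers as above:

```python
import numpy as np
from scipy.linalg import expm

hbar, E0, A = 1.0, 1.0, 0.1                   # illustrative values only
H = np.array([[E0, -A], [-A, E0]], dtype=complex)

t = 5.0
C0 = np.array([1.0, 0.0], dtype=complex)      # start in state 1
C = expm(-1j*H*t/hbar) @ C0                   # exact evolution operator U(t, 0)

print(abs(C[0])**2, np.cos(A*t/hbar)**2)      # identical, up to rounding
print(abs(C[1])**2, np.sin(A*t/hbar)**2)
```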

Done! Of course, this is a set of differential equations and… Well… Yes. Yet another set of differential equations. 🙂 It seems like we can’t solve anything without involving differential equations in physics, doesn’t it? But… Well… I guess that’s the way it is. So, before we turn to some example, let’s note a few things.

First, we know that a particle, or a system, must be in some state at any point in time. That’s equivalent to stating that the sum of the probabilities |Ci(t)|² = |〈 i | ψ(t) 〉|², taken over all i, is some constant. In fact, we’d like to say it’s equal to one, but then we haven’t normalized anything here. You can fiddle with the formulas, but it’s probably easier to just acknowledge that, if we’d measure anything – think of the angular momentum along the z-direction, or some other direction, if you’d want an example – then we’ll find it’s either ‘up’ or ‘down’ for a spin-1/2 particle, or ‘plus’, ‘zero’, or ‘minus’ for a spin-1 particle.

Now, we know that the complex conjugate of a sum is equal to the sum of the complex conjugates: [∑ zi]* = ∑ zi*, and that the complex conjugate of a product is the product of the complex conjugates, so we have [∑ zi·zj]* = ∑ zi*·zj*. Now, some fiddling with the formulas above should allow you to prove that conservation of probability requires Hij* = Hji. A matrix satisfying that condition is equal to its own conjugate transpose and is referred to as Hermitian, or self-adjoint. If the original Hamiltonian matrix is denoted as H, then its conjugate transpose will be denoted by H*, H† or even H^H (so the H in the superscript stands for Hermitian, instead of Hamiltonian). So… Yes. There are competing notations around. 🙂
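
A quick way to convince yourself that the Hij* = Hji condition is what keeps the total probability constant is to evolve a state with a Hermitian matrix and with a non-Hermitian one, and watch the norm. A sketch, with made-up numbers:

```python
import numpy as np
from scipy.linalg import expm

def norm_after(H, t=5.0, hbar=1.0):
    """Total probability after evolving (1, 0) with the given matrix."""
    C = expm(-1j*np.asarray(H, dtype=complex)*t/hbar) @ np.array([1.0, 0.0])
    return np.sum(np.abs(C)**2)

H_hermitian = [[1.0, -0.1], [-0.1, 1.0]]   # H12* = H21: rule satisfied
H_not       = [[1.0, -0.1], [+0.1, 1.0]]   # violates the rule

print(norm_after(H_hermitian))   # 1.0: probability conserved
print(norm_after(H_not))         # ~1.54: probability drifts away from 1
```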

The simplest situation, of course, is when the Hamiltonian coefficients do not depend on time. In that case, we’re back in the static case, and all Hij coefficients are just constants. For a system with two base states, we’d have the following set of equations:

iħ·(dC1/dt) = H11·C1 + H12·C2
iħ·(dC2/dt) = H21·C1 + H22·C2

This set of two equations can be easily solved by remembering the solution for one equation only. Indeed, if we assume there’s only one base state – which is like saying: the particle is at rest somewhere (yes: it’s that stupid!) – our set of equations reduces to one:

iħ·(dC1/dt) = H11·C1

This is a differential equation which is easily solved to give:

C1(t) = a·e^(−(i/ħ)·H11·t)

[As for being ‘easily solved’, just remember the exponential function is its own derivative and, therefore, d[a·e^(−(i/ħ)·H11·t)]/dt = a·d[e^(−(i/ħ)·H11·t)]/dt = −a·(i/ħ)·H11·e^(−(i/ħ)·H11·t), which gives you the differential equation back, so… Well… That’s the solution.]

This should, of course, remind you of the equation that inspired Louis de Broglie to write down his now famous matter-wave equation (see my post on the basics of quantum math):

a·e^(i·θ) = a·e^(−i·(ω·t − k∙x)) = a·e^(−(i/ħ)·(E·t − p∙x))

Indeed, if we look at the temporal variation of this function only – so we don’t consider the space variable x – then this equation reduces to a·e^(−(i/ħ)·E·t), and so we find that our Hamiltonian coefficient H11 is equal to the energy of our particle, so we write: H11 = E, which, of course, explains why Feynman thinks the Hamiltonian matrix should be referred to as the energy matrix. As he puts it: “The Hamiltonian is the generalization of the energy for more complex situations.”
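
You can also check numerically that e^(−(i/ħ)·E·t) does solve iħ·dC1/dt = H11·C1 with H11 = E. A sketch using a central-difference derivative (the E value is, once again, purely illustrative):

```python
import numpy as np

hbar, E = 1.0, 1.0                            # illustrative values only
C = lambda t: np.exp(-1j*E*t/hbar)            # the proposed solution

t, dt = 2.0, 1e-6
dC_dt = (C(t + dt) - C(t - dt)) / (2*dt)      # numerical derivative

print(1j*hbar*dC_dt)                          # both print ≈ E·C(t):
print(E*C(t))                                 # the equation is satisfied
```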

Now, I’ll conclude this post by giving you the answer to Feynman’s remark on why the 19th-century Irish mathematician William Rowan Hamilton should be associated with the Hamiltonian. The truth is: the term ‘Hamiltonian matrix’ may also refer to a more general notion. Let me copy Wikipedia here: “In mathematics, a Hamiltonian matrix is a 2n-by-2n matrix A such that JA is symmetric, where J is the skew-symmetric matrix

J = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}

and In is the n-by-n identity matrix. In other words, A is Hamiltonian if and only if (JA)ᵀ = JA, where (·)ᵀ denotes the transpose.” So… That’s the answer. 🙂 And there’s another reason too: Hamilton invented the quaternions and… Well… I’ll leave it to you to check out what these have got to do with quantum physics. 🙂
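
For completeness, here’s a sketch of that purely mathematical definition, checking whether (J·A)ᵀ = J·A for a made-up 2n-by-2n matrix with n = 1:

```python
import numpy as np

n = 1
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])   # the skew-symmetric J

A = np.array([[1.0,  2.0],
              [3.0, -1.0]])       # made-up example: diagonal entries cancel

JA = J @ A
print(np.allclose(JA, JA.T))      # True: A is Hamiltonian in the math sense
```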

[…] Oh! And what about the maser example? Well… I am a bit tired now, so I’ll just refer you to Feynman’s exposé on it. It’s not that difficult if you understood all of the above. In fact, it’s actually quite straightforward, and so I really recommend you work your way through the example, as it will give you a much better ‘feel’ for the quantum-mechanical framework we’ve developed so far. In fact, walking through the whole thing is like a kind of ‘reward’ for having worked so hard on the more abstract stuff in this and my previous posts. So… Yes. Just go for it! 🙂 [And, just in case you don’t want to go for it, I did write a little introduction to it in the following post. :-)]