Quantum-mechanical operators

We climbed a mountain—step by step—post by post. :-) We have reached the top now, and the view is gorgeous. We understand Schrödinger’s equation, which describes how amplitudes propagate through space-time. It’s the quintessential quantum-mechanical expression. Let’s enjoy now, and deepen our understanding by introducing the concept of (quantum-mechanical) operators. Let me do so by first making a remark on notation. You’ll remember we wrote Schrödinger’s famous equation as:

iħ·∂ψ/∂t = −(ħ²/2m)·∇²ψ + V·ψ

However, you may have seen the following inscription on his bust, or on his grave, or—somewhat less morbid or deferential—just the following formula, which summarizes the whole expression as:

iħ·∂ψ/∂t = H·ψ

The H in this expression is, of course, not the Hamiltonian matrix, but an operator. So the same symbol (H) is used to denote two different things. To distinguish the two, we should use the hat symbol, writing the matrix as A and the operator as Â. It’s just like studying statistics, where the hat is supposed to distinguish some estimator function (â) from the parameter itself (α), or from the estimate of the parameter, i.e. the observation (a). However, you’ll surely remember the hat disappeared pretty quickly in your statistics course, because the context is usually enough to see what’s meant. So… Well… I’ll be sloppy as well here, if only because the WordPress editor offers very few symbols with a hat! :-)

Now, you’ll note the H operator in the expression above is pretty monstrous as it’s, obviously, identical to:

H = −(ħ²/2m)·∇² + V(x, y, z)

As you can see, this H operator actually consists of two other operators: (1) the ∇² operator, which you know (∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²), and (2) the V(x, y, z) ‘operator’, which—in this particular context—just means: “multiply with V”. [Needless to say, V is the potential here, and so it captures the presence of external force fields.]
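To make that two-piece structure less abstract, here’s a quick numerical sketch of the H operator in one dimension (Python/NumPy; the natural units ħ = m = 1, the grid, and the harmonic potential V = x²/2 are all my choices, just for illustration): the derivative piece becomes a tridiagonal finite-difference matrix, and the V piece becomes a diagonal matrix.

```python
import numpy as np

hbar, m = 1.0, 1.0                       # natural units: my choice for this sketch
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
V = 0.5 * x**2                           # an illustrative (harmonic) potential

# Piece 1: -(hbar^2/2m)*d^2/dx^2 as a tridiagonal finite-difference matrix
D2 = (np.diag(np.full(x.size, -2.0)) +
      np.diag(np.ones(x.size - 1), 1) +
      np.diag(np.ones(x.size - 1), -1)) / dx**2
# Piece 2: 'multiply with V' is just a diagonal matrix
H = -hbar**2 / (2 * m) * D2 + np.diag(V)

# For this V, psi = exp(-x^2/2) happens to be an energy eigenstate with E = 1/2,
# so H acting on psi should just rescale it by 0.5 (away from the grid edges):
psi = np.exp(-x**2 / 2)
mask = np.abs(x) < 2
print(np.allclose((H @ psi)[mask] / psi[mask], 0.5, atol=1e-3))   # True
```

Nothing here is specific to the harmonic oscillator: swap in any V(x) and the same two pieces give you the H operator on a grid.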

So… Well… This H here surely looks very different from the quantum-mechanical Hamiltonian operator we discussed when dealing with a finite set of base states: that H was nothing but the Hamiltonian matrix operating on some state.

Having said that, it shouldn’t surprise you if I say that, despite the fact that they look so different, these two operators are actually equivalent: the only difference is that one is designed to operate on a (state) vector, while the other is designed to operate on a continuous function. Their interpretation is similar, as evidenced from the fact that both are being referred to as the energy operator in quantum physics.

So… Yes… Let’s talk about that. But let’s first review the basics.

What is all that psi-chology?

We’ll need to go from what is referred to as matrix mechanics to what is referred to as wave mechanics. So… Well… Let’s start with matrix mechanics. The matrix-mechanical approach is summarized in that set of Hamiltonian equations which, by now, you know so well:

iħ·(dCi/dt) = ∑j Hij Cj

You understand this equation, but, even then, it’s always good to remind oneself of the description of a state:

|ψ〉 = ∑i |i〉Ci = ∑i |i〉〈i|ψ〉

|ψ〉 is the state of a system, like the ground state of a hydrogen atom, or one of its many excited states. You also know that the lifetime of a system in an excited state is usually short: some spontaneous or induced emission of a quantum of energy (i.e. a photon) will ensure that the system quickly returns to a less excited state, or to the ground state itself. However, that doesn’t impact the analysis here: we’re looking at the state of the system at some point in time here. That’s all. In fact, that’s why we introduce the concept of operators: the state of the system will, inevitably, change—as time goes by.

That’s clear enough. However, I should warn you here. There’s this potential confusion. It’s caused by the ubiquity of the ψ symbol (i.e. the Greek letter psi). It’s really something psi-chological. :-) In matrix mechanics, our ψ just denotes a state of a system, which could be an atom, which we’d describe by the orbital(s) of the electron(s) around it. In this regard, I found the following illustration from Wikipedia particularly helpful: the green orbitals show excitations of copper (Cu) orbitals on a CuO2 plane. [The two big arrows just illustrate the principle of X-ray spectroscopy, so it’s an X-ray probing the structure of the material.]

[Illustration: excitations of Cu orbitals on a CuO2 plane in a high-Tc superconductor (Wikipedia)]

So… Well… We’d write ψ as |ψ〉 just to remind ourselves we’re talking of some state of the system indeed. However, quantum physicists will also use the psi symbol to denote some specific Ci amplitude (or coefficient) in that |ψ〉 = ∑ |i〉Ci formula above. To be specific, they’d replace the base states |i〉 by the continuous position variable x, and they would write the following:

Ci = 〈i|ψ〉 → ψ(x) = 〈x|ψ〉 = C(x)

In fact, that’s just like writing:

φ(p) = 〈 mom p | ψ 〉 = 〈p|ψ〉 = C(p)

What they’re doing here, is (1) reduce the ‘system‘ to a ‘particle‘ once more (which is OK, as long as you know what you’re doing) and (2) they basically state the following:

If a particle is in some state |ψ〉, then we can associate some wavefunction ψ(x) or φ(p) with it, and that wavefunction will represent the amplitude for the system (i.e. our particle) to be at x, or to have a momentum that’s equal to p.

So they should have written χ(x) instead of ψ(x), I feel, so as to avoid confusion: one should not use the same symbol for the |ψ〉 state and the ψ(x) wavefunction. The point is: the position or the momentum, or even the energy, are properties of the |ψ〉 state and, therefore, it’s really confusing to use the same symbol psi (ψ) to describe (1) the state, and (2) the wavefunction of just one of the various properties of that state (in this case: its position). In fact, that’s what this post is all about: it’s about how to describe certain properties of the system. Of course, we’re talking quantum mechanics here and, hence, uncertainty, and, therefore, we’re going to talk about the average position, energy, momentum, etcetera that’s associated with a particular state of a system, or—as we’ll keep things very simple—the properties of a ‘particle’. Think of an electron in some orbital, indeed! :-)

So let’s now look at that set of Hamiltonian equations once again:

iħ·(dCi/dt) = ∑j Hij Cj

Looking at it carefully – so just look at it once again! :-) – and thinking about what we did when going from the discrete to the continuous setting, we can now understand we should write the following for the continuous case:

iħ·∂ψ(x)/∂t = ∫ H(x, x’)·ψ(x’) dx’

Of course, combining Schrödinger’s equation with the expression above implies the following:

∫ H(x, x’)·ψ(x’) dx’ = −(ħ²/2m)·∂²ψ(x)/∂x² + V(x)·ψ(x)

Now how can we relate that integral to the expression on the right-hand side? I’ll have to disappoint you here, as it requires a lot of math to transform that integral. It requires writing H(x, x’) in terms of rather complicated functions, including – you guessed it, didn’t you? – Dirac’s delta function. Hence, I assume you’ll believe me if I say that the matrix- and wave-mechanical approaches are actually equivalent. In any case, if you’d want to check it, you can always read Feynman yourself. :-)

Now, I wrote this post to talk about quantum-mechanical operators, so let me do that now.

Quantum-mechanical operators

You know the concept of an operator. As mentioned above, we should put a little hat (^) on top of our Hamiltonian operator, so as to distinguish it from the matrix itself. However, as mentioned above, the difference is usually quite clear from the context. Our operators were all matrices so far, and we’d write the matrix elements of, say, some operator A, as:

Aij ≡ 〈 i | A | j 〉

The whole matrix itself, however, would usually not act on a base state but… Well… Just on some state ψ, to produce some new state φ, and so we’d write:

| φ 〉 = A | ψ 〉

Of course, we’d have to describe | φ 〉 in terms of the (same) set of base states and, therefore, we’d expand this expression into something like this:

〈 i | φ 〉 = ∑j 〈 i | A | j 〉〈 j | ψ 〉 = ∑j Aij Cj

You get the idea. I should just add one more thing. You know this important property of amplitudes: the 〈 ψ | φ 〉 amplitude is the complex conjugate of the 〈 φ | ψ 〉 amplitude. It’s got to do with time reversibility: the complex conjugate of e^(iθ) = e^(i(ω·t−k·x)) is e^(−iθ) = e^(−i(ω·t−k·x)), so we’re just reversing the x– and t-direction. We write:

 〈 ψ | φ 〉 = 〈 φ | ψ 〉*

Now what happens if we want to take the complex conjugate when we insert a matrix? So, when writing 〈 φ | A | ψ 〉 instead of 〈 φ | ψ 〉, the rule becomes:

〈 φ | A | ψ 〉* = 〈 ψ | A† | φ 〉

The dagger symbol denotes the conjugate transpose, so A† is an operator whose matrix elements are equal to (A†)ij = Aji*. Now, it may or may not happen that the A† matrix is actually equal to the original A matrix. In that case – and only in that case – we can write:

〈 ψ | A | φ 〉 = 〈 φ | A | ψ 〉*

We then say that A is a ‘self-adjoint’ or ‘Hermitian’ operator. That’s just a definition of a property, but many quantum-mechanical operators are actually Hermitian. In any case, we’re well armed now to discuss some actual operators, and we’ll start with that energy operator.
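If you want to see the dagger rule at work, here’s a small numerical check (NumPy; the operator and the two states are just random numbers I made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # an arbitrary operator
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)

A_dagger = A.conj().T                        # conjugate transpose: (A†)ij = Aji*
lhs = np.vdot(phi, A @ psi).conjugate()      # 〈φ|A|ψ〉*  (vdot conjugates its first argument)
rhs = np.vdot(psi, A_dagger @ phi)           # 〈ψ|A†|φ〉
print(np.isclose(lhs, rhs))                  # True

H = A + A_dagger                             # H† = H, so this one is Hermitian
print(np.isclose(np.vdot(psi, H @ phi), np.vdot(phi, H @ psi).conjugate()))  # True
```

The second check is the Hermitian case: for H = A + A†, the 〈ψ|H|φ〉 = 〈φ|H|ψ〉* property holds with the same matrix on both sides.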

The energy operator (H)

We know the state of a system is described in terms of a set of base states. Now, our analysis of N-state systems showed we can always describe it in terms of a special set of base states, which are referred to as the states of definite energy because… Well… Because they’re associated with some definite energy. In that post, we referred to these energy levels as En (n = I, II,…, N), writing the subscript n in boldface because of these Roman numerals. With each energy level, we could associate a base state, of definite energy indeed, that we wrote as |n〉. To make a long story short, we summarized our results as follows:

  1. The energies EI, EII,…, En,…, EN are the eigenvalues of the Hamiltonian matrix H.
  2. The state vectors |n〉 that are associated with each energy En, i.e. the set of vectors |n〉, are the corresponding eigenstates.
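As a quick illustration of that eigenvalue language (NumPy; a toy two-state Hamiltonian with made-up numbers, in the spirit of the ammonia example):

```python
import numpy as np

# A toy two-state Hamiltonian: some energy E0 on the diagonal, a flip
# amplitude -A off the diagonal. The numbers are made up for illustration.
E0, A = 1.0, 0.2
H = np.array([[E0, -A],
              [-A, E0]])

energies, states = np.linalg.eigh(H)      # eigenvalues E_i and eigenstates
print(energies)                           # ≈ [0.8, 1.2], i.e. E0 − A and E0 + A
# Each column of `states` is an eigenstate: H|η_i〉 = E_i|η_i〉
print(np.allclose(H @ states[:, 0], energies[0] * states[:, 0]))   # True
```

So the measurable energies drop out as the eigenvalues, and the corresponding columns are the states of definite energy.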

We’ll be working with some more subscripts in what follows, and these Roman numerals and the boldface notation are somewhat confusing (if only because I don’t want you to think of these subscripts as vectors), so we’ll just denote EI, EII,…, En,…, EN as E1, E2,…, Ei,…, EN, and we’ll number the states of definite energy accordingly, also using some Greek letter so as to clearly distinguish them from all our Latin letter symbols: we’ll write these states as: |η1〉, |η2〉,… |ηN〉. [If I say ‘we’, I mean Feynman, of course. You may wonder why he doesn’t write |Ei〉, or |εi〉. The answer is: writing |Ei〉 would cause confusion, because this state would appear in expressions like |Ei〉Ei, so that’s the ‘product’ of a state and the associated scalar. Too confusing. As for using η (eta) instead of ε (epsilon) to denote something that’s got to do with energy… Well… I guess he wanted to keep the resemblance with the n, and the Ancient Greeks apparently used this η letter for a sound like ‘e‘, so… Well… Why not? Let’s get back to the lesson.]

Using these base states of definite energy, we can write the state of the system as:

|ψ〉 = ∑ |ηi〉 Ci = ∑ |ηi〉〈ηi|ψ〉    over all i (i = 1, 2,… , N)

Now, we didn’t talk all that much about what these base states actually mean in terms of measuring something but you’ll believe me if I say that, when measuring the energy of the system, we’ll always measure one or the other E1, E2,…, Ei,…, EN value. We’ll never measure something in-between: it’s either-or. Now, as you know, measuring something in quantum physics is supposed to be destructive but… Well… Let us imagine we could make a thousand measurements to try to determine the average energy of the system. We’d do so by counting the number of times we measure E1 (and of course we’d denote that number as N1), E2, E3, etcetera. You’ll agree that we’d measure the average energy as:

Eav = (N1·E1 + N2·E2 + … + NN·EN)/(N1 + N2 + … + NN) = ∑ Ni·Ei/∑ Ni

However, measurement is destructive, and we actually know what the expected value of this ‘average’ energy will be, because we know the probabilities of finding the system in a particular base state. That probability is equal to the absolute square of that Ci coefficient above, so we can use the Pi = |Ci|² formula to write:

〈Eav〉 = ∑ Pi Ei over all i (i = 1, 2,… , N)

Note that this is a rather general formula. It’s got nothing to do with quantum mechanics: if Ai represents the possible values of some quantity A, and Pi is the probability of getting that value, then (the expected value of) the average A will also be equal to 〈Aav〉 = ∑ Pi Ai. No rocket science here! :-) But let’s now apply our quantum-mechanical formulas to that 〈Eav〉 = ∑ Pi Ei formula. [Oh—and I apologize for using the same angle brackets 〈 and 〉 to denote an expected value here—sorry for that! But it’s what Feynman does—and other physicists! You see: they don’t really want you to understand stuff, and so they often use very confusing symbols.] Remembering that the absolute square of a complex number equals the product of that number and its complex conjugate, we can re-write the 〈Eav〉 = ∑ Pi Ei formula as:

〈Eav〉 = ∑ Pi Ei = ∑ |Ci|²·Ei = ∑ Ci*·Ci·Ei = ∑ 〈ψ|ηi〉〈ηi|ψ〉·Ei = ∑ 〈ψ|ηi〉Ei〈ηi|ψ〉 over all i

Now, you know that Dirac’s bra-ket notation allows numerous manipulations. For example, what we could do is take out that ‘common factor’ 〈ψ|, and so we may re-write that monster above as:

〈Eav〉 = 〈ψ| [∑ |ηi〉Ei〈ηi|ψ〉] = 〈ψ|φ〉, with |φ〉 = ∑ |ηi〉Ei〈ηi|ψ〉 over all i

Huh? Yes. Note the difference between |ψ〉 = ∑ |ηi〉Ci = ∑ |ηi〉〈ηi|ψ〉 and |φ〉 = ∑ |ηi〉Ei〈ηi|ψ〉. As Feynman puts it: φ is just some ‘cooked-up‘ state which you get by taking each of the base states |ηi〉 in the amount Ei〈ηi|ψ〉 (as opposed to the 〈ηi|ψ〉 amounts we took for ψ).

I know: you’re getting tired and you wonder why we need all this stuff. Just hang in there. We’re almost done. I just need to do a few more unpleasant things, one of which is to remind you that this business of the energy states being eigenstates (and the energy levels being eigenvalues) of our Hamiltonian matrix (see my post on N-state systems) comes with a number of interesting properties, including this one:

H |ηi〉 = Ei |ηi〉 = |ηi〉Ei

Just think about it: on the left-hand side, we’re multiplying a matrix with a (base) state vector, while on the right-hand side we’re multiplying that same vector with a scalar. So our |φ〉 = ∑ |ηi〉Ei〈ηi|ψ〉 sum now becomes:

|φ〉 = ∑ H |ηi〉〈ηi|ψ〉 over all (i = 1, 2,… , N)

Now we can manipulate that expression some more so as to get the following:

|φ〉 = H ∑|ηi〉〈ηi|ψ〉 = H|ψ〉

Finally, we can re-combine this now with the 〈Eav〉 = 〈ψ|φ〉 equation above, and so we get the fantastic result we wanted:

〈Eav〉 = 〈 ψ | φ 〉 = 〈 ψ | H | ψ 〉

Huh? Yes! To get the average energy, you operate on |ψ〉 with H, and then you multiply the result with 〈ψ|. It’s a beautiful formula. As Feynman notes, the new formula for the average energy is not only pretty but also useful, because now we don’t need to say anything about any particular set of base states. We don’t even have to know all of the possible energy levels. When we go to calculate, we’ll need to describe our state in terms of some set of base states, but if we know the Hamiltonian matrix for that set, we can get the average energy.
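Here’s a small numerical check that the two routes agree, i.e. that 〈ψ|H|ψ〉 gives the same number as ∑ Pi Ei (NumPy; the Hamiltonian and the state are made-up toy numbers):

```python
import numpy as np

H = np.array([[1.0, -0.2],
              [-0.2, 1.0]])                  # some Hermitian toy Hamiltonian
psi = np.array([0.6, 0.8j])                  # a normalized state: 0.6^2 + 0.8^2 = 1

# Route 1: 〈E〉av = 〈ψ|H|ψ〉 -- no reference to any particular base states
E_avg = np.vdot(psi, H @ psi).real

# Route 2: expand in the energy eigenstates and compute ∑ Pi Ei
E, eta = np.linalg.eigh(H)
P = np.abs(eta.conj().T @ psi)**2            # Pi = |〈ηi|ψ〉|^2
print(np.isclose(E_avg, np.sum(P * E)))      # True: both routes give the same number
```

Note that route 1 never diagonalizes anything: that’s exactly the practical advantage Feynman points out.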

Of course, you know that, if you’ve the Hamiltonian, you always have everything, so… Well… Yes. You’re right: it’s less of a big deal than it seems. Having said that, the whole development above is very interesting because of something else: we can easily generalize it for other physical measurements. I call it the ‘average value’ operator idea, but you won’t find that term in any textbook. :-) Let me explain the idea.

The average value operator (A)

The development above illustrates how we can relate a physical observable, like the (average) energy (E), to a quantum-mechanical operator (H). Now, the development above can easily be generalized to any observable that would be proportional to the energy. It’s perfectly reasonable, for example, to assume the angular momentum – as measured in some direction, of course, which we usually refer to as the z-direction – would be proportional to the energy, and so then it would be easy to define a new operator Lz, which we’d define as the operator of the z-component of the angular momentum L. [I know… That’s a bit of a long name but… Well… You get the idea.] So we can write:

〈Lz〉av = 〈 ψ | Lz | ψ 〉

In fact, further generalization yields the following grand result:

If a physical observable A is related to a suitable quantum-mechanical operator Â, then the average value of A for the state | ψ 〉 is given by:

〈A〉av = 〈 ψ | Â | ψ 〉 = 〈 ψ | φ 〉, with | φ 〉 = Â | ψ 〉

At this point, you may have second thoughts, and wonder: what state | ψ 〉? The answer is: it doesn’t matter. It can be any state, as long as we’re able to describe it in terms of a chosen set of base states. :-)

OK. So far, so good. The next step is to look at how this works for the continuous case.

The energy operator for wavefunctions (H)

We can start thinking about the continuous equivalent of the 〈Eav〉 = 〈ψ|H|ψ〉 expression by first expanding it. We write:

〈Eav〉 = ∑i ∑j 〈ψ|i〉〈i|H|j〉〈j|ψ〉

You know the continuous equivalent of a sum like this is an integral, i.e. an infinite sum. Now, because we’ve got two subscripts here (i and j), we get the following double integral:

〈Eav〉 = ∬ 〈ψ|x〉〈x|H|x’〉〈x’|ψ〉 dx dx’ = ∬ ψ*(x)·H(x, x’)·ψ(x’) dx dx’

Now, I did take my time to walk you through Feynman’s derivation of the energy operator for the discrete case, i.e. the operator when we’re dealing with matrix mechanics, but I think I can simplify my life here by just copying Feynman’s succinct development:

[Feynman’s development, reproduced here as an image from the Lectures]

Done! Given a wavefunction ψ(x), we get the average energy by doing that integral above. Now, the quantity in the braces of that integral can be written as that operator we introduced when we started this post:

H = −(ħ²/2m)·∇² + V(x, y, z)

So now we can write that integral much more elegantly. It becomes:

〈Eav〉 = ∫ ψ*(x) H ψ(x) dx

You’ll say that doesn’t look like 〈Eav〉 = 〈 ψ | H ψ 〉! It does. Remember that 〈 ψ | = (| ψ 〉)*. :-) Done!

I should add one qualifier though: the formula above assumes our wavefunction has been normalized, so all probabilities add up to one. But that’s a minor thing. The only thing left to do now is to generalize to three dimensions. That’s easy enough. Our expression becomes a volume integral:

〈Eav〉 = ∫ ψ*(r) H ψ(r) dV

Of course, dV stands for dVolume here, not for any potential energy, and, of course, once again we assume all probabilities over the volume add up to 1, so all is normalized. Done! :-)
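A quick one-dimensional sanity check of that recipe (NumPy; the natural units ħ = m = ω = 1 and the harmonic-oscillator ground state are my choices, just for illustration): the integral should give the ground-state energy ħω/2.

```python
import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0                # natural units: my choice
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# Normalized harmonic-oscillator ground state and the matching potential
psi = (m * omega / (np.pi * hbar))**0.25 * np.exp(-m * omega * x**2 / (2 * hbar))
V = 0.5 * m * omega**2 * x**2

# H psi = -(hbar^2/2m)*psi'' + V*psi, with a central difference for psi''
d2psi = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
Hpsi = -hbar**2 / (2 * m) * d2psi + V * psi

E_avg = np.sum(psi.conj() * Hpsi).real * dx   # ∫ ψ*(x) H ψ(x) dx
print(E_avg)                                  # ≈ 0.5, i.e. ħω/2
```

The wavefunction is normalized on the grid, so no extra factors are needed, which is exactly the qualifier above.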

We’re almost done with this post. What’s left is the position and momentum operator. You may think this is going to another lengthy development but… Well… It turns out the analysis is remarkably simple. Just stay with me a few more minutes and you’ll have earned your degree. :-)

The position operator (x)

The thing we need to solve here is really easy. Look at the illustration below as representing the probability density of some particle being at x. Think about it: what’s the average position?

[Illustration: the probability density P(x) of finding the particle at x]

Well? What? The (expected value of the) average position is just this simple integral: 〈x〉av = ∫ x·P(x) dx, over the whole range of possible values for x. :-) That’s all. Of course, because P(x) = |ψ(x)|² = ψ*(x)·ψ(x), this integral now becomes:

〈x〉av = ∫ ψ*(x) x ψ(x) dx

That looks exactly the same as 〈Eav〉 = ∫ ψ*(x) H ψ(x) dx, and so we can look at x as an operator too!

Huh? Yes. It’s an extremely simple operator: it just means “multiply by x“. :-)

I know you’re shaking your head now: is it that easy? It is. Moreover, the ‘matrix-mechanical equivalent’ is equally simple but, as it’s getting late here, I’ll refer you to Feynman for that. :-)
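Numerically, this ‘multiply by x’ business looks as follows (NumPy; the Gaussian packet centered at x0 = 1.5 is just an example I picked):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
x0 = 1.5                                       # centre of the packet: my choice
psi = np.exp(-(x - x0)**2 / 2)                 # a Gaussian packet around x0
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize: ∫|ψ(x)|^2 dx = 1

# 〈x〉av = ∫ ψ*(x)·x·ψ(x) dx: the operator just multiplies by x
x_avg = np.sum(psi.conj() * x * psi).real * dx
print(x_avg)                                   # ≈ 1.5, i.e. the centre of the packet
```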

The momentum operator (px)

Now we want to calculate the average momentum of, say, some electron. What integral would you use for that? […] Well… What? […] It’s easy: it’s the same thing as for x. We can just substitute p for x in that 〈x〉av = ∫ x·P(x) dx formula, so we get:

〈p〉av = ∫ p·P(p) dp, over the whole range of possible values for p

Now, you might think the rest is equally simple, and… Well… It actually is simple but there’s one additional thing in regard to the need to normalize stuff here. You’ll remember we defined a momentum wavefunction (see my post on the Uncertainty Principle), which we wrote as:

φ(p) = 〈 mom p | ψ 〉

Now, in the mentioned post, we related this momentum wavefunction to the particle’s ψ(x) = 〈x|ψ〉 wavefunction—which we should actually refer to as the position wavefunction, but everyone just calls it the particle’s wavefunction, which is a bit of a misnomer, as you can see now: a wavefunction describes some property of the system, and so we can associate several wavefunctions with the same system, really! In any case, we noted the following there:

  • The two probability density functions, φ(p) and ψ(x), look pretty much the same, but the half-width (or standard deviation) of one was inversely proportional to the half-width of the other. To be precise, we found that the constant of proportionality was equal to ħ/2, and wrote that relation as follows: σp = (ħ/2)/σx.
  • We also found that, when using a regular normal distribution function for ψ(x), we’d have to normalize the probability density function by inserting a (2πσx²)^(−1/2) factor in front of the exponential.
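That σp = (ħ/2)/σx relation is easy to check numerically (NumPy; ħ = 1 and the width σx = 0.7 are my choices). For a zero-centered Gaussian, 〈p〉 = 0 by symmetry, so σp² = 〈p²〉, and p² corresponds to −ħ²·∂²/∂x², i.e. the kinetic-energy part of H up to the 1/2m factor:

```python
import numpy as np

hbar = 1.0
sigma_x = 0.7                                  # pick any width: my choice
x = np.linspace(-15, 15, 6001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma_x**2))         # Gaussian packet: |ψ|^2 has std σx
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# σp^2 = 〈p^2〉 = ∫ ψ*·(−ħ^2·∂^2/∂x^2)·ψ dx, via a central difference
d2psi = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
sigma_p = np.sqrt(np.sum(psi.conj() * (-hbar**2) * d2psi).real * dx)
print(sigma_p * sigma_x)                       # ≈ 0.5, i.e. ħ/2
```

The product comes out the same for any σx you plug in: that’s the inverse proportionality.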

Now, it’s a bit of a complicated argument, but the upshot is that we cannot just write what we usually write, i.e. Pi = |Ci|² or P(x) = |ψ(x)|². No. We need to put a normalization factor in front, which combines the two factors I mentioned above. To be precise, we have to write:

P(p) = |〈p|ψ〉|²/(2πħ)

So… Well… Our 〈p〉av = ∫ p·P(p) dp integral can now be written as:

〈p〉av = ∫ 〈ψ|p〉 p 〈p|ψ〉 dp/(2πħ)

So that integral is totally like what we found for 〈x〉av and so… We could just leave it at that, and say we’ve solved the problem. In that sense, it is easy. However, having said that, it’s obvious we’d want some solution that’s written in terms of ψ(x), rather than in terms of φ(p), and that requires some more manipulation. I’ll refer you, once more, to Feynman for that, and I’ll just give you the result:

〈p〉av = ∫ ψ*(x)·(ħ/i)·(∂ψ/∂x) dx

So… Well… It turns out that the momentum operator – which I tentatively denoted as px above – is not so simple as our position operator (x). Still… It’s not hugely complicated either, as we can write it as:

px ≡ (ħ/i)·(∂/∂x)
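Here’s that operator at work numerically (NumPy; ħ = 1 and the wavenumber k = 3 of the packet are my choices). For a Gaussian packet modulated by e^(ikx), the average momentum should come out as ħk:

```python
import numpy as np

hbar, k = 1.0, 3.0                               # natural units and wavenumber: my choices
x = np.linspace(-20, 20, 8001)
dx = x[1] - x[0]
psi = np.exp(1j * k * x) * np.exp(-x**2 / 2)     # plane wave × Gaussian envelope
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# 〈p〉av = ∫ ψ*(x)·(ħ/i)·(∂ψ/∂x) dx, with a central difference for ∂/∂x
dpsi = (np.roll(psi, -1) - np.roll(psi, 1)) / (2 * dx)
p_avg = np.sum(psi.conj() * (hbar / 1j) * dpsi).real * dx
print(p_avg)                                     # ≈ 3.0, i.e. ħk
```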

Of course, the purists amongst you will, once again, say that I should be more careful and put a hat wherever I’d need to put one so… Well… You’re right. I’ll wrap this all up by copying Feynman’s overview of the operators we just explained, and so he does use the fancy symbols. :-)

[Feynman’s overview table of the operators discussed in this post]

Well, folks—that’s it! Off we go! You know all about quantum physics now! You just need to work yourself through the exercises that come with Feynman’s Lectures, and then you’re ready to go and bag a degree in physics somewhere. So… Yes… That’s what I want to do now, so I’ll be silent for quite a while. Have fun! :-)

Dirac’s delta function and Schrödinger’s equation in three dimensions

Feynman’s rather informal derivation of Schrödinger’s equation – following Schrödinger’s own logic when he published his famous paper on it back in 1926 – is wonderfully simple but, as I mentioned in my post on it, does lack some mathematical rigor here and there. Hence, Feynman hastens to dot all of the i‘s and cross all of the t‘s in the subsequent Lectures. We’ll look at two things here:

  1. Dirac’s delta function, which ensures proper ‘normalization’. In fact, as you’ll see in a moment, it’s more about ‘orthogonalization’ than normalization. :-)
  2. The generalization of Schrödinger’s equation to three dimensions (in space) and also including the presence of external force fields (as opposed to the usual ‘free space’ assumption).

The second topic is the most interesting, of course, and also the easiest, really. However, let’s first use our energy to grind through the first topic. :-)

Dirac’s delta function

When working with a finite set of discrete states, a fundamental condition is that the base states be ‘orthogonal’, i.e. they must satisfy the following equation:

〈 i | j 〉 = δij, with δij = 1 if i = j and δij = 0 if i ≠ j

Needless to say, the base states i and j are rather special vectors in a rather special mathematical space (a so-called Hilbert space) and so it’s rather tricky to interpret their ‘orthogonality’ in any geometric way, although such geometric interpretation is often actually possible in simple quantum-mechanical systems: you’ll just notice a ‘right’ angle may actually be a 45° or 180° angle, or whatever. :-) In any case, that’s not the point here. The question is: if we move to an infinite number of base states – like we did when we introduced the ψ(x) and φ(p) wavefunctions – what happens to that condition?

Your first reaction is going to be: nothing. Because… Well… Remember that, for a two-state system, in which we have two base states only, we’d fully describe some state | φ 〉 as a linear combination of the base states, so we’d write:

| φ 〉 = | I 〉 CI + | II 〉 CII

Now, while saying we were talking a Hilbert space here, I did add we could use the same expression to define the base states themselves, so I wrote the following triviality:

| I 〉 = 1·| I 〉 + 0·| II 〉 and | II 〉 = 0·| I 〉 + 1·| II 〉

Trivial but sensible. So we’d associate the base state | I 〉 with the base vector (1, 0) and, likewise, base state | II 〉 with the base vector (0, 1). When explaining this, I added that we could easily extend to an N-state system and so there’s a perfect analogy between the 〈 i | j 〉 bra-ket expression in quantum math and the ei·ej product in the run-of-the-mill coordinate spaces that you’re used to. So why can’t we just extend the concept to an infinite-state system and move to base vectors with an infinite number of elements, which we could write as ei = (…, 0, 1, 0, 0, …) and ej = (…, 0, 0, 1, 0, …), with the 1 in the i-th and j-th position respectively, thereby ensuring 〈 i | j 〉 = ei·ej = δij always? The ‘orthogonality’ condition looks simple enough indeed, and so we could re-write it as:

〈 x | x’ 〉 = δxx’, with δxx’ = 1 if x = x’ and δxx’ = 0 if x ≠ x’

However, when moving from a space with a finite number of dimensions to a space with an infinite number of dimensions, there are some issues. They pop up, for example, when we insert that 〈 x | x’ 〉 = δxx’ function (note that we’re talking some function of x and x’ here, indeed, so we’ll write it as f(x, x’) in the next step) in that 〈φ|ψ〉 = ∫〈φ|x〉〈x|ψ〉dx integral.

Huh? What integral? Relax: that 〈φ|ψ〉 = ∫〈φ|x〉〈x|ψ〉dx integral just generalizes our 〈φ|ψ〉 = ∑〈φ|i〉〈i|ψ〉 expression for discrete settings to the continuous case. Just look at it. When substituting 〈x’| for 〈φ|, we get:

x’|ψ〉 = ψ(x’) = ∫ 〈x’|x〉 〈x|ψ〉 dx ⇔ ψ(x’) = ∫ 〈x’|x〉 ψ(x) dx

You’ll say: what’s the problem? Well… From a mathematical point of view, it’s a bit difficult to find a function 〈x’|x〉 = f(x, x’) which, when multiplied with a wavefunction ψ(x), and integrated over all x, will just give us ψ(x’). A bit difficult? Well… It’s worse than that: it’s actually impossible!

Huh? Yes. Feynman illustrates the difficulty for x’ = 0, but he could have picked whatever value, really. In any case, if x’ = 0, we can write f(x, 0) = f(x), and our integral now reduces to:

ψ(0) = ∫ f(x) ψ(x) dx

This is a weird expression: the value of the integral (i.e. the right-hand side of the expression) does not depend on x: it is just some non-zero value ψ(0). However, we want the f(x) in the integrand to be zero for all x ≠ 0 and, for any ordinary function that’s zero everywhere except at one point, the integral will be zero. So we have an impossible situation: we want a function that is zero everywhere but for one point and that, at the same time, gives us a finite integral when using it in that integral above.

You’re likely to shake your head now and say: what the hell? Does it matter? It does: it is an actual problem in quantum math. Well… I should say: it was an actual problem in quantum math. Dirac solved it. He invented a new function which looks a bit less simple than our suggested generalization of Kronecker’s delta for the continuous case (i.e. that 〈 xx’ 〉 = δxx’ conjecture above). Dirac’s function is – quite logically – referred to as the Dirac delta function, and it’s actually defined by that integral above, in the sense that we impose the following two conditions on it:

  • δ(x − x’) = 0 if x ≠ x’ (so that’s just like the first of our two conditions for that 〈 x | x’ 〉 = δxx’ function)
  • ∫ δ(x − x’)·ψ(x) dx = ψ(x’) (so that’s not like the second of our two conditions for that 〈 x | x’ 〉 = δxx’ function)

Indeed, that second condition is much more sophisticated than our 〈 x | x’ 〉 = 1 if x = x’ condition. In fact, one can show that the second condition amounts to finding some function satisfying this condition:

∫ δ(x) dx = 1

We get this by equating x’ to zero once more and, additionally, by equating ψ(x) to 1. [Please do double-check yourself.] Of course, this ‘normalization’ (or ‘orthogonalization’) problem all sounds like a lot of hocus-pocus and, in many ways, it is. In fact, we’re actually talking a mathematical problem here which had been lying around for centuries (for a brief overview, see the Wikipedia article on it). So… Well… Without further ado, I’ll just give you the mathematical expression now—and please don’t stop reading now, as I’ll explain it in a moment:

δ(x) = lim(a→0) [1/(a·√(2π))]·e^(−x²/(2a²))

I will also credit Wikipedia with the following animation, which shows that the expression above is just the normal distribution function, and what happens when a, i.e. its standard deviation, goes to zero: Dirac’s delta function is just the limit of a sequence of (zero-centered) normal distributions. That’s all. Nothing more, nothing less.

[Animation: zero-centered normal distributions narrowing to a spike as a → 0 (Wikipedia)]
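You can also see that limit at work numerically (NumPy; the test function ψ(x) = cos(x)·e^(−x²) is just something smooth I picked, with ψ(0) = 1):

```python
import numpy as np

def delta_a(x, a):
    """Zero-centered normal density with standard deviation a."""
    return np.exp(-x**2 / (2 * a**2)) / (a * np.sqrt(2 * np.pi))

x = np.linspace(-5, 5, 200001)
dx = x[1] - x[0]
psi = np.cos(x) * np.exp(-x**2)          # a smooth test function with ψ(0) = 1

for a in [1.0, 0.1, 0.01]:
    integral = np.sum(delta_a(x, a) * psi) * dx
    print(a, integral)                   # the integrals creep up to ψ(0) = 1 as a shrinks
```

So the narrower the Gaussian, the closer ∫ δa(x)·ψ(x) dx gets to ψ(0): exactly the second defining condition above, with x’ = 0.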

But how do we interpret it? Well… I can’t do better than Feynman as he describes what’s going on really:

“Dirac’s δ(x) function has the property that it is zero everywhere except at x = 0 but, at the same time, it has a finite integral equal to unity. [See the ∫ δ(x) dx = 1 equation.] One should imagine that the δ(x) function has such fantastic infinity at one point that the total area comes out equal to one.”

Well… That says it all, I guess. :-) Don’t you love the way he puts it? It’s not an ‘ordinary’ infinity. No. It’s fantastic. Frankly, I think these guys were all fantastic. :-) The point is: that special function, Dirac’s delta function, solves our problem. The equivalent expression for the 〈 i | j 〉 = δij condition for a finite and discrete set of base states is the following one for the continuous case:

〈 x | x’ 〉 = δ(x − x’)

The only thing left now is to generalize this result to three dimensions. Now that’s fairly straightforward. The ‘normalization’ condition above is all that’s needed in terms of modifying the equations for dealing with the continuum of base states corresponding to the points along a line. Extending the analysis to three dimensions goes as follows:

  • First, we replace the x coordinate by the vector r = (x, y, z)
  • As a result, integrals over x, become integrals over x, y and z. In other words, they become volume integrals.
  • Finally, the one-dimensional δ-function must be replaced by the product of three δ-functions: one in x, one in y and one in z. We write:

〈 r | r’ 〉 = δ(x − x’)·δ(y − y’)·δ(z − z’)

Feynman summarizes it all together as follows:

[Feynman’s summary of the rules for a continuum of base states]

What if we have two particles, or more? Well… Once again, I won’t bother to try to re-phrase the Grand Master as he explains it. I’ll just italicize or boldface the key points:

Suppose there are two particles, which we can call particle 1 and particle 2. What shall we use for the base states? One perfectly good set can be described by saying that particle 1 is at x1 and particle 2 is at x2, which we can write as | x1, x2 〉. Notice that describing the position of only one particle does not define a base state. Each base state must define the condition of the entire system, so you must not think that each particle moves independently as a wave in three dimensions. Any physical state | ψ 〉 can be defined by giving all of the amplitudes 〈 x1, x2 | ψ 〉 to find the two particles at x1 and x2. This generalized amplitude is therefore a function of the two sets of coordinates x1 and x2. You see that such a function is not a wave in the sense of an oscillation that moves along in three dimensions. Neither is it generally simply a product of two individual waves, one for each particle. It is, in general, some kind of a wave in the six dimensions defined by x1 and x2. Hence, if there are two particles in Nature which are interacting, there is no way of describing what happens to one of the particles by trying to write down a wave function for it alone. The famous paradoxes that we considered in earlier chapters—where the measurements made on one particle were claimed to be able to tell what was going to happen to another particle, or were able to destroy an interference—have caused people all sorts of trouble because they have tried to think of the wave function of one particle alone, rather than the correct wave function in the coordinates of both particles. The complete description can be given correctly only in terms of functions of the coordinates of both particles.

Now we really know it all, don’t we? :-)

Well… Almost. I promised to tackle another topic as well. So here it is:

Schrödinger’s equation in three dimensions

Let me start by jotting down what we had found already, i.e. Schrödinger’s equation when only one coordinate in space is involved. It’s written as:

schrodinger 3

Now, the extension to three dimensions is remarkably simple: we just substitute the ∂²/∂x² operator by the ∇² operator, i.e. ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z². We get:

schrodinger 4

Finally, we can also put forces on the particle, so now we are not looking at a particle moving in free space: we’ve got some force field working on it. It turns out the required modification is equally simple. The grand result is Schrödinger’s original equation in three dimensions:

schrodinger 5

V = V(x, y, z) is, of course, just the potential here. Remarkably simple equations but… How do we get these? Well… Sorry. The math is not too difficult, and you’re well equipped now to look at Feynman’s Lecture on it yourself. You really are. Trust me. I really dealt with all of the ‘serious’ stuff you need to understand how he’s going about it in my previous posts so, yes, now I’ll just sit back and relax. Or go biking. Or whatever. :-)

The Uncertainty Principle

In my previous post, I showed how Feynman derives Schrödinger’s equation using a historical and, therefore, quite intuitive approach. The approach was intuitive because the argument used a discrete model, so that’s stuff we are well acquainted with—like a crystal lattice, for example. However, we’re now going to think continuity from the start. Let’s first see what changes in terms of notation.

New notations

Our C(xn, t) = 〈xn|ψ〉 now becomes C(x) = 〈x|ψ〉. This notation does not explicitly show the time dependence but then you know amplitudes like this do vary in space as well as in time. Having said that, the analysis below focuses mainly on their behavior in space, so it does make sense to not explicitly mention the time variable. It’s the usual trick: we look at how stuff behaves in space or, alternatively, in time. So we temporarily ‘forget’ about the other variable. That’s just how we work: it’s hard for our mind to think about these wavefunctions in both dimensions simultaneously although, ideally, we should do that.

Now, you also know that quantum physicists prefer to denote the wavefunction C(x) with some Greek letter: ψ (psi) or φ (phi). Feynman thinks it’s somewhat confusing because we use the same letter to denote a state itself, but I don’t agree. I think it’s pretty straightforward. In any case, we write:

ψ(x) = Cψ(x) = C(x) = 〈x|ψ〉

The next thing is the associated probabilities. From your high school math course, you’ll surely remember that we have two types of probability distributions: they are either discrete or, else, continuous. If they’re continuous, then our probability distribution becomes a probability density function (PDF) and, strictly speaking, we should no longer say that the probability of finding our particle at any particular point x at some time t is this or that. That probability is, strictly speaking, zero: if our variable is continuous, then our probability is defined for an interval only, and the P[x] value itself is referred to as a probability density. So we’ll look at little intervals Δx, and we can write the associated probability as:

prob (x, Δx) = |〈x|ψ〉|2Δx = |ψ(x)|2Δx

The idea is illustrated below. We just subdivide our continuous scale into little intervals and calculate the area of some tiny elongated rectangle now. :-)

image024
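By the way, the |ψ(x)|²·Δx recipe is easy to check numerically. Here’s a minimal sketch in Python—assuming, just for the example, the normalized Gaussian wavefunction ψ(x) = (2πσ²)^(−1/4)·e^(−x²/4σ²) that we’ll meet later in this post:

```python
import math

def psi(x, sigma=1.0):
    """The (real-valued) Gaussian wavefunction used later in this post:
    psi(x) = (2*pi*sigma^2)^(-1/4) * exp(-x^2 / (4*sigma^2))."""
    return (2 * math.pi * sigma**2) ** -0.25 * math.exp(-x**2 / (4 * sigma**2))

def prob(x, dx, sigma=1.0):
    """prob(x, dx) = |psi(x)|^2 * dx: the probability for a little interval dx."""
    return abs(psi(x, sigma)) ** 2 * dx

# Summing |psi(x)|^2 * dx over many little intervals should give 1,
# because all the probabilities have to add up to one:
dx = 0.001
total = sum(prob(n * dx, dx) for n in range(-10_000, 10_000))
print(total)  # close to 1
```

So the little rectangles do add up to the total area under the |ψ(x)|² curve, as they should.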

It is also easy to see that, when moving to an infinite set of states, our 〈φ|ψ〉 = ∑〈φ|x〉〈x|ψ〉 (over all x) formula for calculating the amplitude for a particle to go from state ψ to state φ should now be written as an infinite sum, i.e. as the following integral:

amplitude continuous

Now, we know that 〈φ|x〉 = 〈x|φ〉* and, therefore, this integral can also be written as:

integral

For example, if φ(x) = 〈x|φ〉 is equal to a simple exponential, so we can write φ(x) = a·e^(−iθ), then φ*(x) = 〈φ|x〉 = a·e^(+iθ).
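If you want to see this integral in action, here’s a quick numerical sketch. The two Gaussian ‘states’ are hypothetical—chosen only because they make the integral converge nicely:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Two normalized Gaussian wavefunctions (hypothetical example states);
# the second one is centered on x = 1 instead of x = 0.
sigma = 1.0
norm = (2 * np.pi * sigma**2) ** -0.25
psi = norm * np.exp(-x**2 / (4 * sigma**2))
phi = norm * np.exp(-(x - 1.0)**2 / (4 * sigma**2))

# <phi|psi> = integral of <phi|x><x|psi> dx = integral of phi*(x)*psi(x) dx,
# discretized as a plain sum over little dx intervals:
amplitude = np.sum(np.conj(phi) * psi) * dx

norm_check = np.sum(np.conj(psi) * psi).real * dx
print(norm_check)      # <psi|psi> = 1: a state has unit amplitude to be in itself
print(abs(amplitude))  # the overlap of the two states: some number between 0 and 1
```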

With that, we’re ready for the plat de résistance, except for one thing, perhaps: we don’t look at spin here. If we’d do that, we’d have to take two sets of base states: one for up and one for down spin—but we don’t worry about this, for the time being, that is. :-)

The momentum wavefunction

Our wavefunction 〈x|ψ〉 varies in time as well as in space. That’s obvious. How exactly depends on the energy and the momentum: both are related and, hence, if there’s uncertainty in the momentum, there will be uncertainty in the energy, and vice versa. Uncertainty in the momentum changes the behavior of the wavefunction in space—through the p = ħk factor in the argument of the wavefunction (θ = ω·t − k·x)—while uncertainty in the energy changes the behavior of the wavefunction in time—through the E = ħω relation. As mentioned above, we focus on the variation in space here. We’ll do so by defining a new state, which is referred to as a state of definite momentum. We’ll write it as mom p, and so now we can use the Dirac notation to write the amplitude for an electron to have a definite momentum equal to p as:

φ(p) = 〈 mom p | ψ 〉

Now, you may think that the 〈x|ψ〉 and 〈mom p|ψ〉 amplitudes should be the same because, surely, we do associate the state with a definite momentum p, don’t we? Well… No! If we want to localize our wave ‘packet’, i.e. localize our particle, then we’re actually not going to associate it with a definite momentum. See my previous posts: we’re going to introduce some uncertainty so our wavefunction is actually a superposition of more elementary waves with slightly different (spatial) frequencies. So we should just go through the motions here and apply our integral formula to ‘unpack’ this amplitude. That goes as follows:

integral 2

So, as usual, when seeing a formula like this, we should remind ourselves of what we need to solve. Here, we assume we somehow know the ψ(x) = 〈x|ψ〉 wavefunction, so the question is: what do we use for 〈 mom p | x 〉? At this point, Feynman wanders off to start a digression on normalization, which really confuses the picture. When everything is said and done, the easiest thing to do is to just jot down the formula for that 〈mom p | x〉 in the integrand and think about it for a while:

〈mom p | x〉 = e^(i·(p/ħ)·x)

I mean… What else could it be? This formula is very fundamental, and I am not going to try to explain it. As mentioned above, Feynman tries to ‘explain’ it by some story about probabilities and normalization, but I think his ‘explanation’ just confuses things even more. Really, what else would it be? The formula above really encapsulates what it means if we say that p and x are conjugate variables. [I can already note, of course, that symmetry implies that we can write something similar for energy and time. Indeed, we can define a state of definite energy as 〈E | ψ〉, and then ‘unpack’ it in the same way, and see that one of the two factors in the integrand would be equal to 〈E | t〉 and, of course, we’d associate a similar formula with it:

〈E | t〉 = e^(i·(E/ħ)·t)]

But let me get back to the lesson here. We’re analyzing stuff in space now, not in time. Feynman gives a simple example here. He suggests a wavefunction which has the following form:

ψ(x) = K·e^(−x²/4σ²)

The example is somewhat disingenuous because this is not a complex- but a real-valued function. In fact, squaring it, and then applying the normalization condition (all probabilities have to add up to one), yields the normal probability distribution:

prob (x, Δx) = P(x)·dx = (2πσ²)^(−1/2)·e^(−x²/2σ²)·dx

So that’s just the normal distribution for μ = 0, as illustrated below.

720px-Normal_Distribution_PDF

In any case, the integral we have to solve now is:

Integral 3

Now, I hate integrals as much as you do (probably more) and so I assume you’re also only interested in the result (if you want the detail: check it in Feynman), which we can write as:

φ(p) = (2πη²)^(−1/4)·e^(−p²/4η²), with η = ħ/2σ

This formula is totally identical to the ψ(x) = (2πσ²)^(−1/4)·e^(−x²/4σ²) distribution we started with, except that it’s got another sigma value, which we denoted by η (that’s the Greek letter eta, not nu), with

η = ħ/2σ

Just for the record, Feynman refers to η and σ as the ‘half-width’ of the respective distributions. Mathematicians would say they’re the standard deviation. The concepts are nearly the same, but not quite. In any case, that’s another thing I’ll let you find out for yourself. :-) The point is: η and σ are inversely proportional to each other, and the constant of proportionality is equal to ħ/2.

Now, if we take η and σ as measures of the uncertainty in p and x respectively – which is what they are, obviously! – then we can re-write that η = ħ/2σ as η·σ = ħ/2 or, better still, as the Uncertainty Principle itself:

ΔpΔx = ħ/2

You’ll say: that’s great, but we usually see the Uncertainty Principle written as:

ΔpΔx ≥ ħ/2

So where does that come from? Well… We choose a normal distribution (or the Gaussian distribution, as physicists call it), and so that yields the ΔpΔx = ħ/2 identity. If we’d chosen another one, we’d find a slightly different relation and so… Well… Let me quote Feynman here: “Interestingly enough, it is possible to prove that for any other form of a distribution in x or p, the product ΔpΔx cannot be smaller than the one we have found here, so the Gaussian distribution gives the smallest possible value for the ΔpΔx product.”
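If you don’t trust the math, you can check the ΔpΔx = ħ/2 result for the Gaussian numerically. A sketch—I set ħ = 1 (natural units) and pick an arbitrary σ, both of which are my assumptions, not anything in Feynman’s text:

```python
import numpy as np

hbar = 1.0      # natural units (an assumption, just to keep the numbers simple)
sigma = 0.7     # the 'half-width' of the position-space Gaussian (arbitrary)

x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# phi(p) = (2*pi*hbar)^(-1/2) * integral of <mom p|x> psi(x) dx,
# with <mom p|x> = exp(i*(p/hbar)*x) as in the post:
p = np.linspace(-8.0, 8.0, 1601)
dp = p[1] - p[0]
phi = np.array([np.sum(np.exp(1j * (pk / hbar) * x) * psi) * dx for pk in p])
phi /= np.sqrt(2 * np.pi * hbar)

# Standard deviations of the |psi(x)|^2 and |phi(p)|^2 distributions:
sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
sigma_p = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)

print(sigma_x * sigma_p)  # ~0.5, i.e. hbar/2
```

Change σ and the product stays put at ħ/2: squeeze the wavefunction in space and it spreads out in momentum, exactly as the η = ħ/2σ relation says.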

This is great. So what about the even more approximate ΔpΔx ≥ ħ formula? Where does that come from? Well… That’s more like a qualitative version of it: it basically says the minimum value of the same product is of the same order as ħ which, as you know, is pretty tiny: it’s about 1.05×10⁻³⁴ J·s. :-) The last thing to note is its dimension: momentum is expressed in newton·seconds and position in meter, obviously. So the uncertainties in them are expressed in the same units, and so the dimension of the product is N·m·s = J·s. So this dimension combines force, distance and time. That’s quite appropriate, I’d say. The ΔE·Δt product obviously does the same. But… Well… That’s it, folks! I enjoyed writing this – and I cannot always say the same of other posts! So I hope you enjoyed reading it. :-)

Schrödinger’s equation: the original approach

Of course, your first question when seeing the title of this post is: what’s original, really? Well… The answer is simple: it’s the historical approach, and it’s original because it’s actually quite intuitive. Indeed, Lecture no. 16 in Feynman’s third Volume of Lectures on Physics is like a trip down memory lane as Feynman himself acknowledges, after presenting Schrödinger’s equation using that very rudimentary model we developed in our previous post:

“We do not intend to have you think we have derived the Schrödinger equation but only wish to show you one way of thinking about it. When Schrödinger first wrote it down, he gave a kind of derivation based on some heuristic arguments and some brilliant intuitive guesses. Some of the arguments he used were even false, but that does not matter; the only important thing is that the ultimate equation gives a correct description of nature.”

So… Well… Let’s have a look at it. :-) We were looking at some electron we described in terms of its location at one or the other atom in a linear array (i.e. just a line). We did so by defining base states |n〉 = |xn〉, noting that the state of the electron at any point in time could then be written as:

|φ〉 = ∑ |xn〉·Cn(t) = ∑ |xn〉〈xn|φ〉 over all n

The Cn(t) = 〈xn|φ〉 coefficient is the amplitude for the electron to be at xn at time t. Hence, the Cn(t) amplitudes vary with t as well as with xn. We’ll re-write them as Cn(t) = C(xn, t) = C(xn). Note that the latter notation does not explicitly show the time dependence.

iħ·(∂C(xn)/∂t) = E0C(xn) − AC(xn+b) − AC(xn−b)

Note that, as part of our move from the Cn(t) to the C(xn) notation, we write the time derivative dCn(t)/dt now as ∂C(xn)/∂t, so we use the partial derivative symbol now (∂). Of course, the other partial derivative will be ∂C(x)/∂x as we move from the count variable xn to the continuous variable x, but let’s not get ahead of ourselves here. The solution we found for our C(xn) functions was the following wavefunction:

C(xn) = a·e^(i·(k·xn − ω·t)) = a·e^(−i·ω·t)·e^(i·k·xn) = a·e^(−i·(E/ħ)·t)·e^(i·k·xn)

We also found the following relationship between E and k:

E = E0 − 2A·cos(kb)

Now, even Feynman struggles a bit with the definition of E0 and k here, and their relationship with E, which is graphed below.

energy

Indeed, he first writes, as he starts developing the model, that E0 is, physically, the energy the electron would have if it couldn’t leak away from one of the atoms, but then he also adds: “It represents really nothing but our choice of the zero of energy.”

This is all quite enigmatic because we cannot just do whatever we want when discussing the energy of a particle. As I pointed out in one of my previous posts, when discussing the energy of a particle in the context of the wavefunction, we generally consider it to be the sum of three different energy concepts:

  1. The particle’s rest energy m0c², which de Broglie referred to as internal energy (Eint), and which includes the rest mass of the ‘internal pieces’, as Feynman puts it (now we call those ‘internal pieces’ quarks), as well as their binding energy (i.e. the quarks’ interaction energy).
  2. Any potential energy it may have because of some field (i.e. if it is not traveling in free space), which we usually denote by U. This field can be anything—gravitational, electromagnetic: it’s whatever changes the energy of the particle because of its position in space.
  3. The particle’s kinetic energy, which we write in terms of its momentum p: m·v²/2 = m²·v²/(2m) = (m·v)²/(2m) = p²/(2m).

It’s obvious that we cannot just “choose” the zero point here: the particle’s rest energy is its rest energy, and its velocity is its velocity. So it’s not quite clear what the E0 in our model really is. As far as I am concerned, it represents the average energy of the system really, so it’s just like the E0 for our ammonia molecule, or the E0 for whatever two-state system we’ve seen so far. In fact, when Feynman writes that we can “choose our zero of energy so that E0 − 2A = 0”, what he does, really, is actually make some assumption in regard to the relative magnitude of the various amplitudes involved.

We should probably think about it in this way: −(i/ħ)·E0 is the amplitude for the electron to just stay where it is, while i·A/ħ is the amplitude to go somewhere else—and note we’ve got two possibilities here: the electron can go to |xn+1〉,  or, alternatively, it can go to |xn−1〉. Now, amplitudes can be associated with probabilities by taking the absolute square, so I’d re-write the E0 − 2A = 0 assumption as:

E0 = 2A ⇔ |−(i/ħ)·E0|² = |(i/ħ)·2A|²

Hence, in my humble opinion, Feynman’s assumption that E0 − 2A = 0 has nothing to do with ‘choosing the zero of energy’. It’s more like a symmetry assumption: we’re basically saying it’s as likely for the electron to stay where it is as it is to move to the next position. It’s an idea I need to develop somewhat further, as Feynman seems to just gloss over these little things. For example, I am sure it is not a coincidence that the EI, EII, EIII and EIV energy levels we found when discussing the hyperfine splitting of the hydrogen ground state also add up to 0. In fact, you’ll remember we could actually measure those energy levels (EI = EII = EIII = A ≈ 9.23×10⁻⁶ eV, and EIV = −3A ≈ −27.7×10⁻⁶ eV), so saying that we can “choose” some zero energy point is plain nonsense. The question just doesn’t arise. In any case, as I have to continue the development here, I’ll leave this point for further analysis in the future. So… Well… Just note this E0 − 2A = 0 assumption, as we’ll need it in a moment.

The second assumption we’ll need concerns the variation in k. As you know, we can only get a wave packet if we allow for uncertainty in k which, in turn, translates into uncertainty for E. We write:

ΔE = Δ[E0 − 2A·cos(kb)]

Of course, we’d need to interpret the Δ as a variance (σ2) or a standard deviation (σ) so we can apply the usual rules – i.e. var(a) = 0, var(aX) = a2·var(X), and var(aX ± bY) = a2·var(X) + b2·var(Y) ± 2ab·cov(X, Y) – to be a bit more precise about what we’re writing here, but you get the idea. In fact, let me quickly write it out:

var[E0 − 2A·cos(kb)] = var(E0) + 4A2·var[cos(kb)] ⇔ var(E) = 4A2·var[cos(kb)]
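You can, of course, check that variance rule numerically. A quick Monte-Carlo sketch—the values for E0, A and b, and the uniform distribution for k, are arbitrary assumptions of mine, just to make the point:

```python
import numpy as np

rng = np.random.default_rng(1)
E0, A, b = 2.0, 0.5, 1.0   # hypothetical values

# Sample k from some distribution (uniform over -pi/b..+pi/b, say):
k = rng.uniform(-np.pi / b, np.pi / b, size=100_000)
E = E0 - 2 * A * np.cos(k * b)

# var(E0 - 2A*cos(kb)) = var(E0) + 4A^2*var(cos(kb)) = 4A^2*var(cos(kb)),
# because E0 is a constant and so var(E0) = 0:
lhs = np.var(E)
rhs = 4 * A**2 * np.var(np.cos(k * b))
print(abs(lhs - rhs) < 1e-9)  # True: the identity holds exactly, sample by sample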

Now, you should check the post scriptum to my page on the Essentials to see what the probability density function of the cosine of a randomly distributed variable looks like, and then you should go online to find a formula for its variance, and then you can work it all out yourself, because… Well… I am not going to do it for you. What I want to do here is just show how Feynman gets Schrödinger’s equation out of all of these simplifications.

So what’s the second assumption? Well… As the graph shows, our k can take any value between −π/b and +π/b, and therefore, the kb argument in our cosine function can take on any value between −π and +π. In other words, kb could be any angle. However—as Feynman puts it—we’ll be assuming that kb is ‘small enough’, so we can use the small-angle approximations whenever we see the cos(kb) and/or sin(kb) functions. So we write: sin(kb) ≈ kb and cos(kb) ≈ 1 − (kb)²/2 = 1 − k²b²/2. Now, that assumption led to another grand result, which we also derived in our previous post. It had to do with the group velocity of our wave packet, which we calculated as:

v = dω/dk = (2Ab²/ħ)·k

Of course, we should interpret our k here as “the typical k“. Huh? Yes… That’s how Feynman refers to it, and I have no better term for it. It’s some kind of ‘average’ of the Δk interval, obviously, but… Well… Feynman does not give us any exact definition here. Of course, if you look at the graph once more, you’ll say that, if the typical kb has to be “small enough”, then its expected value should be zero. Well… Yes and no. If the typical kb is zero, or if k is zero, then v is zero, and then we’ve got a stationary electron, i.e. an electron with zero momentum. However, because we’re doing what we’re doing (that is, we’re studying “stuff that moves”—as I put it disrespectfully in a few of my posts, so as to distinguish from our analyses of “stuff that doesn’t move”, like our two-state systems, for example), our “typical k” should not be zero here. OK… We can now calculate what’s referred to as the effective mass of the electron, i.e. the mass that appears in the classical kinetic energy formula: K.E. = m·v²/2. Now, there are two ways to do that, and both are somewhat tricky in their interpretation:

1. Using both the E0 − 2A = 0 as well as the “small kb” assumption, we find that E = E0 − 2A·(1 − k²b²/2) = A·k²b². Using that for the K.E. in our formula yields:

meff = 2A·k²b²/v² = 2A·k²b²/[(2Ab²/ħ)·k]² = ħ²/(2Ab²)

2. We can use the classical momentum formula (p = m·v), and then the 2nd de Broglie equation, which tells us that each wavenumber (k) is to be associated with a value for the momentum (p) using the p = ħk (so p is proportional to k, with ħ as the factor of proportionality). So we can now calculate meff as meff = ħk/v. Substituting again for what we’ve found above, gives us the same:

meff = ħ·k/v = ħ·k/[(2Ab²/ħ)·k] = ħ²/(2Ab²)
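A quick numerical sanity check shows both routes do give the same meff = ħ²/(2Ab²). The values for A, b and the ‘typical’ k are, of course, just hypothetical, and I set ħ = 1:

```python
hbar = 1.0            # natural units (assumption)
A, b = 0.8, 0.4       # hypothetical amplitude and lattice spacing
k = 0.05 / b          # a "typical" k with kb small, so the small-angle approximation holds

v = (2 * A * b**2 / hbar) * k        # the group velocity v = d(omega)/dk

# Route 1: K.E. = A*k^2*b^2 = meff*v^2/2, so meff = 2*A*k^2*b^2 / v^2
meff_1 = 2 * A * k**2 * b**2 / v**2

# Route 2: p = hbar*k = meff*v, so meff = hbar*k / v
meff_2 = hbar * k / v

meff_exact = hbar**2 / (2 * A * b**2)
print(meff_1, meff_2, meff_exact)  # all three agree
```

Note that k drops out of both routes, which is exactly why we can speak of the effective mass at all.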

Of course, we’re not supposed to know the de Broglie relations at this point in time. :-) But, now that you’ve seen them anyway, note how we have two formulas for the momentum:

  • The classical formula (p = m·v) tells us that the momentum is proportional to the classical velocity of our particle, and m is then the factor of proportionality.
  • The quantum-mechanical formula (p = ħk) tells us that the (typical) momentum is proportional to the (typical) wavenumber, with Planck’s constant (ħ) as the factor of proportionality. Combining both gives us the classical and the quantum-mechanical perspective of a moving particle in one equation:

meff·v = ħ·k

I know… It’s an obvious equation but… Well… Think of it. It’s time to get back to the main story now. Remember we were trying to find Schrödinger’s equation? So let’s get on with it. :-)

To do so, we need one more assumption. It’s the third major simplification and, just like the others, the assumption is obvious on first, but not on second thought. :-( So… What is it? Well… It’s easy to see that, in our meff = ħ²/(2Ab²) formula, all depends on the value of 2Ab². So, just like we should wonder what happens with that kb factor in the argument of our sine or cosine function if b goes to zero—i.e. if we’re letting the lattice spacing go to zero, so we’re moving from a discrete to a continuous analysis now—we should also wonder what happens with that 2Ab² factor! Well… Think about it. Wouldn’t it be reasonable to assume that the effective mass of our electron is determined by some property of the material, or the medium (so that’s the silicon in our previous post) and, hence, that it’s constant really? Think of it: we’re not changing the fundamentals really—we just have some electron roaming around in some medium and all that we’re doing now is bringing those xn closer together. Much closer. It’s only logical, then, that our amplitude to jump from xn±1 to xn would also increase, no? So what we’re saying is that 2Ab² is some constant which we write as ħ²/meff or, what amounts to the same, that A·b² = ħ²/(2·meff).

Of course, you may raise two objections here:

  1. The A·b² = ħ²/(2·meff) assumption establishes a very particular relation between A and b, as we can write A as A = [ħ²/(2meff)]·b⁻² now. So we’ve got like a y = 1/x² relation here. Where the hell does that come from?
  2. We were talking some real stuff here: a crystal lattice with atoms that, in reality, do have some spacing, so that corresponds to some real value for b. So that spacing gives some actual physical significance to those xn values.

Well… What can I say? I think you should re-read that quote of Feynman when I started this post. We’re going to get Schrödinger’s equation – i.e. the ultimate prize for all of the hard work that we’ve been doing so far – but… Yes. It’s really very heuristic, indeed! :-) But let’s get on with it now! We can re-write our Hamiltonian equation as:

iħ·(∂C(xn)/∂t) = E0·C(xn) − A·C(xn+b) − A·C(xn−b)

= (E0 − 2A)·C(xn) + A·[2C(xn) − C(xn+b) − C(xn−b)] = A·[2C(xn) − C(xn+b) − C(xn−b)]

Now, I know your brain is about to melt down but, fiddling with this equation as we’re doing right now, Schrödinger recognized a formula for the second-order derivative of a function. I’ll just jot it down, and you can google it so as to double-check where it comes from:

second derivative

Just substitute f(x) for C(xn) in the second part of our equation above, and you’ll see we can effectively write that 2C(xn) − C(xn+b) − C(xn−b) factor as:

formula 1
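If you don’t want to google it, you can convince yourself numerically that [f(x+b) + f(x−b) − 2f(x)]/b² really does tend to the second derivative f′′(x) as b goes to zero. A quick sketch, using sin(x) because we know its second derivative is −sin(x):

```python
import math

def second_derivative(f, x, b):
    """Central-difference approximation: f''(x) ~ [f(x+b) + f(x-b) - 2*f(x)] / b^2.
    Note that 2*f(x) - f(x+b) - f(x-b) is then just -b^2 * f''(x)."""
    return (f(x + b) + f(x - b) - 2 * f(x)) / b**2

# Check against a function with a known second derivative: (sin x)'' = -sin(x)
x0 = 1.2
for b in (0.1, 0.01, 0.001):
    print(b, second_derivative(math.sin, x0, b), -math.sin(x0))
```

The smaller the spacing b, the closer the approximation gets—which is exactly the limit we’re taking when we let the lattice spacing go to zero.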

We’re done. We just keep iħ·(∂C(xn)/∂t) on the left-hand side now and multiply the expression above by A, to get what we wanted to get, and that’s – YES! – Schrödinger’s equation:

Schrodinger 2

Whatever your objections to this ‘derivation’, it is the correct equation. For a particle in free space, we just write m instead of meff, but it’s exactly the same. I’ll now give you Feynman’s full quote, which is quite enlightening:

“We do not intend to have you think we have derived the Schrödinger equation but only wish to show you one way of thinking about it. When Schrödinger first wrote it down, he gave a kind of derivation based on some heuristic arguments and some brilliant intuitive guesses. Some of the arguments he used were even false, but that does not matter; the only important thing is that the ultimate equation gives a correct description of nature. The purpose of our discussion is then simply to show you that the correct fundamental quantum mechanical equation [i.e. Schrödinger’s equation] has the same form you get for the limiting case of an electron moving along a line of atoms. We can think of it as describing the diffusion of a probability amplitude from one point to the next along the line. That is, if an electron has a certain amplitude to be at one point, it will, a little time later, have some amplitude to be at neighboring points. In fact, the equation looks something like the diffusion equations which we have used in Volume I. But there is one main difference: the imaginary coefficient in front of the time derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”

So… That says it all, I guess. Isn’t it great to be where we are? We’ve really climbed a mountain here. And I think the view is gorgeous. :-)

Oh—just in case you’d think I did not give you Schrödinger’s equation, let me write it in the form you’ll usually see it:

schrodinger 3

Done! :-)

Quantum math in solid-state physics

I’ve said it a couple of times already: so far, we’ve only studied stuff that does not move in space. Hence, till now, time was the only variable in our wavefunctions. So now it’s time to… Well… Study stuff that does move in space. :-)

Is that compatible with the title of this post? Solid-state physics? Solid-state stuff doesn’t move, does it? Well… No. But what we’re going to look at is how an electron travels through a solid crystal or, more generally, how an atomic excitation can travel through it. In fact, what we’re really going to look at is how the wavefunction itself travels through space. However, that’s a rather bold statement, and so you should just read this post and judge for yourself. To be specific, we’re going to look at what happens in semiconductor material, like the silicon that’s used in microelectronic components like transistors and integrated circuits (ICs). You surely know the classical idea of that, which involves imagining an electron can be situated in a kind of ‘pit’ at one particular atom (or in an electron hole, as it’s usually referred to), and it just moves from pit to pit. The Wikipedia article on it defines an electron hole as follows: an electron hole is the absence of an electron from a full valence band: the concept is used to conceptualize the interactions of the electrons within a nearly full system, i.e. a system which is missing just a few electrons. But here we’re going to forget about the classical picture. We’ll try to model it using the wavefunction concept. So how does that work? Feynman approaches it as follows.

If we look at a (one-dimensional) line of atoms – we can extend to a two- and three-dimensional analysis later – we may define an infinite number of base states for the extra electron that we think of as moving through the crystal. If the electron is with the n-th atom, then we’ll say it’s in a base state which we shall write as |n〉. Likewise, if it’s at atom n+1 or n−1, then we’ll associate that with base state |n+1〉 and |n−1〉 respectively. That’s what’s visualized below, and you should just go along with the story here: don’t think classically, i.e. in terms of the electron being either here or, else, somewhere else. No. It’s got an amplitude to be anywhere. If you can’t take that… Well… I am sorry but that’s what QM is all about!

electron moving

As usual, we write the amplitude for the electron to be in one of those states |n〉 as Cn(t) = 〈n|φ〉, and so we can then write the electron’s state at any point in time t by superposing all base states, so that’s the weighted sum of all base states, with the weights being equal to the associated amplitudes. So we write:

|φ〉 = ∑ |n〉·Cn(t) = ∑ |n〉〈n|φ〉 over all n

Now we add some assumptions. One assumption is that an electron cannot directly jump to its next nearest neighbor: if it goes to the next nearest one, it will first have to go to the nearest one. So two steps are needed to go from state |n−1〉 to state |n+1〉. This assumption simplifies the analysis: we can discuss more general cases later. To be specific, we’ll assume the amplitude to go from one base state to another, e.g. from |n〉 to |n+1〉, or from |n−1〉 to state |n〉, is equal to i·A/ħ. You may wonder where this comes from, but it’s totally in line with equating our Hamiltonian’s non-diagonal elements to −A. Let me quickly insert a small digression here—for those who do really wonder where this comes from. :-)

START OF DIGRESSION

Just check out those two-state systems we described, or that post of mine in which I explained why the following formulas are actually quite intuitive and easy to understand:

  • U12(t + Δt, t) = − (i/ħ)·H12(t)·Δt = (i/ħ)·A·Δt and
  • U21(t + Δt, t) = − (i/ħ)·H21(t)·Δt = (i/ħ)·A·Δt

More generally, you’ll remember that we wrote Uij(t + Δt, t) as:

Uij(t + Δt, t) = Uij(t, t) + Kij·Δt = δij + Kij·Δt = δij − (i/ħ)·Hij(t)·Δt

That looks monstrous but, frankly, what we have here is just a formula like this:

 f(x+Δx) = f(x) + [df(x)/dx]·Δx

In case you didn’t notice, the formula is just the definition of the derivative if we write it as Δy/Δx = df(x)/dx for Δx → 0. Hence, the Kij coefficient in this formula is to be interpreted as a time derivative. Now, we re-wrote that Kij coefficient as the amplitude −(i/ħ)·Hij(t) and, therefore, that amplitude – i.e. the i·A/ħ factor (for i ≠ j) I introduced above – is to be interpreted as a time derivative. [Now that we’re here, let me quickly add that a time derivative gives the rate of change of some quantity per unit time. So that i·A/ħ factor is also expressed per unit time.] We’d then just move the −(i/ħ) factor in that −(i/ħ)·Hij(t) coefficient to the other side to get the grand result we got for two-state systems, i.e. the Hamiltonian equations, which we could write in a number of ways, as shown below:

hamiltonian equations

So… Well… That’s all there is to it, basically. Quantum math is not easy but, if anything, it is logical. You just have to get used to that imaginary unit (i) in front of stuff. That makes it always look very mysterious. :-) However, it should never scare you. You can just move it in or out of the differential operator, for example: i·df(x)/dt = d[i·f(x)]/dt. [Of course, i·f(x) ≠ f(i·x)!] So just think of i as a reminder that the number that follows it points in a different direction. To be precise: its angle with the other number is 90°. It doesn’t matter what we call those two numbers. The convention is to say that one is the real part of the wavefunction, while the other is the imaginary part but, frankly, in quantum math, both numbers are just as real. :-)

END OF DIGRESSION

Yes. Let me get back to the lesson here. The assumption is that the Hamiltonian equations for our system here, i.e. the electron traveling from hole to hole, look like the following equation:

Hamiltonian

It’s really like those iħ·(dC1/dt) = E0C1 − AC2 and iħ·(dC2/dt) = − AC1 + E0C2 equations above, except that we’ve got three terms here:

  1. −(i/ħ)·E0 is the amplitude for the electron to just stay where it is, so we multiply that with the amplitude of the electron to be there at that time, i.e. the amplitude Cn(t), and bingo! That’s the first contribution to the time rate of change of the Cn amplitude (i.e. dCn/dt). [Note that all I did was bring that iħ factor in front to the other side: 1/(iħ) = −(i/ħ).] Of course, you also need to know what E0 is now: that’s just the (average) energy of our electron. So it’s really like the E0 of our ammonia molecule—or the average energy of any two-state system, really.
  2. −(i/ħ)·(−A) = i·A/ħ is the amplitude to go from one base state to another, i.e. from |n+1〉 to |n〉, for example. In fact, the second term models exactly that: i·A/ħ times the amplitude to be in state |n+1〉 is the second contribution to the time rate of change of the Cn amplitude.
  3. Finally, the electron may also be in state |n−1〉 and go to |n〉 from there, so i·A/ħ times the amplitude to be in state |n−1〉 is yet another contribution to the time rate of change of the Cn amplitude.
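To make these three contributions more tangible, here’s a small numerical sketch of my own (not Feynman’s): we truncate the chain to a finite number of atoms, pick arbitrary values for E0 and A (in units with ħ = 1), and evolve an electron that starts out on the middle atom. The amplitude indeed leaks from atom to atom while total probability is conserved.

```python
import numpy as np

# Integrate i*hbar*dC_n/dt = E0*C_n - A*C_{n+1} - A*C_{n-1} on a finite chain
# of N atoms. E0 and A are hypothetical illustrative values; units with hbar = 1.
hbar = 1.0
E0, A = 1.0, 0.2
N = 101

# Tridiagonal Hamiltonian matrix: E0 on the diagonal, -A next to it.
H = (np.diag(np.full(N, E0))
     + np.diag(np.full(N - 1, -A), 1)
     + np.diag(np.full(N - 1, -A), -1))

# Exact time evolution C(t) = exp(-i*H*t/hbar) @ C(0) via eigendecomposition.
w, V = np.linalg.eigh(H)
C0 = np.zeros(N, dtype=complex)
C0[N // 2] = 1.0                     # electron starts on the middle atom
t = 50.0
C_t = V @ (np.exp(-1j * w * t / hbar) * (V.conj().T @ C0))

P = np.abs(C_t) ** 2
print(P.sum())                       # total probability is conserved (= 1)
print(P[N // 2] < 1.0)               # amplitude has leaked to neighboring atoms
```

The electron’s amplitude really does go pip-pip-pip down the chain, which is the point of the whole exercise.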

Now, we don’t want to think about what happens at the start and the end of our line of atoms, so we’ll just assume we’ve got an infinite number of them. As a result, we get an infinite number of equations, which Feynman summarizes as:

hamiltonian equations - 2

Holy cow! How do we solve that? We know that the general solution for those Cn amplitudes is likely to be some function like this:

Cn(t) = an·e−(i/ħ)·E·t

In case you wonder where this comes from, check my post on the general solution for N-state systems. If we substitute that trial solution in that iħ·(dCn/dt) = E0Cn − ACn+1 − ACn−1, we get:

E·an = E0·an − A·an+1 − A·an−1

[Just do that derivative, and you’ll see the iħ can be scrapped. Also, the exponentials on both sides of the equation cancel each other out.] Now, that doesn’t look too bad, and we can also write it as (E − E0)·an = −A·(an+1 + an−1), but… Well… What’s the next step? We’ve got an infinite number of coefficients an here, so we can’t use the usual methods to solve this set of equations. Feynman tries something completely different here. It looks weird but… Well… He gets a sensible result, so… Well… Let’s go for it.

He first writes these coefficients an as a function of a distance, which he defines as xn = xn−1 + b, with b the atomic spacing, i.e. the distance between two atoms (see the illustration). So now we write an = f(xn) = a(xn). Note that we don’t write an = fn(xn) = an(xn). No. It’s just one function f = a, not an infinite number of functions fn = an. Of course, once you see what comes of it, you’ll say: sure! The (complex) an coefficient in that function is the non-time-varying part of our function, and it’s about time we insert some part that’s varying in space and so… Well… Yes, of course! Our an coefficients don’t vary in time, so they must vary in space. Well… Yes. I guess so. :-) Our E·an = E0·an − A·an+1 − A·an−1 equation becomes:

E·a(xn) = E0·a(xn) − A·a(xn+1) − A·a(xn−1) = E0·a(xn) − A·a(xn+b) − A·a(xn−b)

We can write this, once again, as (E − E0)·a(xn) = −A·[a(xn+b) + a(xn−b)]. Feynman notes this equation is like a differential equation, in the sense that it relates the value of some function (i.e. our a function, of course) at some point xn to the values of the same function at nearby points, i.e. xn ± b here. Frankly, I struggle a bit to see how it works exactly but Feynman now offers the following trial solution:

a(xn) = ei·k·xn

Huh? Why? And what’s k? Be patient. Just go along with this for a while. Let’s first do a graph. Think of xn as a nearly continuous variable representing position in space. This parameter k is then the spatial frequency of our wavefunction: larger values for k give the wavefunction a higher density in space, as shown below.

graph 

In fact, I shouldn’t confuse you here, but you’ll surely think of the wavefunction you saw so many times already:

ψ(x, t) = a·e−i·[(E/ħ)·t − (p/ħ)∙x] = a·e−i·(ω·t − k∙x) = a·ei·(k∙x − ω·t) = a·ei∙k∙x·e−i∙ω∙t

This was the elementary wavefunction we’d associate with any particle, and so k would be equal to p/ħ, which is just the second of the two de Broglie relations: E = ħω and p = ħk (or, what amounts to the same: E = h·f and λ = h/p). But you shouldn’t get confused. Not at this point. Or… Well… Not yet. :-)

Let’s just take this proposed solution and plug it in. We get:

E·ei·k·xn = E0·ei·k·xn − A·ei·k·(xn+b) − A·ei·k·(xn−b) ⇔ E = E0 − A·ei·k·b − A·e−i·k·b ⇔ E = E0 − 2A·cos(kb)

[In case you wonder what happens here: we just divide both sides by the common factor ei·k·xn, and then we also use that eiθ + e−iθ = 2·cosθ property.] So each k is associated with some energy E. In fact, to be precise, that E = E0 − 2A·cos(kb) function is a periodic function: it’s depicted below, and it reaches a maximum at k = ±π/b. [It’s easy to see why: E0 − 2A·cos(kb) reaches a maximum if cos(kb) = −1, i.e. if kb = ±π.]

energy
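You can verify this result numerically: plug the trial solution into E·a(xn) = E0·a(xn) − A·a(xn+b) − A·a(xn−b) with E = E0 − 2A·cos(kb), and check that both sides agree. The parameter values below are arbitrary.

```python
import numpy as np

# Numerical check of the dispersion relation: the trial solution a(x_n) =
# exp(i*k*x_n) satisfies E*a(x_n) = E0*a(x_n) - A*a(x_n+b) - A*a(x_n-b)
# exactly when E = E0 - 2A*cos(k*b). All values are arbitrary illustrations.
E0, A, b = 2.0, 0.3, 1.5
k = 0.7
xn = 4.0 * b

a = lambda x: np.exp(1j * k * x)
E = E0 - 2 * A * np.cos(k * b)

lhs = E * a(xn)
rhs = E0 * a(xn) - A * a(xn + b) - A * a(xn - b)
print(np.isclose(lhs, rhs))  # True: the trial solution works
```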

Of course, we still don’t really know what k or E are really supposed to represent, but think of it: it’s obvious that E can never be larger than E0 + 2A, nor smaller than E0 − 2A, whatever the value of k. Note that, once again, it doesn’t matter if we used +A or −A in our equations: the energy band remains the same. And… Well… We’ve dropped the term now: the energy band of a semiconductor. That’s what it’s all about. What we’re saying here is that our electron, as it moves about, can have no other energies than the values in this band. Having said that, this still doesn’t determine its energy: any energy level within that energy band is possible. So what does that mean? Hmm… Let’s take a break and not bother too much about k for the moment. Let’s look at our Cn(t) equations once more. We can now write them as:

Cn(t) = ei·k·xn·e−(i/ħ)·E·t = ei·k·xn·e−(i/ħ)·[E0 − 2A·cos(kb)]·t

You have enough experience now to sort of visualize what happens here. We can look at a certain xn value – read: a certain position in the lattice – and watch, as time goes by, how the real and imaginary part of our little Cn wavefunction vary sinusoidally. We can also do it the other way around, and take a snapshot of the lattice at a certain point in time, and then we see how the amplitudes vary from point to point. That’s easy enough.

The thing is: we’re interested in probabilities in the end, and our wavefunction does not satisfy us in that regard: if we take the absolute square, its phase vanishes, and so we get the same probability everywhere! [Note that we didn’t normalize our wavefunctions here. It doesn’t matter. We can always do that later.] Now that’s not great. So what can we do about that? Now that’s where that k comes back in the game. Let’s have a look.
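It’s a two-line check (the parameter values are arbitrary): the absolute square of Cn(t) is one at every atom and at every time, so the phase carries all the variation.

```python
import numpy as np

# |C_n(t)|^2 with C_n(t) = exp(i*k*x_n)*exp(-i*E*t/hbar) is the same at every
# atom n and at every time t. Parameter values are arbitrary; hbar = 1.
hbar, E0, A, b, k = 1.0, 2.0, 0.3, 1.5, 0.7
E = E0 - 2 * A * np.cos(k * b)

xn = np.arange(10) * b
for t in (0.0, 1.0, 7.5):
    C = np.exp(1j * k * xn) * np.exp(-1j * E * t / hbar)
    print(np.allclose(np.abs(C) ** 2, 1.0))  # True: equal probability everywhere
```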

The effective mass of an electron

We’d like to find a solution which sort of ‘localizes’ our electron in space. Now, we know we can do that, in general, by superposing wavefunctions having different frequencies. There are a number of ways to go about it, but the general idea is illustrated below.

Fourier_series_and_transform beats

The first animation (for which credit must go to Wikipedia once more) is, obviously, the most sophisticated one. It shows how a new function – in red, and denoted by s6(x) – is constructed by summing six sine functions of different amplitudes and with harmonically related frequencies. This particular sum is referred to as a Fourier series, and the so-called Fourier transform, i.e. the S(f) function (in blue), depicts the six frequencies and their amplitudes.

We’re more interested in the second animation here (for which credit goes to another nice site), which shows how a pattern of beats is created by just mixing two slightly different cosine waves. We want to do something similar here: we want to get a ‘wave packet‘ like the one below, which shows the real part only—but you can imagine the imaginary part :-) of course. [That’s exactly the same but with a phase shift, cf. the sine and cosine bit in Euler’s formula: eiθ = cosθ + i·sinθ.]

image

As you know, we must now make a distinction between the group velocity of the wave, and its phase velocity. That’s got to do with the dispersion relation, but we’re not going to get into the nitty-gritty here. Just remember that the group velocity corresponds to the classical velocity of our particle – so that must be the classical velocity of our electron here – and, equally important, also remember the following formula for that group velocity:

group velocity

Let’s see how that plays out. The ω in this equation is equal to E/ħ = [E0 − 2A·cos(kb)]/ħ, so dω/dk = d[− (2A/ħ)·cos(kb)]/dk = (2Ab/ħ)·sin(kb). However, we’ll usually assume k is fairly small, so the variation of the amplitude from one xn to the other is fairly small. In that case, kb will be fairly small, and then we can use the so-called small angle approximation formula sin(ε) ≈ ε. [Note the reasoning here is a bit tricky, though, because – theoretically – k may vary between −π/b and +π/b and, hence, kb can take any value between −π and +π.] Using the small angle approximation, we get:

solution velocity
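We can also quantify how good that small-angle approximation is. The values for A and b below are arbitrary illustrations (units with ħ = 1):

```python
import numpy as np

# Compare the exact group velocity v = (2Ab/hbar)*sin(kb) with the small-angle
# approximation v ≈ (2A*b^2/hbar)*k for a few values of kb. Arbitrary values.
hbar, A, b = 1.0, 0.3, 1.5

errors = []
for kb in (0.01, 0.1, 0.5):
    k = kb / b
    v_exact = (2 * A * b / hbar) * np.sin(k * b)
    v_approx = (2 * A * b**2 / hbar) * k
    errors.append(abs(v_approx - v_exact) / v_exact)
    print(kb, errors[-1])   # the relative error grows with kb
```

For kb of the order of 0.01, the two velocities agree to a few parts in a hundred thousand; near the edge of the band the approximation breaks down, which is why the reasoning is a bit tricky there.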

So we’ve got a quantum-mechanical calculation here that yields a classical velocity. Now, we can do something interesting: we can calculate what is known as the effective mass of the electron, i.e. the mass that appears in the classical kinetic energy formula: K.E. = m·v2/2. Or in the classical momentum formula: p = m·v. So we can now write: K.E. = meff·v2/2 and p = meff·v. But… Well… The second de Broglie equation tells us that p = ħk, so we find that meff = ħk/v. Substituting for what we’ve found above gives us:

formula for m eff

Unsurprisingly, we find that the value of meff is inversely proportional to A. It’s usually stated in units of the true mass of the electron, i.e. its mass in free space (me ≈ 9.11×10−31 kg) and, in these units, it’s usually in the range of 0.01 to 10. You’ll say: 0.01, i.e. one percent of its actual mass? Yes. An electron may travel more freely in matter than it does in free space. :-) That’s weird but… Well… Quantum mechanics is weird.
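As a sanity check on that meff = ħ2/(2Ab2) formula, here is a quick calculation. Note that the values for A and b are order-of-magnitude guesses of mine (A of the order of one eV, b a typical atomic spacing), not measured data:

```python
# m_eff = hbar^2/(2*A*b^2), expressed in units of the true electron mass.
# A and b below are hypothetical order-of-magnitude values, not measured data.
hbar = 1.054571817e-34   # J·s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J

A = 1.0 * eV             # hypothetical hopping energy
b = 2.0e-10              # hypothetical atomic spacing, in meters

m_eff = hbar**2 / (2 * A * b**2)
print(m_eff / m_e)       # of order 1, consistent with the 0.01–10 range
```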

In any case, I’ll wrap this post up now. You’ve got a nice model here. As Feynman puts it:

“We have now explained a remarkable mystery—how an electron in a crystal (like an extra electron put into germanium) can ride right through the crystal and flow perfectly freely even though it has to hit all the atoms. It does so by having its amplitudes going pip-pip-pip from one atom to the next, working its way through the crystal. That is how a solid can conduct electricity.”

Well… There you go. :-)

Systems with 2 spin-1/2 particles (II)

In our previous post, we noted the Hamiltonian for a simple system of two spin-1/2 particles—a proton and an electron (i.e. a hydrogen atom, in other words):

hamil

After noting that this Hamiltonian is “the only thing that it can be, by the symmetry of space, i.e. so long as there is no external field,” Feynman also notes the constant term (E0) depends on the level we choose to measure energies from, so one might just as well take E0 = 0, in which case the formula reduces to H = Aσe·σp. Feynman analyzes this term as follows:

If there are two magnets near each other with magnetic moments μe and μp, the mutual energy will depend on μe·μp = |μe||μp|cosα = μeμpcosα — among other things. Now, the classical thing that we call μe or μp appears in quantum mechanics as μeσe and μpσp respectively (where μp is the magnetic moment of the proton, which is about 1000 times smaller than μe, and has the opposite sign). So the H = Aσe·σp equation says that the interaction energy is like the interaction between two magnets—only not quite, because the interaction of the two magnets depends on the radial distance between them. But the equation could be—and, in fact, is—some kind of an average interaction. The electron is moving all around inside the atom, and our Hamiltonian gives only the average interaction energy. All it says is that for a prescribed arrangement in space for the electron and proton there is an energy proportional to the cosine of the angle between the two magnetic moments, speaking classically. Such a classical qualitative picture may help you to understand where the H = Aσe·σp equation comes from.

That’s loud and clear, I guess. The next step is to introduce an external field. The formula for the Hamiltonian (we don’t distinguish between the matrix and the operator here) then becomes:

H = Aσe·σp − μeσe·B − μpσp·B

The first term is the term we already had. The second term is the energy the electron would have in the magnetic field if it were there alone. Likewise, the third term is the energy the proton would have in the magnetic field if it were there alone. When reading this, you should remember the following convention: classically, we write the energy U as U = −μ·B, because the energy is lowest when the moment is along the field. Hence, for positive particles, the magnetic moment is parallel to the spin, while for negative particles it’s opposite. In other words, μp is a positive number, while μe is negative. Feynman sums it all up as follows:

Classically, the energy of the electron and the proton together, would be the sum of the two, and that works also quantum mechanically. In a magnetic field, the energy of interaction due to the magnetic field is just the sum of the energy of interaction of the electron with the external field, and of the proton with the field—both expressed in terms of the sigma operators. In quantum mechanics these terms are not really the energies, but thinking of the classical formulas for the energy is a way of remembering the rules for writing down the Hamiltonian.

That’s also loud and clear. So now we need to solve those Hamiltonian equations once again. Feynman does so first assuming B is constant and in the z-direction. I’ll refer you to him for the nitty-gritty. The important thing is the results here:

energy

He visualizes these – as a function of μB/A – as follows:

fig1Fig2

The illustration shows how the four energy levels have a different B-dependence:

  • EI, EII, EIII start at (0, 1) but EI increases linearly with B—with slope μ, to be precise (cf. the EI = A + μB expression);
  • In contrast, EII decreases linearly with B—again, with slope μ (cf. the EII = A − μB expression);
  • We then have the EIII and EIV curves, which start out horizontally, to then curve and approach straight lines for large B, with slopes equal to μ’.

Oh—I realize I forgot to define μ and μ’. Let me do that now: μ = −(μe + μp) and μ’ = −(μe − μp). And remember what we said above: μp is about 1000 times smaller than μe, and has the opposite sign. OK. The point is: the magnetic field shifts the energy levels of our hydrogen atom. This is referred to as the Zeeman effect. Feynman describes it as follows:
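To see that these energy formulas are consistent with the Hamiltonian, we can build the 4×4 matrix H = Aσe·σp − μeσe·B − μpσp·B numerically (with B along the z-direction) and compare its eigenvalues with EI = A + μB, EII = A − μB and EIII, IV = −A ± √(4A2 + μ’2B2). The values for A, μe, μp and B below are arbitrary illustrations; only the signs follow the conventions above.

```python
import numpy as np

# Build H = A*(sigma_e . sigma_p) - mu_e*B*sigma_ze - mu_p*B*sigma_zp with B
# along z, using Kronecker products (electron factor first), and compare its
# eigenvalues with the closed-form Zeeman levels. Values are illustrative only.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

A, mu_e, mu_p, B = 1.0, -1.0, 0.001, 0.8   # mu_e negative, mu_p small positive

sigma_dot = sum(np.kron(s, s) for s in (sx, sy, sz))          # sigma_e . sigma_p
H = A * sigma_dot - mu_e * B * np.kron(sz, I2) - mu_p * B * np.kron(I2, sz)

mu, mu_prime = -(mu_e + mu_p), -(mu_e - mu_p)
expected = sorted([A + mu * B,
                   A - mu * B,
                   -A + np.sqrt(4 * A**2 + (mu_prime * B)**2),
                   -A - np.sqrt(4 * A**2 + (mu_prime * B)**2)])

print(np.allclose(sorted(np.linalg.eigvalsh(H).real), expected))  # True
```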

The curves show the Zeeman splitting of the ground state of hydrogen. When there is no magnetic field, we get just one spectral line from the hyperfine structure of hydrogen. The transitions between state IV and any one of the others occurs with the absorption or emission of a photon whose (angular) frequency is 1/ħ times the energy difference 4A. [See my previous post for the calculation.] However, when the atom is in a magnetic field B, there are many more lines, and there can be transitions between any two of the four states. So if we have atoms in all four states, energy can be absorbed—or emitted—in any one of the six transitions shown by the vertical arrows in the illustration above.

The last question is: what makes the transitions go? Let me also quote Feynman’s answer to that:

The transitions will occur if you apply a small disturbing magnetic field that varies with time (in addition to the steady strong field B). It’s just as we saw for a varying electric field on the ammonia molecule. Only here, it is the magnetic field which couples with the magnetic moments and does the trick. But the theory follows through in the same way that we worked it out for the ammonia. The theory is the simplest if you take a perturbing magnetic field that rotates in the xy-plane—although any horizontal oscillating field will do. When you put in this perturbing field as an additional term in the Hamiltonian, you get solutions in which the amplitudes vary with time—as we found for the ammonia molecule. So you can calculate easily and accurately the probability of a transition from one state to another. And you find that it all agrees with experiment.

Alright! All loud and clear. :-)

The magnetic quantum number

At very low magnetic fields, we still have the Zeeman splitting, but we can now approximate it as follows:

magnetic quantum number

This simplified representation of things explains an older concept you may still see mentioned: the magnetic quantum number, which is usually denoted by m. Feynman’s explanation of it is quite straightforward, and so I’ll just copy it as is:

Capture

As he notes: the concept of the magnetic quantum number has nothing to do with new physics. It’s all just a matter of notation. :-)

Well… This concludes our short study of four-state systems. On to the next! :-)

Systems with 2 spin-1/2 particles (I)

I agree: this is probably the most boring title of a post ever. However, it should be interesting, as we’re going to apply what we’ve learned so far – i.e. the quantum-mechanical model of two-state systems – to a much more complicated problem—the solution of which can then be generalized to describe even more complicated situations.

Two spin-1/2 particles? Let’s recall the most obvious example. In the ground state of a hydrogen atom (H), we have one electron that’s bound to one proton. The electron occupies the lowest energy state in its ground state, which – as Feynman shows in one of his first quantum-mechanical calculations – is equal to −13.6 eV. More or less, that is. :-)  You’ll remember the reason for the minus sign: the electron has more energy when it’s unbound, which it releases as radiation when it joins an ionized hydrogen atom or, to put it simply, when a proton and an electron come together. In-between being bound and unbound, there are other discrete energy states – illustrated below – and we’ll learn how to describe the patterns of motion of the electron in each of those states soon enough.

bohr_transitions

Not in this post, however. :-( In this post, we want to focus on the ground state only. Why? Just because. That’s today’s topic. :-) The proton and the electron can be in either of two spin states. As a result, the so-called ground state is not really a single definite-energy state. The spin states cause the so-called hyperfine structure in the energy levels: it splits them into several nearly equal energy levels, so that’s what’s referred to as hyperfine splitting.

[…] OK. Let’s go for it. As Feynman points out, the whole model is reduced to a set of four base states:

  1. State 1: |++〉 = |1〉 (the electron and proton are both ‘up’)
  2. State 2: |+−〉 = |2〉  (the electron is ‘up’ and the proton is ‘down’)
  3. State 3: |−+〉 = |3〉  (the electron is ‘down’ and the proton is ‘up’)
  4. State 4: |−−〉 = |4〉  (the electron and proton are both ‘down’)

The simplification is huge. As you know, the spin of electrically charged elementary particles is related to their motion in space, but we don’t care about exact spatial relationships here: the spin can point in any direction, but all that matters is the relative orientation, and so everything is reduced to the ‘up’ or ‘down’ direction of the electron and the proton themselves. Full stop.

You know that the whole problem is to find the Hamiltonian coefficients, i.e. the energy matrix. Let me give them to you straight away. The energy levels involved are the following:

  • EI = EII = EIII = A ≈ 1.47×10−6 eV
  • EIV = −3A ≈ −4.4×10−6 eV

So the difference in energy levels is measured in ten-millionths of an electron-volt and, hence, the hyperfine splitting is really hyper-fine. The question is: how do we get these values? So that is what this post is about. Let’s start by reminding ourselves of what we learned so far.

The Hamiltonian operator

We know that, in quantum mechanics, we describe any state in terms of the base states. In this particular case, we’d do so as follows:

|ψ〉 = |1〉C1 + |2〉C2 + |3〉C3 +|4〉C4 with Ci = 〈i|ψ〉

We refer to |ψ〉 as the spin state of the system, and so it’s determined by those four Ci amplitudes. Now, we know that those Ci amplitudes are functions of time, and they are, in turn, determined by the Hamiltonian matrix. To be precise, we find them by solving a set of linear differential equations that we referred to as Hamiltonian equations. To be precise, we’d describe the behavior of |ψ〉 in time by the following equation:

hamiltonian operator

In case you forgot, the expression above is a short-hand for the following expression:

hamiltonian operator 2

The indices in this expression range over all base states and, therefore, it gives us everything we want: it really does describe the behavior, in time, of an N-state system. You’ll also remember that, when we’d use the Hamiltonian matrix in the way it’s used above (i.e. as an operator on a state), we’d put a little hat over it, so we defined the Hamiltonian operator as:

operator

So far, so good—but this does not solve our problem: how do we find the Hamiltonian for this four-state system? What is it?

Well… There’s no one-size-fits-all answer to that: the analysis of two different two-state systems, like an ammonia molecule, or one spin-1/2 particle in a magnetic field, was different. Having said that, we did find we could generalize some of the solutions we’d find. For example, we’d write the Hamiltonian for a spin-1/2 particle, with a magnetic moment that’s assumed to be equal to μ, in a magnetic field B = (Bx, By, Bz) as:

sigma matrices

In this equation, we’ve got a set of four two-by-two matrices—the three so-called sigma matrices (σx, σy, σz), plus the unit matrix (δij = 1)—which we referred to as the Pauli spin matrices, and which we wrote as:

Capture

You’ll remember that expression – which we further abbreviated, even more elegantly, to H = −μσ·B – covered all two-state systems involving a magnetic moment in a magnetic field. In fact, you’ll remember we could actually easily adapt the model to cover two-state systems in electric fields as well.

In short, these sigma matrices made our life very easy—as they covered a whole range of two-state models. So… Well… To make a long story short, what we want to do here is find some similar sigma matrices for four-state problems. So… Well… Let’s do that.

First, you should remind yourself of the fact that we could also use these sigma matrices as little operators themselves. To be specific, we’d let them ‘operate’ on the base states, and we’d find they’d do the following:

P3

You need to read this carefully. What it says is that the σz matrix, as an operator, acting on the ‘up’ base state, yields the same base state (i.e. ‘up’), and that the same operator, acting on the ‘down’ state, gives us the same but with a minus sign in front. Likewise, the σy matrix operating on the ‘up’ and ‘down’ states will give us i·|down〉 and −i·|up〉 respectively.

The trick to solve our problem here (i.e. our four-state system) is to apply those sigma matrices to the electron and the proton separately. Feynman introduces a new notation here by distinguishing the electron and proton sigma operators: the electron sigma operators (σxe, σye, and σze) operate on the electron spin only, while – you guessed it – the proton sigma operator ((σxp, σyp, and σzp) acts on the proton spin only. Applying it to the four states we’re looking at (i.e. |++〉, |+−〉, |−+〉 and |−−〉), we get the following bifurcation for our σx operator:

  1. σxe|++〉 = |−+〉
  2. σxe|+−〉 = |−−〉
  3. σxe|−+〉 = |++〉
  4. σxe|−−〉 = |+−〉
  5. σxp|++〉 = |+−〉
  6. σxp|+−〉 = |++〉
  7. σxp|−+〉 = |−−〉
  8. σxp|−−〉 = |−+〉
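We can verify these rules by representing the four base states |++〉, |+−〉, |−+〉, |−−〉 as the standard basis vectors of a four-dimensional space (electron slot first) and building σxe and σxp as Kronecker products. A minimal sketch:

```python
import numpy as np

# sigma_xe = sigma_x ⊗ 1 flips the electron spin (first slot); sigma_xp =
# 1 ⊗ sigma_x flips the proton spin (second slot). The base states |++>, |+->,
# |-+>, |--> are the standard basis vectors of C^4, in that order.
sx = np.array([[0, 1], [1, 0]])
I2 = np.eye(2, dtype=int)
sxe, sxp = np.kron(sx, I2), np.kron(I2, sx)

basis = {s: v for s, v in zip(["++", "+-", "-+", "--"], np.eye(4, dtype=int))}

print(np.array_equal(sxe @ basis["++"], basis["-+"]))  # rule 1: True
print(np.array_equal(sxe @ basis["--"], basis["+-"]))  # rule 4: True
print(np.array_equal(sxp @ basis["++"], basis["+-"]))  # rule 5: True
print(np.array_equal(sxp @ basis["-+"], basis["--"]))  # rule 7: True
```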

You get the idea. We had three operators acting on two states, i.e. 6 possibilities. Now we combine these three operators with two different particles, so we have six operators now, and we let them act on four possible system states, so we have 24 possibilities now. Now, we can, of course, let these operators act one after another. Check the following for example:

σxeσzp|+−〉 = σxe[σzp|+−〉] = −σxe|+−〉 = −|−−〉

[I now realize that I should have used the ↑ and ↓ symbols for the ‘up’ and ‘down’ states, as the minus sign is used to denote two very different things here, but… Well… So be it.]

Note that we only have nine possible σxeσzp-like combinations, because σxeσzp = σzpσxe, and then we have the 2×3 = six σe and σp operators themselves, so that makes for 15 new operators. [Note that the commutativity of these operators (σxeσzp = σzpσxe) is not some general property of quantum-mechanical operators.] If we include the unit operator (δij = 1) – i.e. an operator that leaves all unchanged – we’ve got 16 in total. Now, we mentioned that we could write the Hamiltonian for a two-state system – i.e. a two-by-two matrix – as a linear combination of the four Pauli spin matrices. Likewise, one can demonstrate that the Hamiltonian for a four-state system can always be written as some linear combination of those sixteen ‘double-spin’ matrices. To be specific, we can write it as:

hamil

We should note a few things here. First, the E0 constant is, of course, to be multiplied by the unit matrix, so we should actually write E0δij instead of E0, but… Well… Quantum physicists always want to confuse you. :-) Second, the σe·σp is like the σ·B notation: we can look at the σxe, σye, σze and σxp, σyp, σzp matrices as being the three components of two new (matrix) vectors, which we write as σe and σp respectively. Thirdly, and most importantly, you’ll want proof of that equation above. Well… I am sorry but I am going to refer you to Feynman here: he shows that the expression above “is the only thing that the Hamiltonian can be.” The proof is based on the fundamental symmetry of space. He also adds that space is symmetrical only so long as there is no external field. :-)

Final question: what’s A? Well… Feynman is quite honest here as he says the following: “A can be calculated accurately once you understand the complete quantum theory of the hydrogen atom—which we so far do not. It has, in fact, been calculated to an accuracy of about 30 parts in one million. So, unlike the flip-flop constant A of the ammonia molecule, which couldn’t be calculated at all well by a theory, our constant A for the hydrogen can be calculated from a more detailed theory. But never mind, we will for our present purposes think of the A as a number which could be determined by experiment, and analyze the physics of the situation.”

So… Well… So far so good. We’ve got the Hamiltonian. That’s all we wanted, actually. But, now that we have come so far, let’s write it all out now.

Solving the equations

If that expression above is the Hamiltonian – and we assume it is, of course! – then our system of Hamiltonian equations can be written as:

dyna

[Note that we’ve switched to Newton’s ‘over-dot’ notation to denote time derivatives here.] Now, I could walk you through Feynman’s exposé but I guess you’ll trust the result. The equation above is equivalent to the following set of four equations:

set

We know that, because the Hamiltonian looks like this:

hamil-2

How do we know that? Well… Sorry: just check Feynman. :-) He just writes it all out. Now, we want to find those Ci functions. [When studying physics, the most important thing is to remember what it is that you’re trying to do. :-) ] Now, from my previous post (i.e. my post on the general solution for N-state systems), you’ll remember that those Ci functions should have the following functional form:

Ci(t) = ai·e−i·(E/ħ)·t

If we substitute that functional form for Ci(t) in our set of Hamiltonian equations, we can cancel the exponentials, so we get the following delightfully simple set of new equations:

sol1

The trivial solution, of course, is that all of the ai coefficients are zero, but – as mentioned in my previous post – we’re looking for non-trivial solutions here. Well… From what you see above, it’s easy to appreciate that one non-trivial but simple solution is:

a1 = 1 and a2 = a3 = a4 = 0

So we’ve got one set of ai coefficients here, and we’ll associate it with the first eigenvalue, or energy level, really—which we’ll denote as EI. [I am just being consistent here with what I wrote in my previous post, which explained how general solutions to N-state systems look like.] So we find the following:

EI = A

[Another thing you learn when studying physics is that the most amazing things are often summarized in super-terse equations, like this one here. :-) ]

But – Hey! Look at the symmetry between the first and last equation! 

We immediately get another simple – but non-trivial! – solution:

a4 = 1 and a1 = a2 = a3 = 0

We’ll associate the second energy level with that, so we write:

EII = A

We’ve got two left. I’ll leave that to Feynman to solve:

fe

Done! Four energy levels En (n = I, II, III, IV), and four associated energy state vectors – |n〉 – that describe their configuration (and which, as Feynman puts it, have the time dependence “factored out”). Perfect!
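Here’s a quick numerical confirmation of the whole exercise: diagonalizing the σe·σp matrix (taking E0 = 0 and measuring energies in units of A) yields the eigenvalue A three times and −3A once.

```python
import numpy as np

# Diagonalize sigma_e . sigma_p directly (E0 = 0, A = 1, i.e. energies in
# units of A): the eigenvalues should come out as A (three times) and -3A.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# sigma_e . sigma_p built with Kronecker products (electron factor first).
H = sum(np.kron(s, s) for s in (sx, sy, sz))

evals = np.linalg.eigvalsh(H).real
print(np.allclose(sorted(evals), [-3, 1, 1, 1]))  # True: EI,II,III = A, EIV = -3A
```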

Now, we mentioned the experimental values:

  • EI = EII = EIII = A ≈ 1.47×10−6 eV
  • EIV = −3A ≈ −4.4×10−6 eV

How can scientists measure these values? The theoretical analysis gives us the A and −3A values, but what about the empirical measurements? Well… We should find those values as the hydrogen atoms in state I, II or III get rid of the energy by emitting some radiation. The frequency of that radiation gives us the information we need, as illustrated below. The difference between EI = EII = EIII = A and EIV = −3A (i.e. 4A) should correspond to the (angular) frequency of the radiation that’s being emitted or absorbed as atoms go from one energy state to the other. Now, hydrogen atoms do absorb and emit microwave radiation with a frequency that’s equal to 1,420,405,751.8 Hz. More or less, that is. :-) [The standard error in the measurement is about two parts in 100 billion—and I am quoting some measurement done in the early 1960s here!]

diagram

Bingo! If f = ω/2π = (4A/ħ)/2π = 1,420,405,751.8 Hz, then A = 2π·ħ·f/4 = h·f/4 ≈ 1.47×10−6 eV.
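The arithmetic is a one-liner, using the CODATA value of Planck’s constant expressed in eV·s:

```python
# Back out A from the measured frequency: omega = 4A/hbar and f = omega/2*pi,
# so A = h*f/4.
h = 4.135667696e-15      # Planck's constant in eV·s
f = 1_420_405_751.8      # the measured hyperfine frequency, in Hz

A = h * f / 4
print(A)                 # about 1.47e-6 eV, so 4A is the famous 21 cm line energy
```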

So… Well… We’re done! I’ll see you tomorrow. :-) Tomorrow, we’re going to look at what happens when space is not symmetric, i.e. when we would have some external field! C u ! Cheers !

N-state systems

On the 10th of December, last year, I wrote that my next post would generalize the results we got for two-state systems. That didn’t happen: I didn’t write the ‘next post’—not till now, that is. No. Instead, I started digging—as you can see from all the posts in-between this one and the 10 December piece. And you may also want to take a look at my new Essentials page. :-) In any case, it is now time to get back to Feynman’s Lectures on quantum mechanics. Remember where we are: halfway, really. The first half was all about stuff that doesn’t move in space. The second half, i.e. all that we’re going to study now, is about… Well… You guessed it. :-) That’s going to be about stuff that does move in space. To see how that works, we first need to generalize the two-state model to an N-state model. Let’s do it.

You’ll remember that, in quantum mechanics, we describe stuff by saying it’s in some state which, as long as we don’t measure in what state exactly, is written as some linear combination of a set of base states. [And please do think about the words I highlight here: some state, measure, exactly. It all matters. Think about it!] The coefficients in that linear combination are complex-valued functions, which we referred to as wavefunctions, or (probability) amplitudes. To make a long story short, we wrote:

eq1

These Ci coefficients are a shorthand for 〈 i | ψ(t) 〉 amplitudes. As such, they give us the amplitude of the system to be in state i as a function of time. Their dynamics (i.e. the way they evolve in time) are governed by the Hamiltonian equations, i.e.:

Eq2

The Hij coefficients in this set of equations are organized in the Hamiltonian matrix, which Feynman refers to as the energy matrix, because these coefficients do represent energies indeed. So we applied all of this to two-state systems and, hence, things should not be too hard now, because it’s all the same, except that we have N base states now, instead of just two.

So we have an N×N matrix whose diagonal elements Hii are real numbers. The non-diagonal elements may be complex numbers but, if they are, the following rule applies: Hij* = Hji. [In case you wonder: that’s got to do with the fact that we can write any final 〈χ| or 〈φ| state as the conjugate transpose of the initial |χ〉 or |φ〉 state, so we can write: 〈χ| = |χ〉*, or 〈φ| = |φ〉*.]

As usual, the trick is to find those N Ci(t) functions: we do so by solving that set of N equations, assuming we know those Hamiltonian coefficients. [As you may suspect, the real challenge is to determine the Hamiltonian, which we assume to be given here. But… Well… You first need to learn how to model stuff. Once you get your degree, you’ll be paid to actually solve problems using those models. :-) ] We know the complex exponential is a functional form that usually does that trick. Hence, generalizing the results from our analysis of two-state systems once more, the following general solution is suggested:

Ci(t) = ai·e−i·(E/ħ)·t

Note that we introduce only one E variable here, but N ai coefficients, which may be real- or complex-valued. Indeed, my examples – see my previous posts – often involved real coefficients, but that’s not necessarily the case. Think of the C2(t) = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t] function describing one of the two base state amplitudes for the ammonia molecule—for example. :-)
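In fact, you can check numerically that the two-state ammonia amplitudes satisfy their Hamiltonian equations. Here’s a minimal sketch, with made-up E0 and A values, ħ set to 1, and H12 = H21 = −A assumed from the earlier posts:

```python
import cmath

# Two-state ammonia-style system (ħ = 1, H12 = H21 = -A, made-up values):
# the amplitudes C1(t) = exp(-i·E0·t)·cos(A·t) and C2(t) = i·exp(-i·E0·t)·sin(A·t)
# should satisfy i·dC1/dt = E0·C1 - A·C2 and i·dC2/dt = E0·C2 - A·C1.
E0, A = 2.0, 0.5

def C1(t):
    return cmath.exp(-1j * E0 * t) * cmath.cos(A * t)

def C2(t):
    return 1j * cmath.exp(-1j * E0 * t) * cmath.sin(A * t)

t, h = 0.8, 1e-6
dC1 = (C1(t + h) - C1(t - h)) / (2 * h)   # central-difference derivative
dC2 = (C2(t + h) - C2(t - h)) / (2 * h)

assert abs(1j * dC1 - (E0 * C1(t) - A * C2(t))) < 1e-6
assert abs(1j * dC2 - (E0 * C2(t) - A * C1(t))) < 1e-6
```

The numerical derivative is just a stand-in for the analytical one; the point is that the proposed functional form does solve the set of equations.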

Now, that proposed general solution allows us to calculate the derivatives in our Hamiltonian equations (i.e. the d[Ci(t)]/dt functions) as follows:

d[Ci(t)]/dt = −i·(E/ħ)·ai·e−i·(E/ħ)·t

You can now double-check that the set of equations reduces to the following:

Eq4

Please do write it out: because we have one E only, the e−i·(E/ħ)·t factor is common to all terms, and so we can cancel it. The other stuff is plain arithmetic: the i in the iħ·d[Ci(t)]/dt term and the −i from the derivative multiply to −i·i = −i2 = 1, and the ħ constants cancel out too. So there we are: we’ve got a very simple set of N equations here, with N unknowns (i.e. these a1, a2,…, aN coefficients, to be specific). We can re-write this system as:

Eq5

The δij here is the Kronecker delta, of course (it’s one for i = j and zero for i ≠ j), and we are now looking at a homogeneous system of equations here, i.e. a set of linear equations in which all the constant terms are zero. You should remember it from your high school math course. To be specific, you’d write it as Ax = 0, with A the coefficient matrix. The trivial solution is the zero solution, of course: all a1, a2,…, aN coefficients are zero. But we don’t want the trivial solution. Now, as Feynman points out – tongue-in-cheek, really – we actually have to be lucky to have a non-trivial solution. Indeed, you may or may not remember that the zero solution was actually the only solution if the determinant of the coefficient matrix was not equal to zero. So we only have a non-trivial solution if the determinant of A is equal to zero, i.e. if Det[A] = 0. So A has to be some so-called singular matrix. You’ll also remember that, in that case, we got an infinite number of solutions, to which we could apply the so-called superposition principle: if x and y are two solutions to the homogeneous set of equations Ax = 0, then any linear combination of x and y is also a solution. I wrote an addendum to this post (just scroll down and you’ll find it), which explains what systems of linear equations are all about, so I’ll refer you to that in case you’d need more detail here. I need to continue our story here. The bottom line is: the [Hij–δijE] matrix needs to be singular for the system to have meaningful solutions, so we will only have a non-trivial solution for those values of E for which

Det[Hij–δijE] = 0

Let’s spell it out. The condition above is the same as writing:

Eq7

So far, so good. What’s next? Well… The formula for the determinant is the following:

det physicists

That looks like a monster, and it is, but, in essence, what we’ve got here is an expression for the determinant in terms of the permutations of the matrix elements. This is not a math course so I’ll just refer you to Wikipedia for a detailed explanation of this formula for the determinant. The bottom line is: if we write it all out, then Det[Hij–δijE] is just an Nth order polynomial in E. In other words: it’s just a sum of products with powers of E up to EN, and so our Det[Hij–δijE] = 0 condition amounts to setting that polynomial equal to zero and finding its roots.

In general, we’ll have N roots, but – sorry you need to remember so much from your high school math classes here – some of them may be multiple roots (i.e. two or more roots may be equal). We’ll call those roots—you guessed it:

EI, EII,…, En,…, EN

Note I am following Feynman’s exposé, and so he uses n, rather than k, to denote the nth Roman numeral (as opposed to Arabic numerals). Now, I know your brain is near the melting point… But… Well… We’re not done yet. Just hang on. For each of these values E = EI, EII,…, En,…, EN, we have an associated set of solutions ai. As Feynman puts it: you get a set which belongs to En. In order to not forget that, for each En, we’re talking a set of N coefficients ai (i = 1, 2,…, N), we denote that set by ai(n), with a boldface n. So that’s why we use boldface for our index n: it’s special—and not only because it denotes a Roman numeral! It’s just one of Feynman’s many meaningful conventions.

Now remember that Ci(t) = ai·e−i·(E/ħ)·t formula. For each set of ai(n) coefficients, we’ll have a set of Ci(n) functions which, naturally, we can write as:

Ci(n) = ai(n)·e−i·(En/ħ)·t

So far, so good. We have N ai(n) coefficients and N Ci(n) functions. That’s easy enough to understand. Now we’ll also define a set of N new vectors, which we’ll write as |n〉, and which we’ll refer to as the state vectors that describe the configuration of the definite energy states En (n = I, II,… N). [Just breathe right now: I’ll (try to) explain this in a moment.] Moreover, we’ll write our set of coefficients ai(n) as 〈i|n〉. Again, the boldface n reminds us we’re talking a set of N complex numbers here. So we re-write that set of N Ci(n) functions as follows:

Ci(n) = 〈i|n〉·e−i·(En/ħ)·t

We can expand this as follows:

Ci(n) = 〈 i | ψn(t) 〉 = 〈 i | n 〉·e−i·(En/ħ)·t

which, of course, implies that:

| ψn(t) 〉 = |n〉·e−i·(En/ħ)·t

So now you may understand Feynman’s description of those |n〉 vectors somewhat better. As he puts it:

“The |n〉 vectors – of which there are N – are the state vectors that describe the configuration of the definite energy states En (n = I, II,… N), but have the time dependence factored out.”

Hmm… I know. This stuff is hard to swallow, but we’re not done yet: if your brain hasn’t melted yet, it may do so now. You’ll remember we talked about eigenvalues and eigenvectors in our post on the math behind the quantum-mechanical model of our ammonia molecule. Well… We can generalize the results we got there:

  1. The energies EI, EII,…, En,…, EN are the eigenvalues of the Hamiltonian matrix H.
  2. The state vectors |n〉 that are associated with each energy En, i.e. the set of vectors |n〉, are the corresponding eigenstates.
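For a two-state system, the Det[Hij–δijE] = 0 condition is just a quadratic equation, so the machinery can be sketched in a few lines of Python. The E0 and A values below are made up, and the ammonia-style Hamiltonian H11 = H22 = E0, H12 = H21 = −A is assumed from the earlier posts:

```python
import math

# For N = 2, Det[H_ij - δ_ij·E] = 0 is the quadratic
# E² - (H11 + H22)·E + (H11·H22 - H12·H21) = 0.
E0, A = 2.0, 0.5
H11, H12, H21, H22 = E0, -A, -A, E0

trace = H11 + H22
det = H11 * H22 - H12 * H21
disc = math.sqrt(trace ** 2 - 4 * det)

# The two roots are the definite-energy levels E0 - A and E0 + A:
E_I, E_II = (trace - disc) / 2, (trace + disc) / 2
print(E_I, E_II)
```

For larger N you’d hand the (Hermitian) matrix to a numerical eigenvalue routine rather than expand the determinant by hand, but the principle is exactly the one described above.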

So… Well… That’s it! We’re done! This is all there is to it. I know it’s a lot but… Well… We’ve got a general description of N-state systems here, and so that’s great!

Let me make some concluding remarks though.

First, note the following property: if we let the Hamiltonian matrix act on one of those state vectors |n〉, the result is just En times the same state. We write:

Eq-12

We’re writing nothing new here really: it’s just a consequence of the definition of eigenstates and eigenvalues. The more interesting thing is the following. When describing our two-state systems, we saw we could use the states that we associated with the EI and EII energies as a new base set. The same is true for N-state systems: the state vectors |n〉 can also be used as a base set. Of course, for that to be the case, all of the states must be orthogonal, meaning that for any two of them, say |n〉 and |m〉, the following equation must hold:

〈n|m〉 = 0

Feynman shows this will be true automatically if all the energies are different. If they’re not – i.e. if our polynomial in E would accidentally have two (or more) roots with the same energy – then things are more complicated. However, as Feynman points out, this problem can be solved by ‘cooking up’ two new states that do have the same energy but are also orthogonal. I’ll refer you to him for the detail, as well as for the proof of that 〈n|m〉 = 0 equation.

Finally, you should also note that – because of the homogeneity principle – it’s possible to multiply the N ai(n) coefficients by a suitable factor so that all the states are normalized, by which we mean:

〈n|n〉 = 1
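A quick numerical check of both properties, using the two-state ammonia example once more (that the eigenstates belonging to E0 − A and E0 + A are (1, 1)/√2 and (1, −1)/√2 in the {|1〉, |2〉} base is carried over, as an assumption, from the earlier posts):

```python
import math

# Eigenstates of the ammonia-style Hamiltonian, already normalized by the 1/√2 factor:
s = 1 / math.sqrt(2)
state_I = [s, s]     # |I>, belonging to E0 - A
state_II = [s, -s]   # |II>, belonging to E0 + A

def braket(m, n):
    """<m|n> = Σ_i m_i*·n_i (coefficients here are real, so no conjugation needed)."""
    return sum(mi * ni for mi, ni in zip(m, n))

assert abs(braket(state_I, state_II)) < 1e-12      # <I|II> = 0: orthogonal
assert abs(braket(state_I, state_I) - 1) < 1e-12   # <I|I> = 1: normalized
```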

Well… We’re done! For today, at least! :-)

Addendum on Systems of Linear Equations

It’s probably good to briefly remind you of your high school math class on systems of linear equations. First note the difference between homogeneous and non-homogeneous equations. Non-homogeneous equations have a non-zero constant term. The following three equations are an example of a non-homogeneous set of equations:

  • 3x + 2y − z = 1
  • 2x − 2y + 4z = −2
  • −x + y/2 − z = 0

We have a point solution here: (x, y, z) = (1, −2, −2). The geometry of the situation is something like this:

Secretsharing_3-point

One of the equations may be a linear combination of the two others. In that case, that equation can be removed without affecting the solution set. For the three-dimensional case, we get a line solution, as illustrated below.

Intersecting_Planes_2

Homogeneous and non-homogeneous sets of linear equations are closely related. If we write a homogeneous set as Ax = 0, then a non-homogeneous set of equations can be written as Ax = b. They are related. More in particular, the solution set for Ax = b is going to be a translation of the solution set for Ax = 0. We can write that more formally as follows:

If p is any specific solution to the linear system Ax = b, then the entire solution set can be described as {p + v|v is any solution to Ax = 0}
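The translation property is easy to verify numerically. Here’s a sketch using a made-up underdetermined system (two equations, three unknowns): if A·p = b and A·v = 0, then any p + v also solves A·x = b.

```python
# Hypothetical system: x + y + z = 3 and x - y = 1.
A = [[1, 1, 1],
     [1, -1, 0]]
b = [3, 1]

def matvec(M, x):
    """Plain matrix-vector product."""
    return [sum(Mij * xj for Mij, xj in zip(row, x)) for row in M]

p = [2, 1, 0]                  # one specific solution of A·x = b
for t in (0.0, 1.0, -3.5):     # v = t·(1, 1, -2) spans the solutions of A·x = 0
    v = [t, t, -2 * t]
    x = [pi + vi for pi, vi in zip(p, v)]
    # Every translate p + v solves the non-homogeneous system:
    assert all(abs(ri - bi) < 1e-12 for ri, bi in zip(matvec(A, x), b))
```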

The solution set for a homogeneous system is a linear subspace. In the example above, which had three variables and, hence, for which the vector space was three-dimensional, there were three possibilities: a point, line or plane solution. All are (linear) subspaces—although you’d want to drop the term ‘linear’ for the point solution, of course. :-) Formally, a subspace is defined as follows: if V is a vector space, then W is a subspace if and only if:

  1. The zero vector (i.e. 0) is in W.
  2. If x is an element of W, then any scalar multiple ax will be an element of W too (this is often referred to as the property of homogeneity).
  3. If x and y are elements of W, then the sum of x and y (i.e. x + y) will be an element of W too (this is referred to as the property of additivity).

As you can see, the superposition principle actually combines the properties of homogeneity and additivity: if x and y are solutions, then any linear combination of them will be a solution too.

The solution set for a non-homogeneous system of equations is referred to as a flat. It’s a subset too, so it’s like a subspace, except that it need not pass through the origin. Again, the flats in two-dimensional space are points and lines, while in three-dimensional space we have points, lines and planes. In general, we’ll have flats, and subspaces, of every dimension from 0 to n−1 in n-dimensional space.

OK. That’s clear enough, but what is all that talk about eigenstates and eigenvalues about? Mathematically, we define eigenvectors, also known as characteristic vectors, as follows:

  • The non-zero vector v is an eigenvector of a square matrix A if Av is a scalar multiple of v, i.e. Av = λv.
  • The associated scalar λ is known as the eigenvalue (or characteristic value) associated with the eigenvector v.

Now, in physics, we talk states, rather than vectors—although our states are vectors, of course. So we’ll call them eigenstates, rather than eigenvectors. But the principle is the same, really. Now, I won’t copy what you can find elsewhere—especially not in an addendum to a post, like this one. So let me just refer you elsewhere. Paul’s Online Math Notes, for example, are quite good on this—especially in the context of solving a set of differential equations, which is what we are doing here. And you can also find a more general treatment in the Wikipedia article on eigenvalues and eigenvectors which, while being general, highlights their particular use in quantum math.

Freewheeling once more…

You remember the elementary wavefunction Ψ(x, t) = Ψ(θ), with θ = ω·t−k∙x = (E/ħ)·t − (p/ħ)∙x = (E·t−p∙x)/ħ. Now, we can re-scale θ and define a new argument, which we’ll write as:

φ = ħ·θ = E·t−p∙x

The Ψ(θ) function can now be written as:

Ψ(x, t) = Ψ(θ) = ei·θ = ei·φ/ħ = [ei·φ]1/ħ = Φ(φ) with φ = E·t−p∙x

This doesn’t change the fundamentals: we’re just re-scaling E and p here, by measuring them in units of ħ. 

You’ll wonder: can we do that? We’re talking physics here, so our variables represent something real. Not everything we can do in math should be done in physics, right? So what does it mean? We need to look at the dimensions of our variables. Does it affect our time and distance units, i.e. the second and the meter? Well… I’d say it’s OK.

Energy is expressed in joule: 1 J = 1 N·m. [In SI base units, we write: J = N·m = (kg·m/s2)·m = kg·(m/s)2.] So if we divide it by ħ, whose dimension is joule-second (J·s), we get some value expressed per second, i.e. a (temporal) frequency. That’s what we want, as we’re multiplying it with t in the argument of our wavefunction!

Momentum is expressed in newton-second (N·s). Now, 1 J = 1 N·m, so 1 N = 1 J/m. Hence, if we divide the momentum value by ħ, we get some value expressed per meter: (N·s)/(J·s) = N/J = N/(N·m) = 1/m. So we get a spatial frequency here. That’s what we want, as we’re multiplying it with x!

So the answer is yes: we can re-scale energy and momentum and we get a temporal and spatial frequency respectively, which we can multiply with t and x respectively: we do not need to change our time and distance units when re-scaling E and p by dividing by ħ!
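In Python, with SI values (the ħ value is the exact 2019-SI one; the 1 eV energy and the momentum are arbitrary example values), the re-scaling looks like this:

```python
# Dividing an energy by ħ gives a temporal frequency (rad/s);
# dividing a momentum by ħ gives a spatial frequency (rad/m).
hbar = 1.054571817e-34   # reduced Planck constant, J·s
eV = 1.602176634e-19     # 1 electronvolt in J

E = 1.0 * eV             # an energy of 1 eV ...
omega = E / hbar         # ... becomes a temporal frequency, in rad/s

p = 1.0e-24              # a momentum in N·s (i.e. kg·m/s) ...
k = p / hbar             # ... becomes a spatial frequency, in rad/m

print(f"omega = {omega:.3e} rad/s, k = {k:.3e} rad/m")
```

Multiplying omega with a time and k with a distance then yields pure (dimensionless) numbers, which is exactly what the argument of a wavefunction should be.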

The next question is: if we express energy and momentum as temporal and spatial frequencies, do our E = m·c2 and p = m·v formulas still apply? They should: both c and v are expressed in meter per second (m/s) and, as mentioned above, the re-scaling does not affect our time and distance units. Hence, the energy-mass equivalence relation, and the definition of p (p = m·v), imply that we can re-write the argument (φ) of our ‘new’ wavefunction – i.e. Φ(φ) – as:

φ = E·t−p∙x = m·c2∙t − m∙v·x = m·c2·[t – (v/c)∙(x/c)]

In effect, when re-scaling our energy and momentum values, we’ve also re-scaled our unit of inertia, i.e. the unit in which we measure the mass m, which is directly related to both energy as well as momentum. To be precise, from a math point of view, m is nothing but a proportionality constant in both the E = m·c2 and p = m·v formulas.

The next step is to fiddle with the time and distance units. If we

  1. measure x and t in equivalent units (so c = 1);
  2. denote v/c by β; and
  3. re-use the x symbol to denote x/c (that’s just to simplify by saving symbols);

we get:

φ = m·(t–β∙x)

This argument is the product of two factors: (1) m and (2) t–β∙x.

  1. The first factor – i.e. the mass m – is an inherent property of the particle that we’re looking at: it measures its inertia, i.e. the key variable in any dynamical model (i.e. any model – classical or quantum-mechanical – representing the motion of the particle).
  2. The second factor – i.e. t–β∙x – reminds one of the argument of the wavefunction that’s used in classical mechanics, i.e. x–vt, with v the velocity of the wave. Of course, we should note two major differences between the t–β∙x and x–vt expressions:
  1. β is a relative velocity (i.e. a ratio between 0 and 1), while v is an absolute velocity (i.e. a number between 0 and ≈ 299,792,458 m/s).
  2. The t–β∙x expression switches the roles of the time and distance variables as compared to the x–vt expression.

Both differences are important, but let’s focus on the second one. From a math point of view, the t–β∙x and x–vt expressions are equivalent. However, time is time, and distance is distance—in physics, that is. So what can we conclude here? To answer that question, let’s re-analyze the x–vt expression. Remember its origin: if we have some wave function F(x–vt), and we add some time Δt to its argument – so we’re looking at F[x−v(t+Δt)] now, instead of F(x−vt) – then we can restore it to its former value by also adding some distance Δx = v∙Δt to the argument: indeed, if we do so, we get F[x+Δx−v(t+Δt)] = F(x+vΔt–vt−vΔt) = F(x–vt). Of course, we can do the same analysis the other way around, so we add some Δx and then… Well… You get the idea.

Can we do that for the F(t–β∙x) expression too? Sure. If we add some Δt to its argument, then we can restore it to its former value by also adding some distance Δx = Δt/β. Just check it: F[(t+Δt)–β(x+Δx)] = F(t+Δt–βx−βΔx) = F(t+Δt–βx−βΔt/β) = F(t–β∙x).
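That restore-the-argument check is easy to verify numerically; the cosine and the numbers below are arbitrary stand-ins for the waveform and the shifts:

```python
import math

# Adding Δt to t and Δx = Δt/β to x leaves t - β·x (and hence F) unchanged.
def F(arg):
    return math.cos(arg)   # any single-argument function will do

beta = 0.6
t, x = 2.0, 1.0
dt = 0.25
dx = dt / beta             # the compensating distance shift

before = F(t - beta * x)
after = F((t + dt) - beta * (x + dx))
assert abs(before - after) < 1e-12
```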

So the mathematical equivalence between the t–β∙x and x–vt expressions is surely meaningful. The F(x–vt) function uniquely determines the waveform and, as part of that determination (or definition, if you want), it also defines its velocity v. Likewise, we can say that the Φ(φ) = Φ[m·(t–β∙x)] function defines the (relative) velocity (β) of the particle that we’re looking at—quantum-mechanically, that is.

You’ll say: we’ve got two variables here: m and β. Well… Yes and no. We can look at m as an independent variable here. In fact, if you want, we could define yet another variable – χ = φ/m = t–β∙x – and, hence, yet another wavefunction here:

Ψ(θ) = ei·θ = [ei·φ]1/ħ = Φ(φ) = Χ(χ) = [ei·φ/m]m/ħ = [ei·χ]m/ħ = [ei·ħ·θ/m]m/ħ

Does that make sense? Maybe. Think of it: the spatial dimension of the wave pulse F(x–vt) – if you don’t know what I am talking about: just think of its ‘position’ – is defined by its velocity v = x/t, which – from a math point of view – is equivalent to stating: x – v∙t = 0. Likewise, if we look at our wavefunction as some pulse in space, then its spatial dimension would also be defined by its (relative) velocity, which corresponds to the classical (relative) velocity of the particle we’re looking at. So… Well… As I said, I’ll let you think of all this.

Post Scriptum:

  1. You may wonder what that m/ħ factor in that Χ(χ) = [ei·χ]m/ħ = [ei·(t–β∙x)]m/ħ function actually stands for. Well… If we measure time and distance in equivalent units (so c = 1 and, therefore, E = m), then m/ħ is just the energy measured in units of ħ, i.e. the (angular) frequency E/ħ that multiplies the t – β∙x factor. So… Well… I don’t think we can say much more about it.
  2. Another thing you may want to think about is the relativistic transformation of the wavefunction. You know that we should correct Newton’s Law of Motion for velocities approaching c. We do so by integrating the Lorentz factor. In light of the fact that we’re using the relative velocity (β) in our wave function, do you think we still need to apply such corrections for the wavefunction? What’s your guess? :-)

The Hamiltonian revisited

I want to come back to something I mentioned in a previous post: when looking at that formula for those Uij amplitudes—which I’ll jot down once more:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt ⇔ Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt

—I noted that it resembles the general y(t + Δt) = y(t) + Δy = y(t) + (dy/dt)·Δt formula. So we can look at our Kij(t) function as being equal to the time derivative of the Uij(t + Δt, t) function. I want to re-visit that here, as it triggers a whole range of questions, which may or may not help to understand quantum math somewhat more intuitively.  Let’s quickly sum up what we’ve learned so far: it’s basically all about quantum-mechanical stuff that does not move in space. Hence, the x in our wavefunction ψ(x, t) is some fixed point in space and, therefore, our elementary wavefunction—which we wrote as:

ψ(x, t) = a·e−i·θ = a·e−i·(ω·t − k∙x) = a·e−i·[(E/ħ)·t − (p/ħ)∙x]

—reduces to ψ(t) = a·e−i·ω·t = a·e−i·(E/ħ)·t.

Unlike what you might think, we’re not equating x with zero here. No. It’s the p = m·v factor that becomes zero, because our reference frame is that of the system that we’re looking at, so its velocity is zero: it doesn’t move in our reference frame. That immediately answers an obvious question: does our wavefunction look any different when choosing another reference frame? The answer is obviously: yes! It surely matters if the system moves or not, and it also matters how fast it moves, because it changes the energy and momentum values from E and p to some E’ and p’. However, we’ll not consider such complications here: that’s the realm of relativistic quantum mechanics. Let’s start with the simplest of situations.

A simple two-state system

One of the simplest examples of a quantum-mechanical system that does not move in space, is the textbook example of the ammonia molecule. The picture was as simple as the one below: an ammonia molecule consists of one nitrogen atom and three hydrogen atoms, and the nitrogen atom could be ‘up’ or ‘down’ with regard to the motion of the NH3 molecule around its axis of symmetry, as shown below.

Capture

It’s important to note that this ‘up’ or ‘down’ direction is, once again, defined with respect to the reference frame of the system itself. The motion of the molecule around its axis of symmetry is referred to as its spin—a term that’s used in a variety of contexts and, therefore, is annoyingly ambiguous. When we use the term ‘spin’ (up or down) to describe an electron state, for example, we’d associate it with the direction of its magnetic moment. Such magnetic moment arises from the fact that, for all practical purposes, we can think of an electron as a spinning electric charge. Now, while our ammonia molecule is electrically neutral, as a whole, the two states are actually associated with opposite electric dipole moments, as illustrated below. Hence, when we apply an electric field (denoted as ε), the two states are effectively associated with different energy levels, which we wrote as E0 ± εμ.

ammonia

But we’re getting ahead of ourselves here. Let’s revert to the system in free space, i.e. without an electromagnetic force field—or, what amounts to saying the same, without potential. Now, the ammonia molecule is a quantum-mechanical system, and so there is some amplitude for the nitrogen atom to tunnel through the plane of hydrogens. I told you before that this is the key to understanding quantum mechanics really: there is an energy barrier there and, classically, the nitrogen atom should not sneak across. But it does. It’s like it can borrow some energy – which we denote by A – to penetrate the energy barrier.

In quantum mechanics, the dynamics of this system are modeled using a set of two differential equations. These differential equations are really the equivalent of Newton’s classical Law of Motion (I am referring to the F = m·(dv/dt) = m·a equation here) in quantum mechanics, so I’ll have to explain them—which is not so easy as explaining Newton’s Law, because we’re talking complex-valued functions, but… Well… Let me first insert the solution of that set of differential equations:

graph

This graph shows how the probability of the nitrogen atom (or the ammonia molecule itself) being in state 1 (i.e. ‘up’) or, else, in state 2 (i.e. ‘down’), varies sinusoidally in time. Let me also give you the equations for the amplitudes to be in state 1 or 2 respectively:

  1. C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t + (1/2)·e−(i/ħ)·(E0 + A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
  2. C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t – (1/2)·e−(i/ħ)·(E0 + A)·t = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

So the P1(t) and P2(t) probabilities above are just the absolute square of these C1(t) and C2(t) functions. So as to help you understand what’s going on here, let me quickly insert the following technical remarks:

  • In case you wonder how we go from those exponentials to a simple sine and cosine factor, remember that the sum eiθ + e−iθ reduces to 2·cosθ, while the difference eiθ − e−iθ reduces to 2·i·sinθ.
  • As for how to take the absolute square… Well… I shouldn’t be explaining that here, but you should be able to work that out remembering that (i) |a·b·c|2 = |a|2·|b|2·|c|2; (ii) |eiθ|2 = |e−iθ|2 = 12 = 1 (for any value of θ); and (iii) |i|2 = 1.
  • As for the periodicity of both probability functions, note that the period of the squared sine and cosine functions is equal to π. Hence, the argument of our sine and cosine function will be equal to 0, π, 2π, 3π etcetera if (A/ħ)·t = 0, π, 2π, 3π etcetera, i.e. if t = 0·ħ/A, π·ħ/A, 2π·ħ/A, 3π·ħ/A etcetera. So that’s why we measure time in units of ħ/A above.
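You can also verify numerically that these amplitudes yield probabilities that add up to one at all times. A quick sketch, with made-up E0 and A values and ħ set to 1:

```python
import cmath, math

# C1(t) = exp(-i·E0·t)·cos(A·t) and C2(t) = i·exp(-i·E0·t)·sin(A·t);
# their absolute squares are cos²(A·t) and sin²(A·t), so P1 + P2 = 1.
E0, A = 2.0, 0.5
for t in (0.0, 0.3, 1.7, math.pi / A):
    C1 = cmath.exp(-1j * E0 * t) * math.cos(A * t)
    C2 = 1j * cmath.exp(-1j * E0 * t) * math.sin(A * t)
    P1, P2 = abs(C1) ** 2, abs(C2) ** 2
    assert abs(P1 + P2 - 1) < 1e-12
```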

The graph above is actually tricky to interpret, as it assumes that we know in what state the molecule starts out at t = 0. This assumption is tricky because we usually do not know that: we have to make some observation which, curiously enough, will always yield one of the two states—nothing in-between. Or, else, we can use a state selector—an inhomogeneous electric field which will separate the ammonia molecules according to their state. It’s a weird thing really, and it summarizes all of the ‘craziness’ of quantum mechanics: as long as we don’t measure anything – by applying that force field – our molecule is in some kind of abstract state, which mixes the two base states. But when we do make the measurement, always along some specific direction (which we usually take to be the z-direction in our reference frame), we’ll always find the molecule is either ‘up’ or, else, ‘down’. We never measure it as something in-between. Personally, I like to think the measurement apparatus – I am talking the electric field here – causes the nitrogen atom to sort of ‘snap into place’. However, physicists use more precise language here: they would say that the electric field does result in the two positions having very different energy levels (E0 + εμ and E0 – εμ, to be precise) and that, as a result, the amplitude for the nitrogen atom to flip back and forth has little effect. Now how do we model that?

The Hamiltonian equations

I shouldn’t be using the term above, as it usually refers to a set of differential equations describing classical systems. However, I’ll also use it for the quantum-mechanical analog, which amounts to the following for our simple two-state example above:

Hamiltonian maser

Don’t panic. We’ll explain. The equations above are all the same but use different formats: the first block writes them as a set of equations, while the second uses the matrix notation, which involves the use of that rather infamous Hamiltonian matrix, which we denote by H = [Hij]. Now, we’ve postponed a lot of technical stuff, so… Well… We can’t avoid it any longer. Let’s look at those Hamiltonian coefficients Hij first. Where do they come from?

You’ll remember we thought of time as some kind of apparatus, with particles entering in some initial state φ and coming out in some final state χ. Both are to be described in terms of our base states. To be precise, we associated the (complex) coefficients C1 and C2 with |φ〉 and D1 and D2 with |χ〉. However, the χ state is a final state, so we have to write it as 〈χ| = |χ〉† (read: chi dagger). The dagger symbol tells us we need to take the conjugate transpose of |χ〉, so the column vector becomes a row vector, and its coefficients are the complex conjugate of D1 and D2, which we denote as D1* and D2*. We combined this with Dirac’s bra-ket notation for the amplitude to go from one base state to another, as a function in time (or a function of time, I should say):

Uij(t + Δt, t) = 〈i|U(t + Δt, t)|j〉

This allowed us to write the following matrix equation:

U coefficients

To see what it means, you should write it all out:

〈χ|U(t + Δt, t)|φ〉 = D1*·(U11(t + Δt, t)·C1 + U12(t + Δt, t)·C2) + D2*·(U21(t + Δt, t)·C1 + U22(t + Δt, t)·C2)

= D1*·U11(t + Δt, t)·C+ D1*·U12(t + Δt, t)·C+ D2*·U21(t + Δt, t)·C+ D2*·U22(t + Δt, t)·C2

It’s a horrendous expression, but it’s a complex-valued amplitude or, quite simply, a complex number. So this is not nonsensical. We can now take the next step, and that’s to go from those Uij amplitudes to the Hij amplitudes of the Hamiltonian matrix. The key is to consider the following: if Δt goes to zero, nothing happens, so we write: Uij = 〈i|U|j〉 → 〈i|j〉 = δij for Δt → 0, with δij = 1 if i = j, and δij = 0 if i ≠ j. We then assume that, for small Δt, those Uij amplitudes should differ from δij (i.e. from 1 or 0) by amounts that are proportional to Δt. So we write:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt

We then equated those Kij(t) factors with − (i/ħ)·Hij(t), and we were done: Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt. […] Well… I show you how we get those differential equations in a moment. Let’s pause here for a while to see what’s going on really. You’ll probably remember how one can mathematically ‘construct’ the complex exponential eiθ by using the linear approximation eiε = 1 + iε near θ = 0 and for infinitesimally small values of ε. In case you forgot, we basically used the definition of the derivative of the real exponential eε for ε going to zero:

Formula

So we’ve got something similar here for U11(t + Δt, t) = 1 − i·[H11(t)/ħ]·Δt and U22(t + Δt, t) = 1 − i·[H22(t)/ħ]·Δt. Just replace the ε in eiε = 1 + iε by ε = − (E0/ħ)·Δt. Indeed, we know that H11 = H22 = E0, and E0/ħ is, of course, just the energy measured in (reduced) Planck units, i.e. in its natural unit. Hence, if our ammonia molecule is in one of the two base states, we start at θ = 0 and then we just start moving on the unit circle, clockwise, because of the minus sign in e−i·θ. Let’s write it out:

U11(t + Δt, t) = 1 − i·[H11(t)/ħ]·Δt = 1 − i·[E0/ħ]·Δt and

U22(t + Δt, t) = 1 − i·[H22(t)/ħ]·Δt = 1 − i·[E0/ħ]·Δt

But what about U12 and U21? Is there a similar interpretation? Let’s write those equations down and think about them:

U12(t + Δt, t) = 0 − i·[H12(t)/ħ]·Δt = 0 + i·[A/ħ]·Δt and

U21(t + Δt, t) = 0 − i·[H21(t)/ħ]·Δt = 0 + i·[A/ħ]·Δt

We can visualize this as follows:

circle

Let’s remind ourselves of the definition of the derivative of a function by looking at the illustration below:

izvod

The f(x0) value in this illustration corresponds to the Uij(t, t), obviously. So now things make somewhat more sense: U11(t, t) = U22(t, t) = 1, obviously, and U12(t, t) = U21(t, t) = 0. We then add the ΔUij(t + Δt, t) to Uij(t, t). Hence, we can, and probably should, think of those Kij(t) coefficients as the derivative of the Uij(t, t) functions with respect to time. So we can write something like this:

H and U

These derivatives are pure imaginary numbers. That does not mean that the Uij(t + Δt, t) functions are purely imaginary: U11(t + Δt, t) and U22(t + Δt, t) can be approximated by 1 − i·[E0/ħ]·Δt for small Δt, so they do have a real part. In contrast, U12(t + Δt, t) and U21(t + Δt, t) are, effectively, purely imaginary (for small Δt, that is).
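The idea that Uij(t + Δt, t) ≈ δij − i·(Hij/ħ)·Δt compounds into the full exponential can be checked numerically: chaining many small steps of 1 − i·θ/n reproduces e−i·θ. A sketch, with an arbitrary θ value:

```python
import cmath

# Compounding the linear approximation 1 - i·θ/n over n small steps
# reproduces the full exponential e^(-i·θ) in the limit of large n.
theta = 1.2          # stands for (E0/ħ)·t, say
n = 1_000_000        # number of Δt steps
step = 1 - 1j * theta / n

u = step ** n                     # compounded evolution over n steps
exact = cmath.exp(-1j * theta)    # the full exponential
assert abs(u - exact) < 1e-5
```

The residual error shrinks like θ²/(2n), so the more finely you slice the time interval, the closer the chained infinitesimal rotations get to the exact clockwise motion on the unit circle.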

I can’t help thinking these formulas reflect a deep and beautiful geometry, but its meaning escapes me so far. :-( When everything is said and done, none of the reflections above makes things much more intuitive: these wavefunctions remain as mysterious as ever.

I keep staring at those P1(t) and P2(t) functions, and the C1(t) and C2(t) functions that ‘generate’ them, so to speak. They’re not independent, obviously. In fact, they’re exactly the same, except for a phase difference, which corresponds to the phase difference between the sine and cosine. So it’s all one reality, really: all can be described in one single functional form, so to speak. I hope things become more obvious as I move forward. :-/

Post scriptum: I promised I’d show you how to get those differential equations but… Well… I’ve done that in other posts, so I’ll refer you to one of those. Sorry for not repeating myself. :-)

The de Broglie relations, the wave equation, and relativistic length contraction

You know the two de Broglie relations, also known as matter-wave equations:

f = E/h and λ = h/p

You’ll find them in almost any popular account of quantum mechanics, and the writers of those popular books will tell you that f is the frequency of the ‘matter-wave’, and λ is its wavelength. In fact, to add some more weight to their narrative, they’ll usually write them in a somewhat more sophisticated form: they’ll write them using ω and k. The omega symbol (using a Greek letter always makes a big impression, doesn’t it?) denotes the angular frequency, while k is the so-called wavenumber. Now, k = 2π/λ and ω = 2π·f and, therefore, using the definition of the reduced Planck constant, i.e. ħ = h/2π, they’ll write the same relations as:

  1. λ = h/p = 2π/k ⇔ k = 2π·p/h
  2. f = E/h = (ω/2π)

⇒ k = p/ħ and ω = E/ħ
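To make these relations tangible, here’s a quick numerical check (illustrative SI values for a slow electron, so we can use the non-relativistic formulas):

```python
import math

h = 6.62607015e-34          # Planck's constant (J*s)
hbar = h / (2 * math.pi)    # reduced Planck constant

# Illustrative example: an electron at 1% of the speed of light
m_e = 9.1093837e-31         # electron mass (kg)
v = 0.01 * 299792458.0      # velocity (m/s)
p = m_e * v                 # momentum (non-relativistic, v << c)
E = p**2 / (2 * m_e)        # kinetic energy (J)

f = E / h                   # de Broglie: f = E/h
lam = h / p                 # de Broglie: lambda = h/p

# The 'angular' forms k = p/hbar and omega = E/hbar are the same relations
assert math.isclose(2 * math.pi / lam, p / hbar, rel_tol=1e-12)
assert math.isclose(2 * math.pi * f, E / hbar, rel_tol=1e-12)
```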

They’re the same thing: it’s just that working with angular frequencies and wavenumbers is more convenient, from a mathematical point of view that is: it’s why we prefer expressing angles in radians rather than in degrees (k is expressed in radians per meter, while ω is expressed in radians per second). In any case, the ‘matter wave’ – even Wikipedia uses that term now – is, of course, the amplitude, i.e. the wave-function ψ(x, t), which has a frequency and a wavelength, indeed. In fact, as I’ll show in a moment, it’s got two frequencies: one temporal, and one spatial. I am modest and, hence, I’ll admit it took me quite a while to fully distinguish the two frequencies, and so that’s why I always had trouble connecting these two ‘matter wave’ equations.

Indeed, if they represent the same thing, they must be related, right? But how exactly? It should be easy enough. The wavelength and the frequency must be related through the wave velocity, so we can write: f·λ = v, with v the velocity of the wave, which must be equal to the classical particle velocity, right? And then momentum and energy are also related. To be precise, we have the relativistic energy-momentum relationship: p·c = mv·v·c = mv·c2·v/c = E·v/c. So it’s just a matter of substitution. We should be able to go from one equation to the other, and vice versa. Right?

Well… No. It’s not that simple. We can start with either of the two equations but it doesn’t work. Try it. Whatever substitution you try, there’s no way you can derive one of the two equations above from the other. The fact that it’s impossible is evidenced by what we get when we’d multiply both equations. We get:

  1. f·λ = (E/h)·(h/p) = E/p
  2. v = f·λ  ⇒ f·λ = v = E/p ⇔ E = v·p = v·(m·v)

⇒ E = m·v2

Huh? What kind of formula is that? E = m·v2? That’s a formula you’ve never ever seen, have you? It reminds you of the kinetic energy formula of course—K.E. = m·v2/2—but… That factor 1/2 should not be there. Let’s think about it for a while. First note that this E = m·v2 relation makes perfect sense if v = c. In that case, we get Einstein’s mass-energy equivalence (E = m·c2), but that’s beside the point here. The point is: if v = c, then our ‘particle’ is a photon, really, and then the E = h·f is referred to as the Planck-Einstein relation. The wave velocity is then equal to c and, therefore, f·λ = c, and so we can effectively substitute to find what we’re looking for:

E/p = (h·f)/(h/λ) = f·λ = c ⇒ E = p·c

So that’s fine: we just showed that the de Broglie relations are correct for photons. [You remember that E = p·c relation, no? If not, check out my post on it.] However, while that’s all nice, it is not what the de Broglie equations are about: we’re talking the matter-wave here, and so we want to do something more than just re-confirm that Planck-Einstein relation, which you can interpret as the limit of the de Broglie relations for v = c. In short, we’re doing something wrong here! Of course, we are. I’ll tell you what exactly in a moment: it’s got to do with the fact we’ve got two frequencies really.

Let’s first try something else. We’ve been using the relativistic E = mv·c2 equation above. Let’s try some other energy concept: let’s substitute the E in the f = E/h by the kinetic energy and then see where we get—if anywhere at all. So we’ll use the Ekinetic = m∙v2/2 equation. We can then use the definition of momentum (p = m∙v) to write E = p2/(2m), and then we can relate the frequency f to the wavelength λ using the v = λ∙f formula once again. That should work, no? Let’s do it. We write:

  1. E = p2/(2m)
  2. E = h∙f = h·v/λ

⇒ λ = h·v/E = h·v/(p2/(2m)) = h·v/[m2·v2/(2m)] = h/[m·v/2] = 2∙h/p

So we find λ = 2∙h/p. That is almost right, but not quite: that factor 2 should not be there. Well… Of course you’re smart enough to see it’s just that factor 1/2 popping up once more—but as a reciprocal, this time around. :-) So what’s going on? The honest answer is: you can try anything but it will never work, because the f = E/h and λ = h/p equations cannot be related—or at least not so easily. The substitutions above only work if we use that E = m·v2 energy concept which, you’ll agree, doesn’t make much sense—at first, at least. Again: what’s going on? Well… Same honest answer: the f = E/h and λ = h/p equations cannot be related—or at least not so easily—because the wave equation itself is not so easy.
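You can trace that stubborn factor 2 numerically too. The sketch below (illustrative values) follows exactly the substitution route above:

```python
import math

# Illustrative values: an electron-sized mass at some non-relativistic speed
m, v, h = 9.1093837e-31, 1.0e6, 6.62607015e-34

p = m * v                   # momentum p = m*v
E_kin = p**2 / (2 * m)      # kinetic energy E = p^2/(2m)
f = E_kin / h               # de Broglie: f = E/h
lam = v / f                 # naive substitution via v = f*lambda

# The naive route yields lambda = 2*h/p -- off by that factor 2
assert math.isclose(lam, 2 * h / p, rel_tol=1e-12)
```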

Let’s review the basics once again.

The wavefunction

The amplitude of a particle is represented by a wavefunction. If we have no information whatsoever on its position, then we usually write that wavefunction as the following complex-valued exponential:

ψ(x, t) = a·e−i·[(E/ħ)·t − (p/ħ)∙x] = a·e−i·(ω·t − k∙x) = a·ei·(k∙x − ω·t) = a·eiθ = a·(cosθ + i·sinθ)

θ is the so-called phase of our wavefunction and, as you can see, it’s the argument of a wavefunction indeed, with temporal frequency ω and spatial frequency k (if we choose our x-axis so its direction is the same as the direction of k, then we can substitute the k and x vectors for the k and x scalars, so that’s what we’re doing here). Now, we know we shouldn’t worry too much about a, because that’s just some normalization constant (remember: all probabilities have to add up to one). However, let’s quickly develop some logic here. Taking the absolute square of this wavefunction gives us the probability of our particle being somewhere in space at some point in time. So we get the probability as a function of x and t. We write:

P(x, t) = |a·ei·[(E/ħ)·t − (p/ħ)∙x]|2 = a2

As all probabilities have to add up to one, we must assume we’re looking at some box in spacetime here. So, if the length of our box is Δx = x2 − x1, then (Δx)·a2 = (x2 − x1)·a2 = 1 ⇔ Δx = 1/a2. [We obviously simplify the analysis by assuming a one-dimensional space only here, but the gist of the argument is essentially correct.] So, freezing time (i.e. equating t to some point t = t0), we get the following probability density function:

Capture

That’s simple enough. The point is: the two de Broglie equations f = E/h and λ = h/p give us the temporal and spatial frequencies in that ψ(x, t) = a·ei·[(E/ħ)·t − (p/ħ)∙x] relation. As you can see, that’s an equation that implies a much more complicated relationship between E/ħ = ω and p/ħ = k. Or… Well… Much more complicated than what one would think of at first.
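To make the normalization argument concrete, here’s a minimal sketch (natural units, illustrative values for E, p and a): the squared magnitude of the wavefunction is flat, so the box length Δx must equal 1/a²:

```python
import cmath, math

# Illustrative values in natural units (hbar = 1)
E, p, hbar = 1.0, 0.5, 1.0
a = 2.0

def psi(x, t):
    # psi(x, t) = a * exp(-i*((E/hbar)*t - (p/hbar)*x))
    return a * cmath.exp(-1j * ((E / hbar) * t - (p / hbar) * x))

# |psi|^2 = a^2 everywhere: the probability density is flat
for x in (-1.0, 0.0, 3.7):
    assert math.isclose(abs(psi(x, 0.0))**2, a**2, rel_tol=1e-12)

# Normalization over a box of length dx requires dx * a^2 = 1, i.e. dx = 1/a^2
dx = 1.0 / a**2
assert math.isclose(dx * a**2, 1.0)
```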

To appreciate what’s being represented here, it’s good to play a bit. We’ll continue with our simple exponential above, which also illustrates how we usually analyze those wavefunctions: we either assume we’re looking at the wavefunction in space at some fixed point in time (t = t0) or, else, at how the wavefunction changes in time at some fixed point in space (x = x0). Of course, we know that Einstein told us we shouldn’t do that: space and time are related and, hence, we should try to think of spacetime, i.e. some ‘kind of union’ of space and time—as Minkowski famously put it. However, when everything is said and done, mere mortals like us are not so good at that, and so we’re sort of condemned to try to imagine things using the classical cut-up of things. :-) So we’ll just use an online graphing tool to play with that a·ei(k∙x−ω·t) = a·eiθ = a·(cosθ + i·sinθ) formula.

Compare the following two graphs, for example. Just imagine we either look at how the wavefunction behaves at some point in space, with the time fixed at some point t = t0, or, alternatively, that we look at how the wavefunction behaves in time at some point in space x = x0. As you can see, increasing k = p/ħ or increasing ω = E/ħ gives the wavefunction a higher ‘density’ in space or, alternatively, in time.

density 1

density 2

That makes sense, intuitively. In fact, when thinking about how the energy, or the momentum, affects the shape of the wavefunction, I am reminded of an airplane propeller: as it spins, faster and faster, it gives the propeller some ‘density’, in space as well as in time, as its blades cover more space in less time. It’s an interesting analogy: it helps—me, at least—to think through what that wavefunction might actually represent.

propeller

So as to stimulate your imagination even more, you should also think of representing the real and imaginary part of that ψ = a·ei(k∙x−ω·t) = a·eiθ = a·(cosθ + i·sinθ) formula in a different way. In the graphs above, we just showed the sine and cosine in the same plane but, as you know, the real and the imaginary axis are orthogonal, so Euler’s formula a·eiθ = a·(cosθ + i·sinθ) = a·cosθ + i·a·sinθ = Re(ψ) + i·Im(ψ) may also be graphed as follows:

5d_euler_f

The illustration above should make you think of yet another illustration you’ve probably seen like a hundred times before: the electromagnetic wave, propagating through space as the magnetic and electric field induce each other, as illustrated below. However, there’s a big difference: Euler’s formula incorporates a phase shift—remember: sinθ = cos(θ − π/2)—and you don’t have that in the graph below. The difference is much more fundamental, however: it’s really hard to see how one could possibly relate the magnetic and electric field to the real and imaginary part of the wavefunction respectively. Having said that, the mathematical similarity makes one think!

FG02_06

Of course, you should remind yourself of what E and B stand for: they represent the strength of the electric (E) and magnetic (B) field at some point x at some time t. So you shouldn’t think of those wavefunctions above as occupying some three-dimensional space. They don’t. Likewise, our wavefunction ψ(x, t) does not occupy some physical space: it’s some complex number—an amplitude that’s associated with each and every point in spacetime. Nevertheless, as mentioned above, the visuals make one think and, as such, do help us as we try to understand all of this in a more intuitive way.

Let’s now look at that energy-momentum relationship once again, but using the wavefunction, rather than those two de Broglie relations.

Energy and momentum in the wavefunction

I am not going to talk about uncertainty here. You know that Spiel. If there’s uncertainty, it’s in the energy or the momentum, or in both. The uncertainty determines the size of that ‘box’ (in spacetime) in which we hope to find our particle, and it’s modeled by a splitting of the energy levels. We’ll say the energy of the particle may be E0, but it might also be some other value, which we’ll write as En = E0 ± n·ħ. The thing to note is that energy levels will always be separated by some integer multiple of ħ, so ħ is, effectively, the quantum of energy for all practical—and theoretical—purposes. We then super-impose the various wave equations to get a wave function that might—or might not—resemble something like this:

Photon wave

Who knows? :-) In any case, that’s not what I want to talk about here. Let’s repeat the basics once more: if we write our wavefunction a·ei·[(E/ħ)·t − (p/ħ)∙x] as a·ei·[ω·t − k∙x], we refer to ω = E/ħ as the temporal frequency, i.e. the frequency of our wavefunction in time (the frequency it has if we keep the position fixed), and to k = p/ħ as the spatial frequency, i.e. the frequency of our wavefunction in space (so now we stop the clock and just look at the wave in space). Now, let’s think about the energy concept first. The energy of a particle is generally thought of as consisting of three parts:

  1. The particle’s rest energy m0c2, which de Broglie referred to as internal energy (Eint): it includes the rest mass of the ‘internal pieces’, as Feynman puts it (now we call those ‘internal pieces’ quarks), as well as their binding energy (i.e. the quarks’ interaction energy);
  2. Any potential energy it may have because of some field (so de Broglie was not assuming the particle was traveling in free space), which we’ll denote by U, and note that the field can be anything—gravitational, electromagnetic: it’s whatever changes the energy because of the position of the particle;
  3. The particle’s kinetic energy, which we write in terms of its momentum p: m·v2/2 = m2·v2/(2m) = (m·v)2/(2m) = p2/(2m).

So we have one energy concept here (the rest energy) that does not depend on the particle’s position in spacetime, and two energy concepts that do depend on position (potential energy) and/or how that position changes because of its velocity and/or momentum (kinetic energy). The last two bits are related through the energy conservation principle. The total energy is E = mvc2, of course—with the little subscript (v) ensuring the mass incorporates the equivalent mass of the particle’s kinetic energy.

So what? Well… In my post on quantum tunneling, I drew attention to the fact that different potentials, so different potential energies (indeed, as our particle travels from one region to another, the field is likely to vary), have no impact on the temporal frequency. Let me re-visit the argument, because it’s an important one. Imagine two different regions in space that differ in potential—because the field has a larger or smaller magnitude there, or points in a different direction, or whatever: just different fields, which corresponds to different values for U1 and U2, i.e. the potential in region 1 versus region 2. Now, the different potential will change the momentum: the particle will accelerate or decelerate as it moves from one region to the other, so we also have a different p1 and p2. Having said that, the internal energy doesn’t change, so we can write the corresponding amplitudes, or wavefunctions, as:

  1. ψ1(θ1) = Ψ1(x, t) = a·e−iθ1 = a·e−i[(Eint + p12/(2m) + U1)·t − p1∙x]/ħ
  2. ψ2(θ2) = Ψ2(x, t) = a·e−iθ2 = a·e−i[(Eint + p22/(2m) + U2)·t − p2∙x]/ħ

Now how should we think about these two equations? We are definitely talking different wavefunctions. However, their temporal frequencies ω1 = [Eint + p12/(2m) + U1]/ħ and ω2 = [Eint + p22/(2m) + U2]/ħ must be the same. Why? Because of the energy conservation principle—or its equivalent in quantum mechanics, I should say: the temporal frequency f or ω, i.e. the time-rate of change of the phase of the wavefunction, does not change: all of the change in potential, and the corresponding change in kinetic energy, goes into changing the spatial frequency, i.e. the wave number k or the wavelength λ, as potential energy becomes kinetic or vice versa. The sum of the potential and kinetic energy doesn’t change, indeed. So the energy remains the same and, therefore, the temporal frequency does not change. In fact, we need this quantum-mechanical equivalent of the energy conservation principle to calculate how the momentum and, hence, the spatial frequency of our wavefunction, changes. We do so by boldly equating ω1 and ω2, and so we write:

ω1 = ω2 ⇔ Eint + p12/(2m) + U1 = Eint + p22/(2m) + U2

⇔ p12/(2m) − p22/(2m) = U2 − U1 ⇔ p22 = (2m)·[p12/(2m) − (U2 − U1)] = p12 − 2m·ΔU

⇔ p2 = (p12 − 2m·ΔU)1/2

We played with this in a previous post, assuming that p12 is larger than 2m·ΔU, so as to get a positive number on the right-hand side of the equation for p22: we can then confidently take the positive square root of that (p12 − 2m·ΔU) expression to calculate p2. For example, when the potential difference ΔU = U2 − U1 is negative, so ΔU < 0, we’re safe and sure to get some real positive value for p2.

Having said that, we also contemplated the possibility that p22 = p12 − 2m·ΔU was negative, in which case p2 has to be some pure imaginary number, which we wrote as p2 = i·p’ (so p’ (read: p prime) is a real positive number here). We could work with that: it resulted in an exponentially decreasing factor e−p’·x/ħ that ended up ‘killing’ the wavefunction in space. However, its limited existence still allowed particles to ‘tunnel’ through potential energy barriers, thereby explaining the quantum-mechanical tunneling phenomenon.
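Those two regimes are easy to reproduce numerically. A minimal sketch (illustrative values, ħ = 1; `cmath.sqrt` conveniently returns the purely imaginary root in the tunneling case):

```python
import cmath

m, p1 = 1.0, 2.0   # illustrative values (hbar = 1)

def p2(delta_U):
    # p2 = (p1^2 - 2m*dU)^(1/2); cmath.sqrt handles the negative case,
    # returning the purely imaginary p2 = i*p' of the tunneling regime
    return cmath.sqrt(p1**2 - 2 * m * delta_U)

# dU < 0: the particle accelerates, p2 is real and larger than p1
assert p2(-1.0).imag == 0.0 and p2(-1.0).real > p1
# dU large and positive: p2^2 < 0, so p2 = i*p' is purely imaginary
assert p2(5.0).real == 0.0 and p2(5.0).imag > 0.0
```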

This is rather weird—at first, at least. Indeed, one would think that, because of the E/ħ = ω equation, any change in energy would lead to some change in ω. But no! The total energy doesn’t change, and the potential and kinetic energy are like communicating vessels: any change in potential energy is associated with a change in p, and vice versa. It’s a really funny thing. It helps to think it’s because the potential depends on position only, and so it should not have an impact on the temporal frequency of our wavefunction. Of course, it’s equally obvious that the story would change drastically if the potential would change with time, but… Well… We’re not looking at that right now. In short, we’re assuming energy is being conserved in our quantum-mechanical system too, and so that implies what’s described above: no change in ω, but we obviously do have changes in p whenever our particle goes from one region in space to another, and the potentials differ. So… Well… Just remember: the energy conservation principle implies that the temporal frequency of our wave function doesn’t change. Any change in potential, as our particle travels from one place to another, plays out through the momentum.

Now that we know that, let’s look at those de Broglie relations once again.

Re-visiting the de Broglie relations

As mentioned above, we usually think in one dimension only: we either freeze time or, else, we freeze space. If we do that, we can derive some funny new relationships. Let’s first simplify the analysis by re-writing the argument of the wavefunction as:

θ = E·t − p·x

Of course, you’ll say: the argument of the wavefunction is not equal to E·t − p·x: it’s (E/ħ)·t − (p/ħ)∙x. Moreover, θ should have a minus sign in front. Well… Yes, you’re right. We should put that 1/ħ factor in front, but we can change units, and so let’s just measure both E as well as p in units of ħ here. We can do that. No worries. And, yes, the minus sign should be there—Nature chose a clockwise direction for θ—but that doesn’t matter for the analysis hereunder.

The E·t − p·x expression reminds one of those invariant quantities in relativity theory. But let’s be precise here. We’re thinking about those so-called four-vectors here, which we wrote as pμ = (E, px, py, pz) = (E, p) and xμ = (t, x, y, z) = (t, x) respectively. [Well… OK… You’re right. We wrote those four-vectors as pμ = (E, px·c, py·c, pz·c) = (E, p·c) and xμ = (c·t, x, y, z) = (c·t, x). So what we write is true only if we measure time and distance in equivalent units so we have c = 1. So… Well… Let’s do that and move on.] In any case, what was invariant was not E·t − p·x·c or c·t − x (that’s a nonsensical expression anyway: you cannot subtract a vector from a scalar), but pμ2 = pμpμ = E2 − (p·c)2 = E2 − p2·c2 = E2 − (px2 + py2 + pz2)·c2 and xμ2 = xμxμ = (c·t)2 − x2 = c2·t2 − (x2 + y2 + z2) respectively. [Remember pμpμ and xμxμ are four-vector dot products, so they have that +−−− signature, unlike the p2 and x2 or a·b dot products, which are just a simple sum of the squared components.] So… Well… E·t − p·x is not an invariant quantity. Let’s try something else.
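If you want to see that invariant at work, here’s a minimal numerical sketch (c = 1; the particle’s γ and v are illustrative values): the four-vector dot product pμpμ with the +−−− signature yields the squared rest mass:

```python
# Minkowski dot product with the +--- signature (c = 1), as opposed to
# the ordinary Euclidean dot product, which just sums squared components
def minkowski(u, v):
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

# Illustrative particle: m0 = 1, moving along x with v = 0.6, so gamma = 1.25
gamma, v = 1.25, 0.6
p_mu = (gamma, gamma * v, 0.0, 0.0)   # (E, px, py, pz) with m0 = 1

# p_mu . p_mu = E^2 - p^2 equals the squared rest mass m0^2 = 1: the invariant
assert abs(minkowski(p_mu, p_mu) - 1.0) < 1e-12
```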

Let’s re-simplify by equating ħ as well as c to one again, so we write: ħ = c = 1. [You may wonder if it is possible to ‘normalize’ both physical constants simultaneously, but the answer is yes. The Planck unit system is an example.] Then our relativistic energy-momentum relationship can be re-written as E/p = 1/v. [If c were not one, we’d write: E·β = p·c, with β = v/c. So we got E/p = c/β. We referred to β as the relative velocity of our particle: it was the velocity, but measured as a ratio of the speed of light. So here it’s the same, except that we use the velocity symbol v now for that ratio.]

Now think of a particle moving in free space, i.e. without any fields acting on it, so we don’t have any potential changing the spatial frequency of the wavefunction of our particle, and let’s also assume we choose our x-axis such that it’s the direction of travel, so the position vector (x) can be replaced by a simple scalar (x). Finally, we will also choose the origin of our x-axis such that x = 0 when t = 0, so we write: x(t = 0) = 0. It’s obvious then that, if our particle is traveling in spacetime with some velocity v, then the ratio of its position x and the time t that it’s been traveling will always be equal to v = x/t. Hence, for that very special position in spacetime (t, x = v·t) – so we’re talking the actual position of the particle in spacetime here – we get: θ = E·t − p·x = E·t − p·v·t = E·t − m·v·v·t = (E − m∙v2)·t. So… Well… There we have the m∙v2 factor.

The question is: what does it mean? How do we interpret this? I am not sure. When I first jotted this thing down, I thought of choosing a different reference potential: some negative value such that it ensures that the sum of kinetic, rest and potential energy is zero, so I could write E = 0 and then the wavefunction would reduce to ψ(t) = ei·m∙v2·t. Feynman refers to that as ‘choosing the zero of our energy scale such that E = 0’, and you’ll find this in many other works too. However, it’s not that simple. Free space is free space: if there’s no change in potential from one region to another, then the concept of some reference point for the potential becomes meaningless. There is only rest energy and kinetic energy, then. The total energy reduces to E = m (because we chose our units such that c = 1 and, therefore, E = mc2 = m·12 = m) and so our wavefunction reduces to:

ψ(t) = a·ei·m·(1 − v2)·t

We can’t reduce this any further. The mass is the mass: it’s a measure for inertia, as measured in our inertial frame of reference. And the velocity is the velocity, of course—also as measured in our frame of reference. We can re-write it, of course, by substituting t for t = x/v, so we get:

ψ(x) = a·ei·m·(1/v − v)·x

For both functions, we get constant probabilities, but a wavefunction that’s ‘denser’ for higher values of m. The (1 − v2) and (1/v − v) factors are different, however: these factors become smaller for higher v, so our wavefunction becomes less dense for higher v. In fact, for v = 1 (so for travel at the speed of light, i.e. for photons), we get that ψ(t) = ψ(x) = e0 = 1. [You should use the graphing tool once more, and you’ll see the imaginary part, i.e. the sine of the (cosθ + i·sinθ) expression, just vanishes, as sinθ = 0 for θ = 0.]

graph
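The two phase factors above are easy to check numerically. A minimal sketch (plain Python, natural units; the function names are mine):

```python
def time_factor(v):
    # the (1 - v^2) factor in psi(t) = a*exp(i*m*(1 - v^2)*t)
    return 1 - v**2

def space_factor(v):
    # the (1/v - v) factor in psi(x) = a*exp(i*m*(1/v - v)*x)
    return 1 / v - v

# Both factors shrink as v increases: the wavefunction gets less dense
assert time_factor(0.9) < time_factor(0.5) < time_factor(0.1)
assert space_factor(0.9) < space_factor(0.5) < space_factor(0.1)

# At v = 1 (photons) both vanish, so psi(t) = psi(x) = e^0 = 1
assert time_factor(1.0) == 0.0 and space_factor(1.0) == 0.0
```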

The wavefunction and relativistic length contraction

Are exercises like this useful? As mentioned above, these constant probability wavefunctions are a bit nonsensical, so you may wonder why I wrote what I wrote. There may be no real conclusion, indeed: I was just fiddling around a bit, and playing with equations and functions. I feel stuff like this helps me to understand what that wavefunction actually is somewhat better. If anything, it does illustrate that idea of the ‘density’ of a wavefunction, in space or in time. What we’ve been doing by substituting x for x = v·t or t for t = x/v is showing how, when everything is said and done, the mass and the velocity of a particle are the actual variables determining that ‘density’ and, frankly, I really like that ‘airplane propeller’ idea as a pedagogic device. In fact, I feel it may be more than just a pedagogic device, and so I’ll surely re-visit it—once I’ve gone through the rest of Feynman’s Lectures, that is. :-)

That brings me to what I added in the title of this post: relativistic length contraction. You’ll wonder why I am bringing that into a discussion like this. Well… Just play a bit with those (1 − v2) and (1/v − v) factors. As mentioned above, they decrease the density of the wavefunction. In other words, it’s like space is being ‘stretched out’. Also, it can’t be a coincidence we find the same (1 − v2) factor in the relativistic length contraction formula: L = L0·√(1 − v2), in which L0 is the so-called proper length (i.e. the length in the stationary frame of reference) and v is the (relative) velocity of the moving frame of reference. Of course, we also find it in the relativistic mass formula: m = mv = m0/√(1−v2). In fact, things become much more obvious when substituting m for m0/√(1−v2) in that ψ(t) = ei·m·(1 − v2)·t function. We get:

ψ(t) = a·ei·m·(1 − v2)·t = a·ei·m0·√(1−v2)·t 

Well… We’re surely getting somewhere here. What if we go back to our original ψ(x, t) = a·ei·[(E/ħ)·t − (p/ħ)∙x] function? Using natural units once again, that’s equivalent to:

ψ(x, t) = a·ei·(m·t − p∙x) = a·ei·[(m0/√(1−v2))·t − (m0·v/√(1−v2))∙x]

= a·ei·[m0/√(1−v2)]·(t − v∙x)

Interesting! We’ve got a wavefunction that’s a function of x and t, but with the rest mass (or rest energy) and velocity as parameters! Now that really starts to make sense. Look at the (blue) graph for that 1/√(1−v2) factor: it goes from one (1) to infinity (∞) as v goes from 0 to 1 (remember we ‘normalized’ v: it’s a ratio between 0 and 1 now). So that’s the factor that comes into play for t. For x, it’s the red graph, which has the same shape but goes from zero (0) to infinity (∞) as v goes from 0 to 1.

graph 2

Now that makes sense: the ‘density’ of the wavefunction, in time and in space, increases as the velocity v increases. In space, that should correspond to the relativistic length contraction effect: it’s like space is contracting, as the velocity increases and, therefore, the length of the object we’re watching contracts too. For time, the reasoning is a bit more complicated: it’s our time that becomes more dense and, therefore, our clock that seems to tick faster.
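Those blue and red curves are simple enough to reproduce. A quick numerical sketch (the function names are mine, v is the normalized velocity):

```python
import math

def t_factor(v):
    # the coefficient of t (per unit m0): 1/sqrt(1 - v^2) -- the blue curve
    return 1 / math.sqrt(1 - v**2)

def x_factor(v):
    # the coefficient of x (per unit m0): v/sqrt(1 - v^2) -- the red curve
    return v / math.sqrt(1 - v**2)

assert t_factor(0.0) == 1.0          # blue curve: goes from 1 ...
assert x_factor(0.0) == 0.0          # red curve: goes from 0 ...
assert t_factor(0.999) > 20          # ... and both blow up as v -> 1
assert x_factor(0.999) > 20
```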

[…]

I know I need to explore this further—if only so as to assure you I have not gone crazy. Unfortunately, I have no time to do that right now. Indeed, from time to time, I need to work on other stuff besides this physics ‘hobby’ of mine. :-/

Post scriptum 1: As for the E = m·v2 formula, I also have a funny feeling that it might be related to the fact that, in quantum mechanics, both the real and imaginary part of the oscillation actually matter. You’ll remember that we’d represent any oscillator in physics by a complex exponential, because it eased our calculations. So instead of writing A = A0·cos(ωt + Δ), we’d write: A = A0·ei(ωt + Δ) = A0·cos(ωt + Δ) + i·A0·sin(ωt + Δ). When calculating the energy or intensity of a wave, however, we couldn’t just take the square of the complex amplitude of the wave – remembering that E ∼ A2. No! We had to get back to the real part only, i.e. the cosine or the sine only. Now the mean (or average) value of the squared cosine function (or a squared sine function), over one or more cycles, is 1/2, so the mean of A2 is equal to A02/2. I am not sure, and it’s probably a long shot, but one must be able to show that, if the imaginary part of the oscillation would actually matter – which is obviously the case for our matter-wave – then 1/2 + 1/2 is obviously equal to 1. I mean: try to think of an image with a mass attached to two springs, rather than one only. Does that make sense? :-) […] I know: I am just freewheeling here. :-)
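The ⟨cos²⟩ = ⟨sin²⟩ = 1/2 claim is easy to check numerically, and so is the ‘1/2 + 1/2 = 1’ hunch, since cos²θ + sin²θ = 1 at every point:

```python
import math

# Numerical average of cos^2 and sin^2 over one full cycle
N = 100000
thetas = [2 * math.pi * n / N for n in range(N)]
mean_cos2 = sum(math.cos(t)**2 for t in thetas) / N
mean_sin2 = sum(math.sin(t)**2 for t in thetas) / N

assert abs(mean_cos2 - 0.5) < 1e-6   # <cos^2> = 1/2
assert abs(mean_sin2 - 0.5) < 1e-6   # <sin^2> = 1/2
# If both the real and the imaginary part count, the two halves add up to one
assert abs(mean_cos2 + mean_sin2 - 1.0) < 1e-9
```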

Post scriptum 2: The other thing that this E = m·v2 equation makes me think of is – curiously enough – an eternally expanding spring. Indeed, the kinetic energy of a mass on a spring and the potential energy that’s stored in the spring always add up to some constant, and the average potential and kinetic energy are equal to each other. To be precise: 〈K.E.〉 + 〈P.E.〉 = (1/4)·k·A2 + (1/4)·k·A2 = k·A2/2. It means that, on average, the total energy of the system is twice the average kinetic energy (or potential energy). You’ll say: so what? Well… I don’t know. Can we think of a spring that expands eternally, with the mass on its end not gaining or losing any speed? In that case, v is constant, and the total energy of the system would, effectively, be equal to Etotal = 2·〈K.E.〉 = 2·(m·v2/2) = m·v2.
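A quick numerical check of those averages for a simple harmonic oscillator (illustrative values for k, m and A):

```python
import math

k, m, A = 1.0, 1.0, 1.0                 # spring constant, mass, amplitude
omega = math.sqrt(k / m)

# Sample one full cycle of x(t) = A*cos(wt), v(t) = -A*w*sin(wt)
N = 100000
KE = PE = 0.0
for n in range(N):
    t = 2 * math.pi / omega * n / N
    x = A * math.cos(omega * t)
    v = -A * omega * math.sin(omega * t)
    KE += 0.5 * m * v**2
    PE += 0.5 * k * x**2
KE, PE = KE / N, PE / N

# <K.E.> = <P.E.> = (1/4)*k*A^2, so the total is k*A^2/2 = 2*<K.E.>
assert abs(KE - 0.25 * k * A**2) < 1e-6
assert abs(PE - 0.25 * k * A**2) < 1e-6
assert abs(KE + PE - 0.5 * k * A**2) < 1e-9
```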

Post scriptum 3: That substitution I made above – substituting x for x = v·t – is kinda weird. Indeed, if that E = m∙v2 equation makes any sense, then E − m∙v2 = 0, of course, and, therefore, θ = E·t − p·x = E·t − p·v·t = E·t − m·v·v·t = (E − m∙v2)·t = 0·t = 0. So the argument of our wavefunction is 0 and, therefore, we get a·e0 = a for our wavefunction. It basically means our particle is where it is. :-)

The Pauli spin matrices as operators

You must be despairing by now. More theory? Haven’t we had enough? Relax. We’re almost there. The next post is going to generalize our results for n-state systems. However, before we do that, we need one more building block, and that’s this one. So… Well… Let’s go for it. It’s a bit long but, hopefully, interesting enough—so you don’t fall asleep before the end. :-) Let’s first review the concept of an operator itself.

The concept of an operator

You’ll remember Feynman‘s ‘Great Law of Quantum Mechanics’:

| = ∑ | i 〉〈 i | over all base states i.

We also talked of all kinds of apparatuses: a Stern-Gerlach spin filter, a state selector for a maser, a resonant cavity or—quite simply—just time passing by. From a quantum-mechanical point of view, we think of this as particles going into the apparatus in some state φ, and coming out of it in some other state χ. We wrote the amplitude for that as 〈 χ | A | φ 〉. [Remember the right-to-left reading, like Arabic or Hebrew script.] Then we applied our ‘Great Law’ to that 〈 χ | A | φ 〉 expression – twice, actually – to get the following expression:

A1

We’re just ‘unpacking’ the φ and χ states here, as we can only describe those states in terms of base states, which we denote as i and j here. That’s all. If we’d add another apparatus in series, we’d get:

B1

We just put the | bar between B and A and apply the same trick. The | bar is really like a factor 1 in multiplication—in the sense that we can insert it anywhere: a×b = a×1×b = 1×a×b = a×b×1 = 1×a×1×b×1 etc. Anywhere? Hmm… It’s not quite the same, but I’ll let you check out the differences. :-) The point is that, from a mathematical point of view, we can fully describe the apparatus A, or the combined apparatus BA, in terms of those 〈 i | A | j 〉 or 〈 i | BA | j 〉 amplitudes. Depending on the number of base states, we’d have a three-by-three, or a two-by-two, or, more generally, an n-by-n matrix, i.e. a square matrix of order n. For example, there are 3×3 = 9 amplitudes if we have three possible states—and, equally obviously, 2×2 = 4 amplitudes for the example involving spin-1/2 particles. [If you think things are way too complicated,… Well… At least we’ve got square matrices here—not n-by-m matrices.] We simply called such a matrix the matrix of amplitudes, and we usually denoted it by A. However, sometimes we’d also denote it by Aij, or by [Aij], depending on our mood. :-) The preferred notation was A, however, so as to avoid confusion with the matrix elements, which we’d write as Aij.
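To make the ‘matrix of amplitudes’ idea concrete, here’s a minimal numerical sketch (the 2×2 matrix and the two states are made-up, illustrative values): the amplitude 〈 χ | A | φ 〉 is just the double sum over base states, i.e. a row vector times a matrix times a column vector:

```python
import numpy as np

# A hypothetical two-state 'apparatus': a 2x2 matrix of amplitudes Aij
A = np.array([[0.8, 0.2j],
              [0.2j, 0.8]], dtype=complex)

# Initial state phi and final state chi, expressed in the same base states
phi = np.array([1.0, 0.0], dtype=complex)                        # ket | phi >
chi = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)], dtype=complex)  # ket | chi >

# < chi | A | phi > = sum_i sum_j Ci* * Aij * Dj: row * matrix * column
amplitude = np.conj(chi) @ A @ phi
assert np.isclose(amplitude, (0.8 + 0.2j) / np.sqrt(2))
```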

The Hamiltonian matrix – which, very roughly speaking, is like the quantum-mechanical equivalent of the dp/dt term of Newton’s Law of Motion: F = dp/dt = m·dv/dt = m·a – is a matrix of amplitudes as well, and we’ll come back to it in a minute. Let’s first continue our story on operators here. The idea of an operator comes up when we’re creative again, and when we drop the 〈 χ | state from the 〈 χ | A | φ 〉 expression, so we write:

C1

So now we think of the particle entering the ‘apparatus’ A in the state φ and coming out of A in some state ψ (‘psi’). But our psi is a ket, i.e. some initial state. That’s why we write it as | ψ 〉. It doesn’t mean anything until we combine it with some bra, like a base state 〈 i |, or with a final state, which we’d denote by 〈 χ | or some other Greek letter between a 〈 and a | symbol. So then we get 〈 χ | ψ 〉 = 〈 χ | A | φ 〉 or 〈 i | ψ 〉 = 〈 i | A | φ 〉. So then we’re ‘unpacking’ our bar once more. Let me be explicit here: it’s kinda weird, but if you’re going to study quantum math, you’ll need to accept that, when discussing the state of a system or a particle, like ψ or φ, it does make a difference if they’re initial or final states. To be precise, the final 〈 χ | or 〈 φ | states are equal to the conjugate transpose of the initial | χ 〉 or | φ 〉 states, so we write: 〈 χ | = | χ 〉† or 〈 φ | = | φ 〉†. I’ll come back to that, because it’s kind of counter-intuitive: a state should be a state, no? Well… No. Not from a quantum-math point of view at least. :-( But back to our operator. Feynman defines an operator in the following rather intuitive way:

“The symbol A is neither an amplitude, nor a vector; it is a new kind of thing called an operator. It is something which “operates on” some state | φ 〉 to produce some new state | ψ 〉.”

But… Well… Be careful! What’s a state? As I mentioned, | ψ 〉 is not the same as 〈 ψ |. We’re talking an initial state | ψ 〉 here, not 〈 ψ |. That’s why we need to ‘unpack’ the operator to see what it does: we have to combine it with some final state that we’re interested in, or a base state. Then—and only then—we get a proper amplitude, i.e. some complex number – or some complex function – that we can work with. To be precise, we then get the amplitude to be in that final state, or in that base state. In practical terms, that means our operator, or our apparatus, doesn’t mean very much as long as we don’t measure what comes out—and measuring something implies we have to choose some set of base states, i.e. a representation, which allows us to describe the final state, which we denoted as 〈 χ | above.

Let’s wrap this up by being clear on the notation once again. We’ll write: Aij = 〈 i | A | j 〉, or Uij = 〈 i | U | j 〉, or Hij = 〈 i | H | j 〉. In other words, we’ll really be consistent now with those subscripts: if they are there, we’re talking a coefficient, or a matrix element. If they’re not there, we’re talking the matrix itself, i.e. A, U or H. Now, to give you a sort of feeling for how that works in terms of the matrix equations that we’ll inevitably have to deal with, let me just jot one of them down here:

time

The Di* numbers are the ‘coordinates’ of the (final) 〈 χ | state in terms of the base states, which we denote as i = +, 0 or − here. So we have three states here. [That’s just to remind you that the two-state systems we’ve seen so far are pretty easy. We’ll soon be working with four-state systems—and then the sky is the limit. :-)] In fact, you’ll remember that those coordinates were the complex conjugate of the ‘coordinates’ of the initial | χ 〉 state, i.e. D+, D0, D−, so that 1-by-3 matrix above, i.e. the row vector 〈 χ | = [D+*  D0*  D−*], is the so-called conjugate transpose of the column vector | χ 〉 = [D+  D0  D−]T. [I can’t do columns with this WordPress editor, so I am just putting the T for transpose so as to make sure you understand | χ 〉 is a column vector.]

Now, you’ll wonder – if you don’t, you should :-) – how that Aij = 〈 i | A | j 〉, Uij = 〈 i | U | j 〉, or Hij = 〈 i | H | j 〉 notation works out in terms of matrices. It’s extremely simple really. If we have only two states (yes, back to simplicity), which we’ll also write as + and − (forget about the 0 state), then we can write Aij = 〈 i | A | j 〉 in matrix notation as:

matrix

Huh? Is it that simple? Yes. We can make things more complicated by involving a transformation matrix so we can write our base states in terms of another, different, set of base states but, in essence, this is what we are talking about here. Of course, you should absolutely not try to give a geometric interpretation to our [1 0] or [0 1] ‘coordinates’. If you do that, you get in trouble, because then you want to give the transformed base states the same geometric interpretation and… Well… It just doesn’t make sense. I gave an example of that in my post on the hydrogen molecule as a two-state system. Symmetries in quantum physics are not geometric… Well… Not in a physical sense, that is. As I explained in my previous post, describing spin-1/2 particles involves stuff like 720 degree symmetries and all that. So… Well… Just don’t! :-)
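Here’s the same idea as a quick numpy check (with a made-up amplitude matrix): sandwiching A between a base-state bra and a base-state ket just picks out one matrix element.

```python
import numpy as np

# Base states as unit row/column vectors: <+| = [1, 0], |-> = [0, 1]^T, etc.
plus  = np.array([1, 0], dtype=complex)
minus = np.array([0, 1], dtype=complex)

A = np.array([[1, 2j],
              [3, 4]], dtype=complex)  # made-up amplitude matrix

# <i| A |j> sandwiches A between a bra (row) and a ket (column)
# and simply picks out the (i, j) matrix element.
assert plus.conj() @ A @ minus == A[0, 1]   # <+| A |-> = A12
assert minus.conj() @ A @ plus == A[1, 0]   # <-| A |+> = A21
```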

Onwards!

The Hamiltonian as a matrix and as an operator

As mentioned above, our Hamiltonian is a matrix of amplitudes as well, and we can also write it as H, Hij, or [Hij] respectively, depending on our mood. :-) For some reason, Feynman often writes it as Hij, instead of H, which creates a lot of confusion because, in most contexts, Hij refers to the matrix elements, rather than the matrix itself. I guess Feynman likes to keep the subscripts, i.e. ij or I,II, as they refer to the representation that was chosen. However, Hij should really refer to the matrix element, and then we can use H for the matrix itself. So let’s be consistent. As I’ve shown above, the Hij notation – and so I am talking about the Hamiltonian coefficients here – is actually a shorthand for writing:

Hij = 〈 i | H | j 〉

So the Hamiltonian coefficient (Hij) connects two base states (i and j) through the Hamiltonian matrix (H). Connect? How? Our language in the previous posts, and some of Feynman’s language, may have suggested the Hamiltonian coefficients are amplitudes to go from state j to state i. However, that’s not the case. Or… Well… We need to qualify that statement. What does it mean? The i and j states are base states and, hence, 〈 i | j 〉 = δij, with δij = 1 if i = j and δij = 0 if i ≠ j. Hence, stating that the Hamiltonian coefficients are the amplitudes to go from one state to another is… Well… Let’s say that language is rather inaccurate. We need to include the element of time, so we need to think in terms of those amplitudes C1 and C2, or CI and CII, which are functions in time: Ci = Ci(t). Now, the Hamiltonian coefficients are obviously related to those amplitudes. Sure! That’s quite obvious from the fact they appear in those differential equations for C1 and C2, or CI and CII, i.e. the amplitude to be in state 1 or state 2, or state I or state II, respectively. But they’re not the same.

Let’s go back to the basics here. When we derived the Hamiltonian matrix as we presented Feynman’s brilliant differential analysis of it, we wrote the amplitude to go from one base state to another, as a function in time (or a function of time, I should say), as:

Uij = Uij(t + Δt, t) = 〈 i | U | j 〉 = 〈 i | U(t + Δt, t) | j 〉

Our ‘unpacking’ rules then allowed us to write something like this for t = t1 and t + Δt = t2 or – let me quickly circle back to that monster matrix notation above – for Δt = t2 − t1:

time

The key – as presented by Feynman – to go from those Uij amplitudes to the Hij amplitudes is to consider the following: if Δt goes to zero, nothing happens, so we wrote: Uij = 〈 i | U | j 〉 → 〈 i | j 〉 = δij for Δt → 0. We also assumed that, for small Δt, those Uij amplitudes should differ from δij (i.e. from 1 or 0) by amounts that are proportional to Δt. So we wrote:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt ⇔ Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt

There are several things here. First, note the first-order linear approximation: it’s just like the general y(t + Δt) = y(t) + Δy = y(t) + (dy/dt)·Δt formula. So can we look at our Kij(t) function as being the time derivative of the Uij(t + Δt, t) function? The answer is, unambiguously, yes. Hence, −(i/ħ)·Hij(t) is that same time derivative. [Why? Because Kij(t) = −(i/ħ)·Hij(t).] Now, the time derivative of a function, i.e. dy/dt, is equal to Δy/Δt for Δt → 0 and, of course, we know that Δy → 0 for Δt → 0. We are now in a position to understand Feynman’s interpretation of the Hamiltonian coefficients:

The −(i/ħ)·Hij(t) = −(i/ħ)·〈 i | H | j 〉 factor is the amplitude that—under the physical conditions described by H—a state j will, during the time dt, “generate” the state i.

I know I shouldn’t make this post too long (I promised to write about the Pauli spin matrices, and I am not even halfway there) but I should note a funny thing there: in that Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt = δij − (i/ħ)·Hij(t)·Δt formula, for Δt → 0, we go from real to complex numbers. I shouldn’t anticipate anything but… Well… We know that the Hij coefficients will (usually) represent some energy level, so they are real numbers. Therefore, −(i/ħ)·Hij(t) = Kij(t) is complex-valued, as we’d expect, because Uij(t + Δt, t) is, in general, complex-valued, and δij is just 0 or 1. I don’t have too much time to linger on this, but it should remind you of how one may mathematically ‘construct’ the complex exponential eiθ by using the linear approximation eiε = 1 + i·ε near ε = 0 or, what amounts to the same, for small ε. My post on this shows how Feynman takes the magic out of Euler’s formula doing that – and I should re-visit it, because I feel the formula above, and that linear approximation formula for a complex exponential, go to the heart of the ‘mystery’, really. But… Well… No time. I have to move on.
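For what it’s worth, the first-order formula is easy to check numerically. The sketch below (made-up Hermitian Hamiltonian, natural units with ħ = 1) compares the exact propagator exp(−iHΔt/ħ) with the δij − (i/ħ)·Hij·Δt approximation for a small Δt.

```python
import numpy as np

hbar = 1.0  # natural units
H = np.array([[1.0, -0.5],
              [-0.5, 1.0]])  # made-up 2x2 Hermitian Hamiltonian

def U_exact(H, dt):
    """U(t+dt, t) = exp(-i H dt / hbar), via eigendecomposition of H."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * dt / hbar)) @ v.conj().T

dt = 1e-4
U_approx = np.eye(2) - 1j * H * dt / hbar   # U_ij = delta_ij - (i/hbar)*H_ij*dt

# for small dt, the first-order formula matches the exact propagator
assert np.allclose(U_exact(H, dt), U_approx, atol=1e-7)
```

The error of the approximation goes as (Δt)², which is why it disappears faster than Δt itself as Δt → 0.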

Let me quickly make another small technical remark here. When Feynman talks about base states, he always writes them as a bra or a ket, just like any other state. So he talks about “base state | i 〉”, or “base state 〈 i |”. If you look it up, you’ll see he does the same in that quote: he writes | j 〉 and | i 〉, rather than j and i. In fact, strictly speaking, he should write 〈 i | instead of | i 〉. Frankly, I really prefer to just write “base state i”, or base state j”, without specifying if it’s a bra or a ket. A base state is a base state: 〈 i | and | i 〉 represent the same. Of course, it’s rather obvious that 〈 χ | and | χ 〉 are not the same. In fact, as I showed above, they’re each other’s complex conjugate, so 〈 χ |* = | χ 〉. To be precise, I should say: they’re each other’s conjugate transpose, because we’re talking row and column vectors respectively. Likewise, we can write: 〈 χ | φ 〉* = 〈 φ | χ 〉. For base states, this becomes 〈 i | j 〉* = 〈 j | i 〉. Now, 〈 i | and | j 〉 were matrices, really – row and column vectors, to be precise – so we can apply the following rule: the conjugate transpose of the product of two matrices is the product of the conjugate transpose of the same matrices, but with the order of the matrices reversed. So we have: (AB)* = B*A*. In this case: 〈 i | j 〉* = | j 〉*〈 i |*. Huh? Yes. Think about it. I should probably use the dagger notation for the conjugate transpose, rather than the simple * notation, but… Well… It works. The bottom line is: 〈 i | j 〉* = 〈 j | i 〉 = | j 〉*〈 i |* and, therefore, 〈 j | = | j 〉* and | i 〉 = 〈 i |*. Conversely, 〈 j | i 〉* = 〈 i | j 〉 = | i 〉*〈 j |* and, therefore, we also have 〈 j |* = | j 〉 and | i 〉* = 〈 i |. Now, we know the coefficients of these row and column vectors are either one or zero. In short, 〈 i | and | i 〉, or 〈 j | and | j 〉 are really one and the same ‘object’. 
The only reason why we would use the bra-ket notation is to indicate whether we’re using them in an initial condition, or in a final state. In the specific case that we’re dealing with here, it’s obvious that j is used in an initial condition, and i is a final condition.
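Those conjugate-transpose rules are easy to verify numerically. A quick sketch, with random made-up ‘states’ and matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
chi = rng.normal(size=2) + 1j * rng.normal(size=2)   # |chi> as a column vector
phi = rng.normal(size=2) + 1j * rng.normal(size=2)   # |phi>

# <chi|phi> = (conjugate transpose of |chi>) times |phi>
braket = chi.conj() @ phi
assert np.isclose(braket.conjugate(), phi.conj() @ chi)   # <chi|phi>* = <phi|chi>

# (AB)^dagger = B^dagger A^dagger for any two matrices
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)
```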

We’re now ready to look at these differential equations once more, and try to truly understand them:

cotribu

The summation over all base states j amounts to adding the contribution, so to speak, of all those base states j, during the infinitesimally small time interval dt, to the change in the amplitude (during the same infinitesimal time interval, of course) to be in state i. Does that make sense?
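If it helps, here’s the same bookkeeping as a quick numerical sketch (made-up two-state Hamiltonian, natural units with ħ = 1): one explicit Euler step, summing the contribution of every base state j, against the equivalent matrix-vector product.

```python
import numpy as np

hbar = 1.0
H = np.array([[0.0, -1.0],
              [-1.0, 0.0]])              # made-up Hamiltonian (E0 = 0, A = 1)
C = np.array([1.0, 0.0], dtype=complex)  # start fully in state 1

dt = 1e-3
# dC_i = -(i/hbar) * sum_j H_ij C_j * dt: every base state j contributes
dC = np.zeros(2, dtype=complex)
for i in range(2):
    for j in range(2):
        dC[i] += -1j / hbar * H[i, j] * C[j] * dt
C_next = C + dC

# same thing written as a matrix-vector product
assert np.allclose(C_next, C - 1j / hbar * dt * (H @ C))
```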

You’ll say: yes. Or maybe. Or maybe not. :-) And I know you’re impatient. We were supposed to talk about the Hamiltonian operator here. So what about that? Why this long story on the Hamiltonian coefficients? Well… Let’s take the next step. An operator is all about ‘abstracting away’ or, to put it more down to earth, ‘dropping terms’, as Feynman calls it. :-) So let’s do that in two successive rounds, as shown below. First we drop the 〈 i |, because the equation holds for any i. Then we apply the grand | = ∑ | i 〉〈 i | rule—which is somewhat tricky, as it also gets rid of the summation. We then define the Hamiltonian operator as H, but we just put a little hat on top of it. That’s all.

operator

As this is all rather confusing, let me show what it means in terms of matrix algebra:

operator2

So… Frankly, it’s not all that difficult. It’s basically introducing a summary notation, which is what operators usually do. Note that the H = iħ·d/dt operator (sorry if I am not always putting the hat) is not just the d/dt with an extra multiplication by ħ and by the imaginary unit i. From a mathematical point of view, of course, that’s what it seems to be, and actually is. From a mathematical point of view, it’s just an n-by-n matrix, and so we can effectively apply it to some n-by-1 column vector to get another n-by-1 column vector.

But its meaning is much deeper. As Feynman puts it: the equation(s) above are the dynamical law of Nature—the law of motion for a quantum system. In a way, it’s like that invariant (1−v2)−1/2·d/dt operator that we introduced when discussing relativity, and things like the proper time and invariance under Lorentz transformations. That operator really did something: it ‘fixed’ things as we applied it to the four-vectors in relativistic spacetime. So… Well… Think about it.

Before I move on – because, when everything is said and done, I promised to use the Pauli matrices as operators – I’ll just copy Feynman as he approaches the equations from another angle:

alternative

Of course, that’s the equation we started out with, before we started ‘abstracting away’:

cotribu

So… Well… You can go through the motions once more. Onward!

The Pauli spin matrices as operators

If the Hamiltonian matrix can be used as an operator, then we can use the Pauli spin matrices as little operators too! Indeed, from my previous post, you’ll remember we can write the Hamiltonian in terms of the Pauli spin matrices:

Pauli

Now, if we think of the Hamiltonian matrix as an operator, we can put a little hat everywhere, so we get:

P2

It’s really as simple as that. Now, we get a little bit in trouble with the x, y and z subscripts as we’re going to want to write the matrix elements as σij, so we’ll just move them and write them as superscripts, so our matrix elements will be written as σxij = 〈 i | σx | j 〉, σyij = 〈 i | σy | j 〉 and σzij = 〈 i | σz | j 〉 respectively. Now, we introduced all kinds of properties of the Pauli matrices themselves, but let’s now look at the properties of these matrices as operators. To do that, we’ll let them loose on the base states. We get the following:

P3

[You can check this in Feynman, but it’s really very straightforward, so you should try to get this result yourself.] The next thing is to create even more operators by multiplying the operators two by two. We get stuff like:

σxσy|+〉 = σx(σy|+〉) = σx(i|−〉) = i·(σx|−〉) = i·|+〉

The thing to note here is that it’s business as usual: we can move factors like i out of the operators, as the operators work on the state vectors only. Oh… And sorry I am not putting the hat again. It’s the limitations of the WordPress editor here (I always need to ‘import’ my formulas from Word or some other editor, so I can’t put them in the text itself). On the other hand, Feynman himself seems to doubt the use of the hat symbol, as he writes: “It is best, when working with these things, not to keep track of whether a quantity like σ or H is an operator or a matrix. All the (matrix) equations are the same anyway.”
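You can verify all of those little operator rules numerically. A quick sketch, using the standard representation of the Pauli matrices:

```python
import numpy as np

# the Pauli matrices in the standard (z) representation
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

plus  = np.array([1, 0], dtype=complex)   # |+>
minus = np.array([0, 1], dtype=complex)   # |->

assert np.allclose(sz @ plus, plus)        # sigma_z |+> = |+>
assert np.allclose(sz @ minus, -minus)     # sigma_z |-> = -|->
assert np.allclose(sx @ plus, minus)       # sigma_x |+> = |->
assert np.allclose(sy @ plus, 1j * minus)  # sigma_y |+> = i|->

# chaining operators: sigma_x sigma_y |+> = sigma_x (i|->) = i|+>
assert np.allclose(sx @ (sy @ plus), 1j * plus)
```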

That makes it all rather tedious or, in fact, no! That makes it all quite easy, because our table with the properties of the sigma matrices is also valid for the sigma operators, so let’s just copy it, and then we’re done, so we can wrap up and do something else. :-)

products

To conclude, let me answer your most pressing question at this very moment: what’s the use of all this? Well… To a large extent, it’s a nice way of rewriting things. For example, let’s look at our equations for the ammonia molecule once more. But… Well… No. I’ll refer you to Feynman here, as he re-visits all the systems we’ve studied before, but now approaches them with our new operators and notations. Have fun with it! :-)

Pauli’s spin matrices

Wolfgang Pauli’s life is as wonderful as his scientific legacy—but we’ll just talk about one of his many contributions to quantum mechanics here in this post—not about his life.

This post should be fairly straightforward. We just want to review some of the math. Indeed, we got the ‘Grand Result’ already in our previous post, as we found the Hamiltonian coefficients for a spin one-half particle—read: all matter-particles, practically speaking—in a magnetic field—but then we can just replace the magnetic dipole moment by an electric dipole moment, if needed, and we’ll find the same formulas, so we’ve basically covered everything you can possibly think of.

[…] Well… Sort of… :-)

OK. Jokes aside, we have a magnetic field B, which we describe in terms of its components: B = (Bx, By, Bz), and we’ve defined two mutually exclusive states – call them ‘up’ or ‘down’, or 1 or 2, or + or −, whatever − along some direction, which we call the z-direction. Why? Convention. Historical accident. The z-direction is the direction in regard to which we measure stuff. What stuff? Well… Stuff like the spin of an electron: quantum-mechanical stuff. :-) In any case, the Hamiltonian that comes with this system is:

F1

Now, because this matrix doesn’t look impressive enough, we’re going to re-write it as:

Pauli

Huh? Yes. It looks good, doesn’t it? And the σx, σy and σz matrices are given below, so you can check it’s actually true. […] I mean: you can check that the two notations are equivalent, from a math point of view, that is. :-)

Capture
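If you don’t feel like doing the check by hand, here’s a quick numpy sketch, with made-up values for μ and the field components:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

mu = 0.7                      # made-up magnitude of the magnetic moment
Bx, By, Bz = 0.3, -0.2, 1.1   # made-up field components

# the compact Pauli form of the Hamiltonian...
H_pauli = -mu * (sx * Bx + sy * By + sz * Bz)

# ...against the explicit matrix of Hamiltonian coefficients
H_explicit = np.array([[-mu * Bz, -mu * (Bx - 1j * By)],
                       [-mu * (Bx + 1j * By), mu * Bz]])

assert np.allclose(H_pauli, H_explicit)   # the two notations are equivalent
```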

As Feynman puts it: “This is what the professionals use all of the time.” So… Well… Yes. We had better learn them by heart. :-)

The identity matrix is actually not one of the so-called Pauli spin matrices, but we need it when we’d decide to not equate the average energy of our system to zero, i.e. when we’d decide to shift the zero point of our energy scale so as to include the equivalent energy of the rest mass. In that case, we re-write the Hamiltonian as:

Capture2

In fact, as most academics want to hide their knowledge from us by confusing us deliberately, they’ll often omit the Kronecker delta, and simply write:

Capture3

It’s OK, as long as you know what it is that you’re trying to do. :-) The point is, we’ve got four ‘elementary’ matrices now which allow us to write any matrix – literally, any matrix – as a linear combination of them. In Feynman‘s words:

text
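That claim – any 2-by-2 matrix can be written as a linear combination of the identity and the three Pauli matrices – is easy to check numerically. The coefficient formula used below, ak = tr(σk·M)/2, follows from tr(σk·σl) = 2·δkl:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)                  # identity matrix
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

M = np.array([[1 + 2j, 3],
              [4j, -5]], dtype=complex)        # an arbitrary 2x2 matrix

# coefficients a_k = tr(sigma_k M) / 2, since tr(sigma_k sigma_l) = 2*delta_kl
coeffs = [np.trace(s @ M) / 2 for s in (s0, sx, sy, sz)]
M_rebuilt = sum(c * s for c, s in zip(coeffs, (s0, sx, sy, sz)))

assert np.allclose(M, M_rebuilt)   # the four matrices span all 2x2 matrices
```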

Now, the Pauli matrices have lots of interesting properties. Their products, for example, taken two at a time, are rather special:

products
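Here’s a quick numerical check of that table, again in the standard representation:

```python
import numpy as np

I  = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# each Pauli matrix squares to the identity
for s in (sx, sy, sz):
    assert np.allclose(s @ s, I)

# cyclic products: sigma_x sigma_y = i sigma_z, and so on
assert np.allclose(sx @ sy, 1j * sz)
assert np.allclose(sy @ sz, 1j * sx)
assert np.allclose(sz @ sx, 1j * sy)

# any two different Pauli matrices anticommute
assert np.allclose(sx @ sy, -(sy @ sx))
```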

The most interesting property, however, is that, when choosing some other representation, i.e. when changing to another coordinate system, the three Pauli matrices behave like the components of a vector. That vector is written as σ, and so it’s a matrix you can use in different coordinate systems, as though it’s a vector. It allows us to re-write the Hamiltonian we started out with in a particularly nice way:

Pauli vector

You should compare this to the classical formula for the energy of a little magnet with the magnetic moment μ in the same magnetic field:

m

There are several differences, of course. First, note that the quantum-mechanical magnetic moment is like the quantum-mechanical angular momentum: there’s only a limited set of discrete values, given by the following relation:

mm

That’s why we write it as a scalar in the quantum-mechanical equation, and as a vector, i.e. in boldface (μ), in the second equation. The two equations differ more fundamentally, however: the first one is a matrix equation, while the second one is… Well… Just a simple vector dot product.

The point is: the classical energy becomes the Hamiltonian matrix, and the classical μ vector becomes the μσ matrix. As Feynman puts it: “It is sometimes said that to each quantity in classical physics there corresponds a matrix in quantum mechanics, but it is really more correct to say that the Hamiltonian matrix corresponds to the energy, and any quantity that can be defined via energy has a corresponding matrix.”

[…]

What does he mean by a quantity that can be defined via energy? It’s simple: the magnetic moment, for example, can be defined via energy by saying that the energy, in an external field B, is equal to −μ·B.

Huh? Wasn’t it the other way around? Didn’t we define the energy by saying it’s equal to −μ·B?

We did. In our posts on electromagnetism. That was classical theory. However, in quantum mechanics, it’s the energy that’s the ‘currency’ we need to be dealing in. So it makes sense to look at things the other way around: we’ll first think about the energy, and then we try to find a matrix that corresponds to it.

So… Yes. Many classical quantities have their quantum-mechanical counterparts, and those quantum-mechanical counterparts are often some matrices. But not all of them. Sometimes there’s just no comparison, because the two worlds are actually different. Let me quote Feynman on what he thinks of how these two worlds relate, as he wraps up his discussion of the two equations above:

philosophy

Well… That says it all, doesn’t it? :-) We’ll talk more tomorrow. :-)

The Hamiltonian of matter in a field

In this and the next post, I want to present some essential discussions in Feynman’s 10th, 11th and 12th Lectures on Quantum Mechanics. This post in particular will actually present the Hamiltonian for the spin state of an electron, but the discussion is much more general than that: it’s a model for any spin-1/2 particle, i.e. for all elementary fermions—so that’s the ‘matter-particles’ which you know: electrons, protons and neutrons. Or, taking into account that protons and neutrons consist of quarks, we should say quarks, which also have spin 1/2. So let’s go for it. Let me first, by way of introduction, remind you of a few things.

What is it that we are trying to do?

That’s always a good question to start with. :-) Just for fun, and as we’ll be talking a lot about symmetries and directions in space, I’ve inserted an animation below of a four-dimensional object, as its author calls it. This ‘object’ returns to its original configuration after a rotation of 720 degrees only (after 360 degrees, the spiral flips between clockwise and counterclockwise orientations, so it’s not the same). For some rather obscure reason :-) he refers to it as a spin-1/2 particle, or a spinor.

Spin_One-Half_(Slow)

Are spin one-half particles, like an electron or a proton, really four-dimensional? Well… I guess so. All depends, of course, on your definition or concept of a dimension. :-) Indeed, the term is as well – I should say, as badly, really – defined as the ubiquitous term ‘vector’ and so… Well… Let me say that spinors are usually defined in four-dimensional vector spaces, indeed. […] So is this what it’s all about, and should we talk about spinors?

Not really. Feynman doesn’t push the math that far, so I won’t do that either. :-) In fact, I am not sure why he’s holding back here: spinors are just mathematical objects, like vectors or tensors, which we introduced in one of our posts on electromagnetism, so why not have a go at it? You’ll remember that our electromagnetic tensor was like a special vector cross-product which, using the four-potential vector Aμ and the ∇μ = (∂/∂t, −∂/∂x, −∂/∂y, −∂/∂z) operator, we could write as (∇μAν) − (∇μAν)T.

Huh? Hey! Relax! It’s a matrix equation. It looks like this:

matrix

In fact, I left the c factor out above, and so we should plug it in, remembering that B’s magnitude is 1/c times E’s magnitude. So the electromagnetic tensor – in one of its many forms at least – is the following matrix:

electromagnetic tensor final
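The essential algebraic point – a ‘gradient matrix’ minus its own transpose is automatically antisymmetric, which is why the tensor has a zero diagonal and six independent components (three for E, three for B) – can be sketched with a made-up 4×4 matrix:

```python
import numpy as np

# a made-up 4x4 matrix standing in for the gradient of the four-potential
# (i.e. the partial derivatives nabla_mu A_nu at some point)
rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4))

F = G - G.T   # the tensor built as (gradient) minus its transpose

# such a construction is automatically antisymmetric: F_mu_nu = -F_nu_mu,
# so the diagonal (mu = nu) vanishes and only 6 independent components remain
assert np.allclose(F, -F.T)
assert np.allclose(np.diag(F), 0)
```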

Why do we need a beast like this? Well… Have a look at the mentioned post or, better, one of the subsequent posts: we used it in very powerful equations (read: very concise equations, because that’s what mathematicians, and physicists, like) describing the dynamics of a system. So we have something similar here: what we’re trying to describe is the dynamics of a quantum-mechanical system, in terms of the evolution of its state, which we express as a linear combination of ‘pure’ base states, which we wrote as:

|ψ〉 = |1〉C1 + |2〉C2 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉

C1 and C2 are complex-valued wavefunctions, or amplitudes as we call them, and the dynamics of the system are captured in a set of differential equations, which we wrote as:

System

The trick was to know or guess our Hamiltonian, i.e. we had to know or, more likely, guess those Hij coefficients (and then find experiments to confirm our guesses). Once we got those, it was a piece of cake. We’d solve for C1 and C2, and then take their absolute square so as to get probability functions, like the ones we found for our ammonia (NH3) molecule: P1(t) = |C1(t)|2 = cos2[(A/ħ)·t] and P2(t) = |C2(t)|2 = sin2[(A/ħ)·t]. These tell us that, if we would take a measurement, then the probability of finding the molecule in the ‘up’ or ‘down’ state (i.e. state 1 versus state 2) varies as shown:

graph
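You can reproduce those probability curves numerically. The sketch below (made-up values for E0 and A, natural units with ħ = 1) propagates the state exactly and checks the cos2 and sin2 formulas:

```python
import numpy as np

hbar = 1.0
E0, A = 2.0, 0.5          # made-up energy and 'flip-flop' amplitude
H = np.array([[E0, -A],
              [-A, E0]])

def propagate(C0, t):
    """C(t) = exp(-i H t / hbar) C(0), via eigendecomposition of H."""
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t / hbar) * (v.conj().T @ C0))

C0 = np.array([1.0, 0.0], dtype=complex)   # start in state 1
for t in (0.0, 0.7, 1.9, 3.2):
    C = propagate(C0, t)
    P1, P2 = abs(C[0])**2, abs(C[1])**2
    assert np.isclose(P1, np.cos(A * t / hbar)**2)   # P1 = cos^2[(A/hbar)t]
    assert np.isclose(P2, np.sin(A * t / hbar)**2)   # P2 = sin^2[(A/hbar)t]
    assert np.isclose(P1 + P2, 1.0)                  # probabilities add to 1
```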

So here we are going to generalize the analysis: rather than guessing, or assuming we know them (from experiment, for example, or because someone else told us so), we’re going to calculate what those Hamiltonian coefficients are in general.

Now, returning to those spinors, it’s rather daunting to think that such a simple thing as being in the ‘up’ or ‘down’ condition has to be represented by some mathematical object that’s at least as complicated as these tensors. But… Well… I am afraid that’s the way it is. Having said that, Feynman himself seems to consider this to be math for graduate students in physics, rather than for the undergraduate public for which he wrote the course. Hence, while he presented all of the math in the Lecture Volume on electromagnetism, he keeps things as simple as possible in the Volume on quantum mechanics. So… No. We will not be talking about spinors here.

The only reason why I started out with that wonderful animation is to remind you of the weirdness of quantum mechanics as evidenced by, for example, the fact I almost immediately got into trouble when trying to associate base states with two-dimensional geometric vectors when writing my post on the hydrogen molecule, or when thinking about the magnitude of the quantum-mechanical equivalent of the angular momentum of a particle (see my post on spin and angular momentum).

Thinking of that, it’s probably good to remind ourselves of the latter discussion. If we denote the angular momentum as J, then we know that, in classical mechanics, any of J‘s components Jx, Jy or Jz could take on any value from +J to −J and, therefore, the maximum value of any component of J – say Jz – would be equal to J. To be precise, J would be the value of the component of J in the direction of J itself. So, in classical mechanics, we’d write: |J| = +√(J·J) = +√J2 = J, and it would be the maximum value of any component of J.

However, in quantum mechanics, that’s not the case. If the spin number of J is j, then the maximum value of any component of J is equal to j·ħ. In this case, the components will be either +ħ/2 or −ħ/2. So, naturally, one would think that J, i.e. the magnitude of J, would be equal to J = |J| = +√(J·J) = +√J2 = j·ħ = ħ/2. But that’s not the case: J = |J| ≠ j·ħ = ħ/2. To calculate the magnitude, we need to calculate J2 = Jx2 + Jy2 + Jz2. So the idea is to measure these repeatedly and use the expected value for Jx2, Jy2 and Jz2 in the formula. Now, that’s pretty simple: we know that Jx, Jy or Jz are equal to either +ħ/2 or −ħ/2, and, in the absence of a field (i.e. in free space), there’s no preference, so both values are equally likely. To make a long story short, the expected value of Jx2, Jy2 and Jz2 is equal to (1/2)·(ħ/2)2 + (1/2)·(−ħ/2)2 = ħ2/4, and J2 = 3·ħ2/4 = j(j+1)·ħ2, with j = 1/2. So J = |J| = +√J2 = √(3·ħ2/4) = √3·(ħ/2) ≈ 0.866·ħ. Now that’s a huge difference as compared to j·ħ = ħ/2.
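The same √3·(ħ/2) result pops out of the matrix formalism: with Jk = (ħ/2)·σk, the operator J2 = Jx2 + Jy2 + Jz2 is j(j+1)·ħ2 times the identity. A quick check:

```python
import numpy as np

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# spin-1/2 angular momentum components: J_k = (hbar/2) * sigma_k
Jx, Jy, Jz = (hbar / 2 * s for s in (sx, sy, sz))

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz   # J^2 = Jx^2 + Jy^2 + Jz^2
j = 0.5

# J^2 = j(j+1)*hbar^2 times the identity, so |J| = sqrt(3)*(hbar/2) ~ 0.866*hbar,
# while the maximum component in any one direction is only hbar/2
assert np.allclose(J2, j * (j + 1) * hbar**2 * np.eye(2))
assert np.isclose(np.sqrt(j * (j + 1)) * hbar, np.sqrt(3) * hbar / 2)
```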

What we’re saying here is that the magnitude of the angular momentum is √3 ≈ 1.7 times the maximum value of the angular momentum in any direction. How is that possible? Thinking classically, this is nonsensical. However, we need to stop thinking classically here: it means that, for atomic or sub-atomic particles, the angular momentum is never completely in one direction. This implies we need to revise our classical idea of an oriented (electric or magnetic) moment: to put it simply, we find it’s never in one direction only! Alternatively, we might want to re-visit our concept of direction itself, but then we do not want to go there: we continue to say we’re measuring this or that quantity in this or that direction. Of course we do! What’s the alternative? There’s none. You may think we didn’t use the proper definition of the magnitude of a quantity when calculating J as √3·(ħ/2), but… Well… You’ll find yourself alone with that opinion. :-)

This weird thing really comes with the experimental fact that, if you measure the angular momentum along any axis, you’ll find it is always an integer or half-integer times ħ. Always! So it comes with the experimental fact that these quantities are discrete: they come in units of ħ, the quantum of action, which also explains why we have the 1/ħ factor in all coefficients in the coefficient matrix for our set of differential equations. The Hamiltonian coefficients represent energies indeed, and the 1/ħ factor converts those energies into (angular) frequencies.

Of course, now you’ll wonder: why the −i? I wish I could give you a simple answer here, like: “The −i factor corresponds to a rotation by −π/2, and that’s the angle we use to go from our ‘up’ and ‘down’ base states to the ‘Uno‘ and ‘Duo‘ (I and II) base states.” :-) Unfortunately, this easy answer isn’t the answer. :-/ I need to refer you to my post on the Hamiltonian: the true answer is that it’s got to do with the i in the e−(i/ħ)·(E·t − p·x) function: the E, i.e. the energy, is real – most of the time, at least :-) – but the wavefunction is what it is: a complex exponential. So… Well…

Frankly, that’s more than enough as an introduction. You may want to think about the imaginary momentum of virtual particles here – i.e. ‘particles’ that are being exchanged as part of a ‘state switch’ –  but then we’d be babbling for hours! So let’s just do what we wanted to do here, and that is to find the Hamiltonian for a spin one-half particle in general, so that’s usually in some field, rather than in free space. :-)

So here we go. Finally!  :-)

The Hamiltonian of a spin one-half particle in a magnetic field

We’ve actually done some really advanced stuff already. For example, when discussing the ammonia maser, we agreed on the following Hamiltonian in order to make sense of what happens inside of the maser’s resonant cavity:

states

State 1 was the state with the ‘upper’ energy E0 + με, as the energy that’s associated with the electric dipole moment of the ammonia molecule was added to the (average) energy of the system (i.e. E0). State 2 was the state with the ‘lower’ energy level E0 − με, implying the electric dipole moment is opposite to that of state 1. The field could be dynamic or static, i.e. varying in time, or not, but it was the same Hamiltonian. Of course, solving the differential equations with non-constant Hamiltonian coefficients was much more difficult, but we did it.

We also have a “flip-flop amplitude” – I am using Feynman’s term for it :-) – in that Hamiltonian above. So that’s an amplitude for the system to go from one state to another in the absence of an electric field. For our ammonia molecule, and our hydrogen molecule too, it was associated with the energy that’s needed to tunnel through a potential barrier and, as we explained in our post on virtual particles, that’s usually associated with a negative value for the energy or, what amounts to the same, with a purely imaginary momentum, so that’s why we write minus A in the matrix. However, don’t rack your brain over this as it is a bit of a convention, really: putting +A would just result in a phase difference for the amplitudes, but it would give us the same probabilities. If it helps you, you may also like to think of our nitrogen atom (or our electron when we were talking about the hydrogen system) as borrowing some energy from the system so as to be able to tunnel through and, hence, temporarily reducing the energy of the system by an amount that’s equal to A. In any case… We need to move on.

As for these probabilities, we could see – after solving the whole thing, of course (and that was very complicated indeed) – that they go up and down just like in that graph above. The only difference was that we were talking induced transitions here, so the frequency of the transitions depended on μ·ε0 – i.e. on the strength of the field and on the magnitude of the dipole moment itself – rather than on A. In fact, to be precise, we found that the ratio between the average periods was equal to:

Tinduced/Tspontaneous = [(π·ħ)/(2μ·ε0)]/[(π·ħ)/(2A)] = A/(μ·ε0)
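For what it’s worth, that ratio is easy to check numerically. The sketch below uses the average-period formula T = π·ħ/(2E) from the text; the values for A and μ·ε0 are purely illustrative assumptions, not numbers from the Lectures:

```python
import math

# Quick numerical check of T_induced/T_spontaneous = A/(mu*eps0).
# The average period of a transition driven by an energy splitting 2E is
# T = pi*hbar/(2E). The values of A and mu_eps0 below are illustrative
# assumptions, not numbers from the text.
hbar = 1.054571817e-34   # J*s
eV = 1.602176634e-19     # J
A = 0.5e-4 * eV          # flip-flop amplitude (assumed)
mu_eps0 = 0.5e-6 * eV    # mu*eps0 for the induced transition (assumed)

T_spontaneous = math.pi * hbar / (2 * A)
T_induced = math.pi * hbar / (2 * mu_eps0)

print(T_induced / T_spontaneous)   # same number as the ratio A/(mu*eps0)
print(A / mu_eps0)
```

The π·ħ factors cancel, so the ratio of periods is just the inverse ratio of the energy splittings.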

But… Well… I need to move on. I just wanted to present the general philosophy behind these things. For a simple electron which, as you know, is either in an ‘up’ or a ‘down’ state – vis-à-vis a certain direction, of course – the Hamiltonian will be very simple. As usual, we’ll assume that direction is the z-direction. Of course, this ‘z-direction’ is just a short-hand for our reference frame: we decide to measure something in this or that direction, and we call that direction the z-direction.

Fine. Next. As our z-direction is currently our reference direction, we assume it’s the direction of some magnetic field, which we’ll write as B. So the components of B in the x– and y-direction are zero: all of the field is in the z-direction, so B = Bz. [Note that the magnetic field is not some quantum-mechanical quantity, and so we can have all of the magnitude in one direction. It’s just a classical thing.]

Fine. Next. The spin or the angular momentum of our electron is, of course, associated with some magnetic dipole moment, which we’ll write as μ. [And, yes, sometimes we use this symbol for an electric dipole moment and, at other times, for a magnetic dipole moment, like here. I can’t help that. You don’t want a zillion different symbols anyway.] Hence, just like we had two energy levels E0 ± με, we’ll now have two energy levels E0 ± μBz. We’ll just shift the energy scale so E0 = 0, as per our convention. [Feynman glosses over it, but this is a bit of a tricky point, really. Usually, one includes the rest mass, or rest energy, in the E in the argument of the wavefunction, but here we’re equating m0·c2 with zero. Tough! However, you can think of this re-definition of the zero energy point as a phase shift in all wavefunctions, so it shouldn’t matter when taking the absolute square or looking at interference. Still… Think about it.]

Fine. Next. Well… We’ve got two energy levels, −μBz and +μBz, but no A to put in our Hamiltonian, so the following Hamiltonian may or may not make sense:

electron

Hmm… Why is there no flip-flop amplitude? Well… You tell me. Why would we have one? It’s not like the ammonia or hydrogen molecule here, so… Well… Where’s the potential barrier? Of course, you’ll now say that we can imagine it takes some energy to change the spin of an electron, like we were doing with those induced transitions. But… Yes and no. We’ve been selecting particles using our Stern-Gerlach apparatus, or that state selector for our maser, but were we actually flip-flopping things? The changing electric field in our resonant cavity changes the transition frequency but, when everything is said and done, the transition itself has to do with that A. You’ll object again: a pure stationary state? So the electron is either ‘up’ or ‘down’, and it stays like that forever? Really?

Well… I am afraid I have to cut you off, because otherwise we’ll never get to the end. Stop being so critical. :-) Well… No. You should be critical. However, you’re right in saying that, when everything is said and done, these are all hypotheses that may or may not make sense. However, Feynman is also right when he says that, ultimately, the proof of the pudding is in the eating: at the end of this long, winding story, we’ll get some solutions that can be tested in experiment: they should give predictions, or probabilities rather, that agree with experiment. As Feynman writes, the objective is to find the “equations of motion for the spin states” of an electron in a magnetic field: “We guess at them by making some physical argument, but the real test of any Hamiltonian is that it should give predictions in agreement with experiment. According to any tests that have been made, these equations are right. In fact, although we made our arguments only for constant fields, the Hamiltonian we have written is also right for magnetic fields which vary with time.”

So let’s get on with it: let’s assume the Hamiltonian above is the one we should use for a magnetic field in the z-direction, and that we have those pure stationary states with the energies they have, i.e. −μBz and +μBz. One minor technical point, perhaps: you may wonder why we write what we write and do not switch −μBz and +μBz in the Hamiltonian—so as to reflect these ‘upper’ and ‘lower’ energies in those other Hamiltonians. The answer is: it’s just convention. We choose state 1 to be the ‘up’ state, so its spin is ‘up’, but the magnetic moment is opposite to the spin, so the ‘up’ state has the minus sign. Full stop. Onwards!

We’re now going to assume our B field is not in the z-direction. Hence, its Bx and By components are not zero. What we want to see now is what the Hamiltonian looks like. [Yes. Sorry for regularly reminding you of what it is that we are trying to do.] Here you need to be creative. Whatever the direction of the field, we need to be consistent. If that Hamiltonian makes sense when the field is in the z-direction – i.e. if it gives us two pure stationary states with the energies they have – then it’s rather obvious that, if the field is in some other direction, we should still be able to find two stationary states with exactly the same energy levels. As Feynman puts it: “We could have chosen our z-axis in its direction, and we would have found two stationary states with the energies ±μB. Just choosing our axes in a different direction doesn’t change the physics. Our description of the stationary states will be different, but their energies will still be ±μB.” Right. And because the magnetic field is a classical quantity, the relevant magnitude is just the square root of the sum of the squares of its components, so we write:

formula 1

So we have the energies now, but we want the Hamiltonian coefficients. Here we need to work backwards. The general solution for any system with constant Hamiltonian coefficients always involves two stationary states with energy levels, which we denoted as EI and EII. Let me remind you of the formula for them:

energies

[If you want to double-check and see how we get those, it’s probably best to check it in the original text, i.e. Feynman’s Lecture on the Ammonia Maser, Section 2.]
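If you’d rather double-check numerically, the sketch below compares that closed-form formula for EI and EII with a direct diagonalization of the Hamiltonian matrix. The coefficients are arbitrary sample values; the only constraint is H21 = H12*, i.e. a Hermitian Hamiltonian:

```python
import numpy as np

# Check the closed-form two-state energies
#   E_I,II = (H11 + H22)/2 ± sqrt( ((H11 - H22)/2)^2 + |H12|^2 )
# against a direct diagonalization. Arbitrary sample coefficients,
# with H21 = H12* so the Hamiltonian is Hermitian.
H11, H22 = 2.0, -1.0
H12 = 0.5 - 0.3j

H = np.array([[H11, H12],
              [np.conj(H12), H22]])

avg = (H11 + H22) / 2
root = np.sqrt(((H11 - H22) / 2) ** 2 + abs(H12) ** 2)
E_I, E_II = avg + root, avg - root   # E_I is the upper level

# eigvalsh returns the eigenvalues of a Hermitian matrix in ascending
# order, so we expect [E_II, E_I]:
print(np.linalg.eigvalsh(H))
print(E_II, E_I)
```

The two printed lines should match: the closed-form expression is nothing but the eigenvalue formula for a 2×2 Hermitian matrix.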

So how do we connect the two sets of equations? How do we get the Hij coefficients out of these square roots and all of that? [Again, I am just reminding you of what it is that we are trying to do.] We’ve got two equations and four coefficients, so… Well… There are some rules we can apply. For example, we know that any Hij coefficient must equal Hji*, i.e. the complex conjugate of Hji. [For i = j, that just means the diagonal coefficients are real numbers.] But… Hey! We can already see that H11 must be equal to minus H22. Just compare the two sets. That comes out as a condition, clearly. Now that simplifies our square roots above significantly. Also noting that the absolute square of a complex number is equal to the product of the number with its complex conjugate, the two equations above imply the following:

formula 2

Let’s see what this means if we apply this to our ‘special’ direction once more, so let’s assume the field is in the z-direction once again. Perhaps we can get some more ‘conditions’ out of that. If the field is in the z-direction itself, the equation above reduces to:

formula 3

That makes it rather obvious that, in this special case at least, |H12|2 = 0. You’ll say: that’s nothing new, because we had those zeroes in that Hamiltonian already. Well… Yes and no! Here we need to introduce another constraint. I’ll let Feynman explain it: “We are going to make an assumption that there is a kind of superposition principle for the terms of the Hamiltonian. More specifically, we want to assume that if two magnetic fields are superposed, the terms in the Hamiltonian simply add—if we know the Hij for a pure Bz and we know the Hij for a pure Bx, then the Hij for both Bz and Bx together is simply the sum. This is certainly true if we consider only fields in the z-direction—if we double Bz, then all the Hij are doubled. So let’s assume that H is linear in the field B.”

Now, the assumption that H12 must be some linear combination of Bx, By and Bz, combined with the |H12|2 = 0 condition when all of the magnitude of the field is in the z-direction, tells us that H12 has no term in Bz. It may have – in fact, it probably should have – terms in Bx and By, but not in Bz. That does take us a step further.

Next assumption. The next assumption is that, regardless of the direction of the field, H11 and H22 don’t change: they remain what they are, so we write: H11 = −μBz and H22 = +μBz. Now, you may think that’s no big deal, because we defined the 1 and 2 states in terms of our z-direction, but… Well… We did so assuming all of the magnitude was in the z-direction.

You’ll say: so what? Now we’ve got some field in the x– and y-directions, so that shouldn’t impact the amplitude to be in a state that’s associated with the z-direction. Well… I should say two things here. First, we’re not talking about the amplitude to be in state 1 or state 2: these amplitudes are those C1 and C2 functions that we can find once we’ve got those Hamiltonian coefficients. Second, you’d surely expect that some field in the x– and y-directions should have some impact on those C1 and C2 functions. Of course!

In any case, I’ll let you do some more thinking about this assumption. Again, we need to move on, so let’s just go along with it. At this point, Feynman‘s had enough of the assumptions, and so he boldly proposes a solution, which incorporates that the H11 = −μBz and H22 = +μBz assumption. Let me quote him:

Formula 4

Of course, this leaves us gasping for breath. A simple guess? One can plug it in, of course, and see it makes sense—rather quickly, really. But… Nothing linear is going to come out of that expression for |H12|2, right? We’d have to take a square root to find that H12 = ±μ·(Bx2 + By2)1/2. Well… No. We’re working in the complex space here, remember? So we can use complex solutions. Feynman notes the same and immediately proposes the right solution:

final 1
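We can also verify the guess numerically: diagonalizing the full Hamiltonian, with H11 = −μBz, H22 = +μBz and H12 = −μ·(Bx − i·By), should give the energies ±μB whatever the direction of the field. A minimal sketch, with arbitrary sample values for μ and the field components:

```python
import numpy as np

# Feynman's guessed Hamiltonian for a spin-1/2 particle in a field:
#   H11 = -mu*Bz, H22 = +mu*Bz,
#   H12 = -mu*(Bx - i*By), H21 = -mu*(Bx + i*By).
# mu and the field components are arbitrary sample values.
mu = 1.7
Bx, By, Bz = 0.3, -0.8, 0.5

H = np.array([
    [-mu * Bz,             -mu * (Bx - 1j * By)],
    [-mu * (Bx + 1j * By),  mu * Bz            ],
])

B = np.sqrt(Bx**2 + By**2 + Bz**2)   # classical field magnitude
eigs = np.linalg.eigvalsh(H)         # ascending order
print(eigs)
print([-mu * B, mu * B])             # the stationary energies are ±mu*B
```

The trace of H is zero and its determinant is −μ2·B2, so the eigenvalues come out as ±μB for any field direction, exactly as the consistency argument requires.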

To make a long story short, we get what we wanted, i.e. those “equations of motion for the spin states” of an electron in a magnetic field. I’ll let Feynman summarize the results:

Final 3

It’s truly a Great Result, especially because, as Feynman notes, (almost) any problem about two-state systems can be solved by making a mathematical analog to the system of the spinning electron. We’ll illustrate that as we move ahead. For now, however, I think we’ve had enough, haven’t we? :-)

We’ve made a big leap here, and perhaps we should re-visit some of the assumptions and conventions—later, that is. As for now, let’s try to work with it. As mentioned above, Feynman shied away from the grand mathematical approach to it. Indeed, the whole argument might have been somewhat fuzzy, but at least we got a good feel for the solution. In my next post, I’ll abstract away from it, as Feynman does in his next Lecture, where he introduces the so-called Pauli spin matrices, which are like Lego building blocks for all of the matrix algebra which – I must assume you sort of sense that’s coming, no? :-) – we’ll need to master so as to understand what’s going on.

So… That’s it for today. I hope you understood “what it is that we’re trying to do”, and that you’ll have some fun working on it on your own now. :-)

The quantum-mechanical view of chemical binding

In my post on the hydrogen atom, I explained its stability using the following graph out of Feynman’s Lectures. It shows an equilibrium state for the H2 molecule with an energy level that’s about 5 eV (ΔE/E0 ≈ −0.375 ⇔ ΔE ≈ −0.375×13.6 eV ≈ −5.1 eV) lower than the energy of two separate hydrogen atoms (2H).

graph3

The lower energy level is denoted by EII and refers to a state, which we also denoted as state II, that’s associated with some kind of molecular orbital for both electrons, resulting in more (shared) space where the two electrons can have a low potential energy. As Feynman puts it, “the electron can spread out—lowering its kinetic energy—without increasing its potential energy.” The electrons have opposite spin. They have to have opposite spin, because our formula for state II would violate the Pauli exclusion principle if they did not. Indeed, if the two electrons did not have opposite spin, the formula for our CII amplitude would violate the rule that, when identical fermions are involved and we’re adding amplitudes, we should do so with a negative sign for the exchanged case. So our CII = 〈II|ψ〉 = (1/√2)[〈1|ψ〉 + 〈2|ψ〉] = (1/√2)[〈2|ψ〉 + 〈1|ψ〉] would be problematic: when we switch the electrons, we should get a minus sign.

We do get that minus sign for state I:

〈I|ψ〉 = (1/√2)[〈1|ψ〉 − 〈2|ψ〉] = −(1/√2)[〈2|ψ〉 − 〈1|ψ〉]

To make a long story short, state II is the equilibrium state, and so that’s an H2 molecule with two electrons with opposite spins that share a molecular orbital, rather than moving around in some atomic orbital.

The question is: can we generalize this analysis? I mean… We’ve spent a lot of time making sure we understand this one particular case. What’s the use of such analysis if we can’t generalize? We shouldn’t be doing nitty-gritty all of the time, should we?

You’re right. The thing is: we can easily generalize. We’ve learned to play with those Hamiltonian matrices now, and so let’s do the ‘same-same but different’ with other systems. Let’s replace one of the two protons in the two-protons-one-electron model by a much heavier ion—say, lithium. [The example is not random, of course: lithium is very easily ionized, which is why it’s used in batteries.]

We need to think of the Hamiltonian again, right? We’re now in a situation in which the Hamiltonian coefficients H11 and H22 are likely to be different. We’ve lost the symmetry: if the electron is near the lithium ion, then we can’t assume the system has the same energy as when it’s near the hydrogen nucleus (in case you forgot: that’s what the proton is, really). Because we’ve lost the symmetry, we no longer have these ‘easy’ Hamiltonians:

equi

We need to look at the original formulas for EI and EII once again. Let me write them down:

energies

Of course, H12 and H21 will still be equal to A, and so… Well… Let me simplify my life and copy Feynman:

Feynman

There are several things to note here. First, note the approximation to the square root:

square root sum of squares

We’re only allowed to do that if y is much smaller than x, with x = 1 and y = 2A/(H11 − H22). In fact, the condition is usually written as 0 ≤ y/x ≤ 1/2, so we take the A/(H11 − H22) ratio as (much) less than one, indeed. So the second term in the energy difference EI − EII = (H11 − H22) + 2A·A/(H11 − H22) is surely smaller than 2A. But there’s the first term, of course: H11 − H22. However, that’s there anyway, and so we should actually be looking at the additional separation. That’s where the A comes in, and so that’s the second term: 2A·A/(H11 − H22) which, as mentioned, is smaller than 2A by the factor A/(H11 − H22), which is less than one. So Feynman’s conclusion is correct: “The binding of unsymmetric diatomic molecules is generally very weak.”
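You can get a feel for the quality of that approximation numerically. The sketch below takes x = 1 and lets y play the role of 2A/(H11 − H22), as in the text:

```python
import math

# How good is sqrt(x^2 + y^2) ≈ x + y^2/(2x)? Take x = 1 and let y play
# the role of 2A/(H11 - H22).
x = 1.0
for y in (0.1, 0.25, 0.5):
    exact = math.sqrt(x**2 + y**2)
    approx = x + y**2 / (2 * x)
    print(y, exact, approx, abs(exact - approx))
# Even at the edge of the usual condition, y/x = 1/2, the error stays
# below one percent of the exact value.
```

So the approximation is already quite decent at y/x = 1/2 and gets rapidly better as the ratio shrinks.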

However, that’s not the case when binding two ions by two electrons – the so-called two-electron bond, which is the most common valence bond. Let me simplify my life once more and quote once again:

Feynman 2

What he’s saying is that H11 and H22 are one and the same once again, and equal to E0, because both ions can take one electron, so there’s no difference between state 1 and state 2 in that regard. So the energy difference is 2A once more and we’ve got good covalent binding. [Note that the term ‘covalent’ just refers to sharing electrons: the valence electrons are shared, so to speak.]

Now, this result is, of course, subject to the hypothesis that the electron is more or less equally attracted to both ions, which may or may not be the case. If it’s not the case, we’ll have what’s referred to as ‘ionic’ binding. Again, I’ll let Feynman explain it, as it’s pretty straightforward and so it’s no use to try to write another summary of this:

Feynman3

So… That’s it, really. As Feynman puts it, by way of conclusion: “You can now begin to see how it is that many of the facts of chemistry can be most clearly understood in terms of a quantum mechanical description.”

Most clearly? Well… I guess that, at the very least, we’re “beginning to see” something here, aren’t we? :-)

An introduction to virtual particles

In one of my posts on the rules of quantum math, I introduced the propagator function, which gives us the amplitude for a particle to go from one place to another. It looks like this:

propagator

The r1 and r2 vectors are, obviously, position vectors describing (1) where the particle is right now, so the initial state is written as |r1〉, and (2) where it might go, so the final state is |r2〉. Now we can combine this with the analysis in my previous post to think about what might happen when an electron sort of ‘jumps’ from one state to another. It’s a rather funny analysis, but it will give you some feel for what these so-called ‘virtual’ particles might represent.

Let’s first look at the shape of that function. The e(i/ħ)·(p∙r12) function in the numerator is now familiar to you. Note the r12 in the argument, i.e. the vector pointing from r1 to r2. The p∙r12 dot product equals |p|∙|r12|·cosθ = p∙r12·cosθ, with θ the angle between p and r12. If the angle is zero, then cosθ is equal to 1. If the angle is π/2, then it’s 0, and the function reduces to 1/r12. So the angle θ, through the cosθ factor, sort of scales the spatial frequency. Let me try to give you some idea of what this looks like by assuming the angle between p and r12 is zero, so we’re looking at the space in the direction of the momentum only and |p|∙|r12|·cosθ = p∙r12. Now, we can look at the p/ħ factor as a scaling factor, and measure the distance x in units defined by that scale, so we write: x = p∙r12/ħ. The whole function, including the denominator, then reduces to (ħ/p)·ei·x/x = (ħ/p)·cos(x)/x + i·(ħ/p)·sin(x)/x, and we just need to square this to get the probability. All of the graphs are drawn hereunder: I’ll let you analyze them. [Note that the graphs do not include the ħ/p factor, which you may look at as yet another scaling factor.] You’ll see – I hope! – that it all makes perfect sense: the probability quickly drops off with distance, both in the positive as well as in the negative x-direction, while going to infinity when very near, i.e. for very small x. [Note that the absolute square, using cos(x)/x and sin(x)/x, yields the same graph as squaring 1/x—obviously!]

graph
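If you want to check that last remark yourself: in the scaled variable x = p∙r12/ħ, the absolute square of ei·x/x is just 1/x2, because the oscillation sits entirely in the phase. A minimal sketch:

```python
import math

# The propagator in the scaled variable x = p*r12/hbar (dropping the
# overall hbar/p factor, as in the graphs) is
#   e^(i*x)/x = cos(x)/x + i*sin(x)/x.
# Its absolute square is cos^2(x)/x^2 + sin^2(x)/x^2 = 1/x^2.
def prob(x):
    re = math.cos(x) / x
    im = math.sin(x) / x
    return re * re + im * im

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, prob(x), 1 / x**2)   # the last two columns agree
```

So the probability falls off as 1/x2 on either side of the origin, and blows up for very small x, just as the graphs show.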

Now, this propagator function is not dependent on time: it’s only the momentum that enters the argument. Of course, we assume p to be some positive real number. Of course?

This is where Feynman starts an interesting conversation. In the previous post, we studied a model in which we had two protons, and one electron jumping from one to another, as shown below.

hydrogen

This model told us the equilibrium state is a stable ionized hydrogen molecule (so that’s an H2+ molecule), with an interproton distance that’s equal to 1 Ångstrom – so that’s like twice the size of a hydrogen atom (which we simply write as H) – and an energy that’s 2.72 eV less than the energy of a hydrogen atom and a proton (so that’s not an H2+ molecule but a system consisting of a separate hydrogen atom and a proton). The why and how of that equilibrium state is illustrated below. [For more details, see my previous post.]

graph2

Now, the model implies there is a sort of attractive force pulling the two protons together even when the protons are at larger distances than 1 Å. One can see that from the graph indeed. Now, we would not associate any molecular orbital with those distances, as the system is, quite simply, not a molecule but a separate hydrogen atom and a proton. Nevertheless, the amplitude A is non-zero, and so we have an electron jumping back and forth.

We know how that works from our post on tunneling: particles can cross an energy barrier and tunnel through. One of the weird things we had to consider when a particle crosses such a potential barrier is that the momentum factor p in its wavefunction is some pure imaginary number, which we wrote as p = i·p’. We then re-wrote that wavefunction as a·e−iθ = a·e−i[(E/ħ)∙t − (i·p’/ħ)·x] = a·e−i(E/ħ)∙t·ei²·p’·x/ħ = a·e−i(E/ħ)∙t·e−p’·x/ħ. The e−p’·x/ħ factor in this formula is a real-valued exponential function that sort of ‘kills’ our wavefunction as we move across the potential barrier, which is what is illustrated below: if the distance is too large, then the amplitude for tunneling goes to zero.

potential barrier

From a mathematical point of view, the analysis of our electron jumping back and forth is very similar. However, there are differences too. We can’t really analyze this in terms of a potential barrier in space. The barrier is the potential energy of the electron itself: it’s happy when it’s bound, because its energy then reduces the total energy of the hydrogen atomic system by an amount equal to the ionization energy – the Rydberg energy, as it’s called – which is equal to no less than 13.6 eV (which, as mentioned, is pretty big at the atomic level). Well… We can take that propagator function (1/r)·e(i/ħ)·p∙r (note the argument has no minus sign: it can be quite tricky!), and just fill in the value for the momentum of the electron.

Huh? What momentum? It’s got no momentum to spare. On the contrary, it wants to stay with the proton, so it has no energy whatsoever to escape. Well… Not in quantum mechanics. In quantum mechanics, it can convert its potential energy into kinetic energy, and so get away from its proton.

But there is no release of energy! The energy is negative!

Exactly! You’re right. So we boldly write: K.E. = m·v2/2 = p2/(2m) = −13.6 eV, and, because we’re working with complex numbers, we can take the square root of a negative number, using the definition of the imaginary unit: i = √(−1). So we get a purely imaginary value for the momentum p, which we write as:

p = ±i·√(2m·EH)

The sign of p is chosen so it makes sense: our electron should go in one direction only. It’s going to be the plus sign. [If you’d take the negative root, you’d get a nonsensical propagator function.] To make a long story short, our propagator function becomes:

(1/r)·e(i/ħ)·i·√(2m·EH)∙r = (1/r)·ei²·√(2m·EH)∙r/ħ = (1/r)·e−√(2m·EH)∙r/ħ

Of course, from a mathematical point of view, that’s the same function as e−p’·x/ħ: it’s a real-valued exponential function that quickly dies. But it’s an amplitude alright, and it’s just like an amplitude for tunneling indeed: if the distance is too large, then the amplitude goes to zero. The final cherry on the cake, of course, is to write:

A ∼ (1/r)·e−√(2m·EH)∙r/ħ

Well… No. It gets better. This amplitude is an amplitude for an electron bond between the two protons which, as we know, lowers the energy of the system. By how much? Well… By A itself. Now, we know that work or energy is an integral or antiderivative of force over distance, so force is the derivative of energy with respect to distance. So we can just take the derivative of the expression above to get the force. I’ll leave that to you as an exercise: don’t forget to use the product rule! :-)
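Plugging in actual numbers is instructive, by the way. With the electron mass and EH = 13.6 eV, the 1/e decay length ħ/√(2m·EH) of that exponential comes out at about 0.53 Å, i.e. the Bohr radius, which is a nice sanity check:

```python
import math

# Numerical sketch of A ~ (1/r)*exp(-sqrt(2*m*E_H)*r/hbar). The 1/e decay
# length of the exponential is L = hbar/sqrt(2*m*E_H). With the electron
# mass and E_H = 13.6 eV, L comes out at about 0.53 Angstrom, i.e. the
# Bohr radius.
hbar = 1.054571817e-34        # J*s
m_e = 9.1093837015e-31        # kg
E_H = 13.6 * 1.602176634e-19  # Rydberg energy, in J

L = hbar / math.sqrt(2 * m_e * E_H)
print(L * 1e10)               # in Angstrom: ~0.53

def amplitude(r):
    """Relative amplitude (1/r)*exp(-r/L), up to a constant factor."""
    return (1 / r) * math.exp(-r / L)

# Going from r = L to r = 2L, the amplitude drops by a factor (1/2)*e^(-1):
print(amplitude(2 * L) / amplitude(L))
```

So the amplitude for the electron jump dies off over atomic distances, which is exactly why the attraction between the hydrogen atom and the proton only matters when they’re close.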

So are we done? No. First, we didn’t talk about virtual particles yet! Let me do that now. However, first note that we should add one more effect to our two-proton-one-electron system: the coulomb field (ε) caused by the bare proton will cause the hydrogen atom to take on an induced electric dipole moment (μ), so we should integrate that into our energy equation. Feynman shows how, but I won’t bother you with that here. Let’s talk about those virtual particles. What are they?

Well… There’s various definitions, but Feynman’s definition is this one:

“There is an exchange of a virtual electron when–as here–the electron has to jump across a space where it would have a negative energy. More specifically, a ‘virtual exchange’ means that the phenomenon involves a quantum-mechanical interference between an exchanged state and a non-exchanged state.”

You’ll say: what’s virtual about it? The electron does go from one place to another, doesn’t it? Well… Yes and no. We can’t observe it while it’s supposed to be doing that. Our analysis just tells us it seems to be useful to distinguish two different states and analyze all in terms of those differential equations. Who knows what’s really going on? What’s actual and what’s virtual? We just have some ‘model’ here: a model for the interaction between a hydrogen atom and a proton. It explains the attraction between them in terms of a sort of continuous exchange of an electron, but is it real?

The point is: in physics, it’s assumed that the coulomb interaction, i.e. all of electrostatics really, comes from the exchange of virtual photons: one electron, or proton, emits a photon, and then another absorbs it in the reverse of the same reaction. Furthermore, it is assumed that the amplitude for doing so is like the formula we found for the amplitude to exchange a virtual electron, except that the rest mass of a photon is zero, so the formula reduces to 1/r. Such a simple relationship makes sense, of course, because that’s how the electrostatic potential varies in space!

That, in essence, is all there is to the quantum-mechanical theory of electromagnetism, which Feynman refers to as the ‘particle point of view’.

So… Yes. It’s that simple. Yes! For a change!  :-)

Post scriptum: Feynman’s Lecture on virtual particles is actually focused on a model for the nuclear forces. Most of it is devoted to a discussion of the virtual ‘pion’, or π-meson, which was, when Feynman wrote his Lectures, supposed to mediate the force between two nucleons. However, this theory is clearly outdated: nuclear forces are now described by quantum chromodynamics. So I’ll just skip the Yukawa theory here. It’s actually kinda strange that his theory, which he proposed in 1935, remained the theory for nuclear forces for such a long time. Still, it’s surely all very interesting from a historical point of view.

The hydrogen molecule as a two-state system

My posts on the state transitions of an ammonia molecule weren’t easy, were they? So let’s try another two-state system. The illustration below shows an ionized hydrogen molecule in two possible states which, as usual, we’ll denote as |1〉 and |2〉. An ionized hydrogen molecule is an H2 molecule that has lost an electron, leaving two protons with one electron only, so we denote it as H2+. The difference between the two states is obvious: the electron is either with the first proton or with the second.

hydrogen

It’s an example taken from Feynman’s Lecture on two-state systems. The illustration itself raises a lot of questions, of course. The most obvious question is: how do we know which proton is which? We’re talking identical particles, right? Right. We should think of the proton spins! Protons are fermions and, hence, they can’t be in the same state, so they must have opposite spins. Of course, now you’ll say: they’re not in the same state because they’re at different locations. Well… Now you’ve answered your own question. :-) However you want to look at this, the point is: we can distinguish both protons. Having said that, the reflections above raise other questions: what reference frame are we using? The answer is: it’s the reference frame of the system. We can mirror or rotate this image however we want – as I am doing below – but state |1〉 is state |1〉, and state |2〉 is state |2〉.

flip

The other obvious question is more difficult. If you’ve read anything at all about quantum mechanics, you’ll ask: what about the in-between states? The electron is actually being shared by the two protons, isn’t it? That’s what chemical bonds are all about, no? Molecular orbitals rather than atomic orbitals, right? Right. That’s actually what this post is all about. We know that, in quantum mechanics, the actual state – or what we think is the actual state – is always expressed as some linear combination of so-called base states. We wrote:

|ψ〉 = |1〉C1 + |2〉C2 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉

In terms of representing what’s actually going on, we only have these probability functions: they say that, if we would take a measurement, the probability of finding the electron near the first or the second proton varies as shown below:

graph

If the |1〉 and |2〉 states were actually representing two dual physical realities, the actual state of our H2+ molecule would be represented by some square or pulse wave, as illustrated below. [We should be calling it a square function, but that term has been reserved for a function like y = x2.]

Dutycycle

Of course, the symmetry of the situation implies that the average pulse duration τ would be one-half of the (average) period T, so we’d be talking a square wave indeed. Both functions qualify as probability functions: the system is always in one state or the other, and the probabilities add up to one. But you’ll agree we prefer the smooth squared sine and cosine functions. To be precise, these smooth functions are:

  • P1(t) = |C1(t)|2 = cos2[(A/ħ)·t]
  • P2(t) = |C2(t)|2 = sin2[(A/ħ)·t]
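A quick numerical sanity check on those two functions, using units where A/ħ = 1 (purely for convenience, not a physical claim):

```python
import math

# Sanity check on P1(t) = cos^2[(A/hbar)*t] and P2(t) = sin^2[(A/hbar)*t],
# in units where A/hbar = 1.
def P1(t): return math.cos(t) ** 2
def P2(t): return math.sin(t) ** 2

for t in (0.0, 0.7, math.pi / 2, 2.3):
    print(t, P1(t), P2(t), P1(t) + P2(t))   # last column is always ~1

# After t = pi (i.e. t = pi*hbar/A in real units), the system is certain
# to be found in state 1 again:
print(P1(math.pi))
```

So the probabilities always add up to one, and the system cycles between the two states with an average period π·ħ/A, matching that square-wave picture in the mean.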

So now we only need to explain A here (you know ħ already). But… Well… Why would we actually prefer those smooth functions? An irregular pulse function would seem to do a better job when it comes to modeling reality, wouldn’t it? The electron should be either here, or there. No?

Well… No. At least that’s what I am slowly starting to understand. These pure base states |1〉 and |2〉 are real and not real at the same time. They’re real, because it’s what we’ll get when we verify, or measure, the state: our measurement will tell us that it’s here or there. There’s no in-between. [I still need to study weak measurement theory.] But then they are not real, because our molecule will never ever be in those two states, except for those ephemeral moments when (A/ħ)·t = n·π/2 (n = 0, 1, 2,…). So we’re really modeling uncertainty here and, while I am still exploring what that actually means, you should think of the electron as being everywhere really, but with an unequal density in space—sort of. :-)

Now, we’ve learned we can describe the state of a system in terms of an alternative set of base states. We wrote: |ψ〉 = |I〉CI + |II〉CII = |I〉〈I|ψ〉 + |II〉〈II|ψ〉, with the CI, II and C1, 2 coefficients being related to each other in exactly the same way as the associated base states, i.e. through a transformation matrix, which we summarized as:

transformation

To be specific, the two sets of base states we’ve been working with so far were related as follows:

transformation

So we’d write: |ψ〉 = |I〉CI + |II〉CII = |I〉〈I|ψ〉 + |II〉〈II|ψ〉 = |1〉C1 + |2〉C2 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉, and the CI, II and C1, 2 coefficients would be related in exactly the same way as the base states:

Eq 4

[In case you’d want to review how that works, see my post on the Hamiltonian and base states.] Now, we cautioned that it’s difficult to try to interpret such base transformations – often referred to as a change in the representation, or a different projection – geometrically. Indeed, we acknowledged that (base) states are very much like (base) vectors – from a mathematical point of view, that is – but, at the same time, we said that they are ‘objects’, really: elements in some Hilbert space, which means you can do the operations we’re doing here, i.e. adding and multiplying. Something like |I〉CI doesn’t mean all that much: CI is a complex number – and so we can work with numbers, of course, because we can visualize them – but |I〉 is a ‘base state’, and so what’s the meaning of that, and what’s the meaning of the |I〉CI or CI|I〉 product? I could babble about that, but it’s no use: a base state is a base state. It’s some state of the system that makes sense to us. In fact, it may be some state that does not make sense to us – in terms of the physics of the situation, that is – but then there will always be some mathematical sense to it because of that transformation matrix, which establishes a one-to-one relationship between all sets of base states.

You'll say: why don't you try to give it some kind of geometrical or whatever meaning? OK. Let's try. State |1〉 is obviously like minus state |2〉 in space, so let's see what happens when we equate |1〉 to 1 on the real axis, and |2〉 to −1. Geometrically, that corresponds to the (1, 0) and (−1, 0) points on the unit circle. So let's multiply those points with (1/√2, −1/√2) and (1/√2, 1/√2) respectively. What do we get? Well… What product should we take? The dot product, the cross product, or the ordinary complex-number product? The dot product gives us a number, so we don't want that. [If we're going to represent base states by vectors, we want all states to be vectors.] A cross product will give us a vector that's orthogonal to both vectors, so it's a vector in 'outer space', so to say. We don't want that, I must assume, and so we're left with the complex-number product, which projects our (1, 0) and (−1, 0) vectors into (1/√2, −1/√2)·(1, 0) = (1/√2 − i/√2)·(1 + 0·i) = 1/√2 − i/√2 = (1/√2, −1/√2) and (1/√2, 1/√2)·(−1, 0) = (1/√2 + i/√2)·(−1 + 0·i) = −1/√2 − i/√2 = (−1/√2, −1/√2) respectively.

transformation 2

What does this say? Nothing. Stuff like this only causes confusion. We had two base states that were '180 degrees' apart, and now our new base states are only '90 degrees' apart. If we'd 'transform' the two new base states once more, they collapse into each other: (1/√2, −1/√2)·(1/√2, −1/√2) = (1/√2 − i/√2)² = −i = (0, −1) = (1/√2, 1/√2)·(−1/√2, −1/√2). This is nonsense, of course. It's got nothing to do with the angle we picked for our original set of base states: we could have separated our original set of base states by 90 degrees, or 45 degrees. It doesn't matter. It's the transformation itself: multiplying by (+1/√2, −1/√2) amounts to a clockwise rotation by 45 degrees, while multiplying by (+1/√2, +1/√2) amounts to the same, but counter-clockwise. So… Well… We should not try to think of our base vectors in any geometric way, because it just doesn't make any sense. So let's not waste time on this: the 'base states' are a bit of a mystery, in the sense that they just are what they are: we can't 'reduce' them any further, and trying to interpret them geometrically leads to contradictions, as evidenced by what I tried to do above. Base states are 'vectors' in a so-called Hilbert space, and… Well… That's not your standard vector space. [If you think you can make more sense of it, please do let me know!]

Onwards!

Let’s take our transformation again:

  • |I〉 = (1/√2)|1〉 − (1/√2)|2〉 = (1/√2)[|1〉 − |2〉]
  • |II〉 = (1/√2)|1〉 + (1/√2)|2〉 = (1/√2)[|1〉 + |2〉]
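While the kets themselves resist geometric interpretation, the coefficient algebra is perfectly well-behaved and easy to check. Here's a sketch of my own, representing each state by its column of coefficients in the {|1〉, |2〉} representation (a bookkeeping device, not a geometric claim):

```python
import numpy as np

# Coefficient columns of |I> and |II> in the {|1>, |2>} representation.
ket_I = np.array([1, -1]) / np.sqrt(2)   # |I>  = (1/sqrt2)(|1> - |2>)
ket_II = np.array([1, 1]) / np.sqrt(2)   # |II> = (1/sqrt2)(|1> + |2>)

# 'Orthonormality': <I|I> = <II|II> = 1 and <I|II> = 0.
print(ket_I @ ket_I)
print(ket_II @ ket_II)
print(ket_I @ ket_II)

# Stacking the two rows gives the transformation matrix, which is unitary
# (here: real orthogonal), so total probability is conserved.
U = np.array([ket_I, ket_II])
assert np.allclose(U @ U.T, np.eye(2))
```

So the 1/√2 coefficients are exactly what's needed to make the new set of base states 'orthonormal' again, whatever geometric picture we do or don't attach to that word.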

Again, trying to geometrically interpret what it means to add or subtract two base states is not what you should be trying to do. In a way, the two expressions above only make sense when combining them with a final state, so when writing:

  • 〈ψ|I〉 = (1/√2)〈ψ|1〉 − (1/√2)〈ψ|2〉 = (1/√2)[〈ψ|1〉 − 〈ψ|2〉]
  • 〈ψ|II〉 = (1/√2)〈ψ|1〉 + (1/√2)〈ψ|2〉 = (1/√2)[〈ψ|1〉 + 〈ψ|2〉]

Taking the complex conjugate of this gives us the amplitudes of the system to be in state I or state II:

  • 〈I|ψ〉 = 〈ψ|I〉* = (1/√2)[〈ψ|1〉* − 〈ψ|2〉*] = (1/√2)[〈1|ψ〉 − 〈2|ψ〉]
  • 〈II|ψ〉 = 〈ψ|II〉* = (1/√2)[〈ψ|1〉* + 〈ψ|2〉*] = (1/√2)[〈1|ψ〉 + 〈2|ψ〉]

That still doesn't tell us much, because we'd need to know the 〈1|ψ〉 and 〈2|ψ〉 functions, i.e. the amplitudes of the system to be in state 1 and state 2 respectively. What we do know, however, is that the 〈I|ψ〉 and 〈II|ψ〉 functions have a rather special form. We wrote:

  • CI = 〈 I | ψ 〉 = e−(i/ħ)·EI·t
  • CII = 〈 II | ψ 〉 = e−(i/ħ)·EII·t

These are amplitudes of so-called stationary states: the associated probabilities – i.e. the absolute square of these functions – do not vary in time: |e−(i/ħ)·EI·t|² = |e−(i/ħ)·EII·t|² = 1. For our ionized hydrogen molecule, it means that, if it would happen to be in state I, it will stay in state I, and the same goes for state II. We write:

〈 I | I 〉 = 〈 II | II 〉 = 1 and 〈 I | II 〉 = 〈 II | I 〉 = 0

That's actually just the so-called 'orthogonality' condition for base states, which we wrote as 〈i|j〉 = 〈j|i〉 = δij, but, in light of the fact that we can't interpret base states geometrically, perhaps we shouldn't be calling it that. The point is: we had those differential equations describing a system like this. If the amplitude to go from state 1 to state 2 is equal to some real- or complex-valued constant A, then we can write those equations either in terms of C1 and C2, or in terms of CI and CII:

set of equations

So the two sets of equations are equivalent. However, what we want to do here is look at it in terms of CI and CII. Let's first analyze those two energy levels EI = E0 + A and EII = E0 − A. Feynman graphs them as follows:

raph1raph2

Let me explain. In the first graph, we have EI = E0 + A and EII = E0 − A, and they are depicted as being symmetric, with A depending on the distance between the two protons. As for E0, that's the energy of a hydrogen atom, i.e. a proton with a bound electron, and a separate proton. So it's the energy of a system consisting of a hydrogen atom and a proton, which is obviously not the same as that of an ionized hydrogen molecule. The concept of a molecule assumes the protons are close together. We assume E0 = 0 if the interproton distance is relatively large but, of course, as the protons come closer, we shouldn't forget the repulsive electrostatic force between the two protons, which is represented by the dashed line in the first graph. Indeed, unlike the electron and the proton, the two protons will want to push apart, rather than pull together, so the potential energy of the system increases as the interproton distance decreases. So E0 is not constant either: it also depends on the interproton distance. But let's forget about E0 for a while. Let's look at the two curves for A now.

A is not varying in time, but its value does depend on the distance between the two protons. We’ll use this in a moment to calculate the approximate size of the hydrogen nucleus in a calculation that closely resembles Feynman’s calculation of the size of a hydrogen atom. That A should be some function of the interproton distance makes sense: the transition probability, and therefore A, will exponentially decrease with distance. There are a few things to reflect on here:

1. In the mentioned calculation of the size of a hydrogen atom, which is based on the Uncertainty Principle, Feynman shows that the energy of the system decreases when an electron is bound to the proton. The reasoning is that, if the potential energy of the electron is zero when it is not bound, then its potential energy will be negative when bound. Think of it: the electron and the proton attract each other, so it requires force to separate them, and force over a distance is energy. From our course in electromagnetics, we know that the potential energy, when bound, should be equal to −e²/a0, with e² the squared charge of the electron divided by 4πε0, and a0 the so-called Bohr radius of the atom. Of course, the electron also has kinetic energy. It can't just sit on top of the proton because that would violate the Uncertainty Principle: we'd know where it was. Combining the two, Feynman calculates both a0 as well as the so-called Rydberg energy, i.e. the total energy of the bound electron, which is equal to −13.6 eV. So, yes, the bound state has less energy, so the electron will want to be bound, i.e. it will want to be close to one of the two protons.

2. Now, while that's not what's depicted above, it's clear the magnitude of A will be related to that Rydberg energy which − please note − is quite high. Just compare it with the A for the ammonia molecule, which we calculated in our post on the maser: we found an A of about 0.5×10−4 eV there, so that's like 270,000 times less! Nevertheless, the possibility is there, and what happens when the electron flips over amounts to tunneling: it penetrates and crosses a potential barrier. We did a post on that, and so you may want to look at how that works. One of the weird things we had to consider when a particle crosses such a potential barrier is that the momentum factor p in its wavefunction was some pure imaginary number, which we wrote as p = i·p'. We then re-wrote that wavefunction as a·e−iθ = a·e−i[(E/ħ)∙t − (i·p'/ħ)·x] = a·e−i(E/ħ)∙t·e(i²·p'·x/ħ) = a·e−i(E/ħ)∙t·e−p'·x/ħ. Now, it's easy to see that the e−p'·x/ħ factor in this formula is a real-valued exponential function, with the same shape as the general e−x function, which I depict below.
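The 'decay' of that real-valued exponential is easy to tabulate. A sketch in natural units (ħ = 1), with a made-up value for p':

```python
import numpy as np

# Natural units: hbar = 1. p_prime is a (made-up) magnitude of the imaginary
# momentum inside the barrier, so the amplitude goes like exp(-p_prime * x).
p_prime = 1.0
x = np.array([0.0, 1.0, 2.0, 5.0, 10.0])  # distance into the barrier, in units of hbar/p'

amplitude = np.exp(-p_prime * x)
print(amplitude)

# The factor is monotonically decreasing: each extra unit of distance
# multiplies the amplitude by the same factor e^(-1), i.e. roughly 0.37.
assert np.all(np.diff(amplitude) < 0)
assert amplitude[-1] < 1e-4  # essentially zero beyond ~10 units of hbar/p'
```

So the amplitude isn't cut off sharply: it just becomes negligibly small once the barrier is more than a few ħ/p' wide, which is exactly what the graphs of A versus the interproton distance show.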

graph

This e−p’·x/ħ basically ‘kills’ our wavefunction as we move in the positive x-direction, across the potential barrier, which is what is illustrated below: if the distance is too large, then the amplitude for tunneling goes to zero.

potential barrier

So that's what's depicted in those graphs of EI = E0 + A and EII = E0 − A: A goes to zero when the interproton distance becomes too large. We also recognize the exponential shape for A in those graphs, which can also be derived from the same tunneling story.

Now we can calculate E0 + A and E0 − A taking into account that both terms vary with the interproton distance as explained, and that gives us the final curves on the right-hand side, which tell us that the equilibrium configuration of the ionized hydrogen molecule is state II, i.e. the lowest energy state, and the interproton distance there is approximately one Ångstrom, i.e. 1×10−10 m. [You can compare this with the Bohr radius, which we calculated as a0 = 0.528×10−10 m, so that all makes sense.] Also note the energy scale: ΔE is the excess energy over a proton plus a hydrogen atom, so that's the energy when the two protons are far apart. Because it's the excess energy, we have a zero point. That zero point is, obviously, the energy of a hydrogen atom and a proton. [Read this carefully, and please refer back to what I wrote above. The energy of a system consisting of a hydrogen atom and a proton is not the same as that of an ionized hydrogen molecule: the concept of a molecule assumes the protons are close together.] We then re-scale by dividing by the Rydberg energy ER = 13.6 eV. So ΔE/ER ≈ −0.2 ⇔ ΔE ≈ −0.2×13.6 = −2.72 eV. That basically says that the energy of our ionized hydrogen molecule is 2.72 eV lower than the energy of a hydrogen atom and a proton.

Why is it lower? We need to think about our model of the hydrogen atom once more: the energy of the electron was minimized by striking a balance between (1) being close to the proton and, therefore, having a low potential energy (or a low coulomb energy, as Feynman calls it) and (2) being further away from the proton and, therefore, lowering its kinetic energy according to the Uncertainty Principle ΔxΔp ≥ ħ/2, which Feynman boldly re-wrote as p = ħ/a0. Now, a molecular orbital, i.e. the electron being around two protons, results in “more space where the electron can have a low potential energy”, as Feynman puts it, so “the electron can spread out—lowering its kinetic energy—without increasing its potential energy.”
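Feynman's balancing act can be reproduced with a quick back-of-the-envelope calculation: take E(a) = ħ²/(2m·a²) − e²/a (the kinetic term from p = ħ/a, the coulomb term −e²/a, with e² shorthand for the squared charge divided by 4πε0), minimize over a, and you recover the Bohr radius and the Rydberg energy. A sketch:

```python
import numpy as np

# CODATA-style constants (SI units)
hbar = 1.054571817e-34     # J*s
m_e = 9.1093837015e-31     # kg
q_e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12    # F/m
e2 = q_e**2 / (4 * np.pi * eps0)   # Feynman's e^2: squared charge / (4*pi*eps0)

# E(a) = hbar^2/(2*m*a^2) - e2/a is minimized at a0 = hbar^2/(m*e2),
# and the minimum energy is E(a0) = -e2/(2*a0), i.e. minus the Rydberg energy.
a0 = hbar**2 / (m_e * e2)
E_R = e2 / (2 * a0)

print(f"a0  = {a0:.3e} m")          # about 0.53 Angstrom, the Bohr radius
print(f"E_R = {E_R / q_e:.2f} eV")  # about 13.6 eV
```

That's the sense in which "more space to spread out" lowers the energy: a larger effective a lowers the ħ²/(2m·a²) kinetic term, and a molecular orbital provides that larger space without a potential-energy penalty.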

The whole discussion here actually amounts to an explanation for the mechanism by which an electron shared by two protons provides, in effect, an attractive force between the two protons. So we’ve got a single electron actually holding two protons together, which chemists refer to as a “one-electron bond.”

So… Well… That explains why the energy EII = E0 − A is what it is, so that's smaller than E0 indeed, with the difference equal to the value of A at an interproton distance of 1 Å. But how should we interpret EI = E0 + A? What is that higher energy level? What does it mean?

That's a rather tricky question. There's no easy interpretation here, like we had for our ammonia molecule: the higher energy level had an obvious physical meaning in an electromagnetic field, as it was related to the electric dipole moment of the molecule. That's not the case here: we have no magnetic or electric dipole moment here. So, once again, what's the physical meaning of EI = E0 + A? Let me quote Feynman's enigmatic answer here:

“Notice that this state is the difference of the states |1⟩ and |2⟩. Because of the symmetry of |1⟩ and |2⟩, the difference must have zero amplitude to find the electron half-way between the two protons. This means that the electron is somewhat more confined, which leads to a larger energy.”

What does he mean with that? It seems he’s actually trying to do what I said we shouldn’t try to do, and that is to interpret what adding versus subtracting states actually means. But let’s give it a fair look. We said that the |I〉 = (1/√2)[|1〉 − |2〉] expression didn’t mean much: we should add a final state and write: 〈ψ|I〉 = (1/√2)[〈ψ|1〉 − 〈ψ|2〉], which is equivalent to 〈I|ψ〉 = (1/√2)[〈1|ψ〉 − 〈2|ψ〉]. That still doesn’t tell us anything: we’re still adding amplitudes, and so we should allow for interference, and saying that |1⟩ and |2⟩ are symmetric simply means that 〈1|ψ〉 − 〈2|ψ〉 = 〈2|ψ〉 − 〈1|ψ〉 ⇔ 2·〈1|ψ〉 = 2·〈2|ψ〉 ⇔ 〈1|ψ〉 = 〈2|ψ〉. Wait a moment! That’s an interesting reflection. Following the same reasoning for |II〉 = (1/√2)[|1〉 + |2〉], we get 〈1|ψ〉 + 〈2|ψ〉 = 〈2|ψ〉 + 〈1|ψ〉 ⇔ … Huh? No, that’s trivial: 0 = 0.

Hmm… What to say? I must admit I don't quite 'get' Feynman here: state I, with energy EI = E0 + A, seems to be both meaningless as well as impossible. The only energy levels that would seem to make sense here are the energy of a hydrogen atom and a proton and the (lower) energy of an ionized hydrogen molecule, which you get when you bring a hydrogen atom and a proton together. :-)

But let's move on to the next thing: we added only one electron to the two protons, and that was it, and so we had an ionized hydrogen molecule, i.e. an H2+ molecule. Why don't we do a full-blown H2 molecule now? Two protons. Two electrons. It's easy to do. The set of base states is quite predictable, and illustrated below: electron a can be with either one of the two protons, and the same goes for electron b.

base

We can then go through the same analysis as for the ion: the molecule's stability is shown in the graph below, which is very similar to the graph of the energy levels of the ionized hydrogen molecule, i.e. the H2+ molecule. The shape is the same, but the values are different: the equilibrium state is at an interproton distance of 0.74 Å, and the energy of the equilibrium state is some 5 eV (ΔE/ER ≈ −0.375) lower than the energy of two separate hydrogen atoms.

raph3

The explanation for the lower energy is the same: state II is associated with some kind of molecular orbital for both electrons, resulting in “more space where the electron can have a low potential energy”, as Feynman puts it, so “the electron can spread out—lowering its kinetic energy—without increasing its potential energy.”

However, there’s one extra thing here: the two electrons must have opposite spins. That’s the only way to actually distinguish the two electrons. But there is more to it: if the two electrons would not have opposite spin, we’d violate Fermi’s rule: when identical fermions are involved, and we’re adding amplitudes, then we should do so with a negative sign for the exchanged case. So our transformation would be problematic:

〈II|ψ〉 = (1/√2)[〈1|ψ〉 + 〈2|ψ〉] = (1/√2)[〈2|ψ〉 + 〈1|ψ〉]

When we switch the electrons, we should get a minus sign. The weird thing is: we do get that minus sign for state I:

〈I|ψ〉 = (1/√2)[〈1|ψ〉 − 〈2|ψ〉] = −(1/√2)[〈2|ψ〉 − 〈1|ψ〉]

So… Well… We've got a bit of an answer there as to what the 'other' (upper) energy level EI = E0 + A actually means, in physical terms, that is. It models two hydrogen atoms coming together with parallel electron spins. Applying Fermi's rule – i.e. the exclusion principle, basically – we find that state II is, quite simply, not allowed for parallel electron spins: state I is, and it's the only one. There's something deep here, so let me quote the Master himself on it:

“We find that the lowest energy state—the only bound state—of the H2 molecule has the two electrons with spins opposite. The total spin angular momentum of the electrons is zero. On the other hand, two nearby hydrogen atoms with spins parallel—and so with a total angular momentum ħ—must be in a higher (unbound) energy state; the atoms repel each other. There is an interesting correlation between the spins and the energies. It gives another illustration of something we mentioned before, which is that there appears to be an “interaction” energy between two spins because the case of parallel spins has a higher energy than the opposite case. In a certain sense you could say that the spins try to reach an antiparallel condition and, in doing so, have the potential to liberate energy—not because there is a large magnetic force, but because of the exclusion principle.”

You should read this a couple of times. It's an important principle. We'll discuss it again in the next posts, when we'll be talking about spin in much more detail once again. :-) The bottom line is: if the electron spins are parallel, then the electrons won't 'share' any space at all and, hence, they are really much more confined in space, and the associated energy level is, therefore, much higher.

Post scriptum: I said we'd 'calculate' the equilibrium interproton distance. We didn't do that. We just read it off the graphs, which are based on the results of a 'detailed quantum-mechanical calculation'—or that's what Feynman claims, at least. I am not sure if they correspond to experimentally determined values, or what calculations are behind them, exactly. Feynman notes that “this approximate treatment of the H2 molecule as a two-state system breaks down pretty badly once the protons get as close together as they are at the minimum in the curve and, therefore, it will not give a good value for the actual binding energy. For small separations, the energies of the two “states” we imagined are not really equal to E0, and a more refined quantum mechanical treatment is needed.”

So… Well… That says it all, I guess.

Two-state systems: the math versus the physics, and vice versa.

I think my previous post, on the math behind the maser, was a bit of a brain racker. However, the results were important and, hence, it is useful to generalize them so we can apply them to other two-state systems. :-) Indeed, we'll use the very same two-state framework to analyze things like the stability of neutral and ionized hydrogen molecules and the binding of diatomic molecules in general – and lots of other stuff that can be analyzed as a two-state system. However, let's first have a look at the math once more. More importantly, let's analyze the physics behind it.

At the center of our little Universe here :-) is the fact that the dynamics of a two-state system are described by a set of two differential equations, which we wrote as: System

It's obvious these two equations are usually not easy to solve: the C1 and C2 functions are complex-valued amplitudes which vary not only in time but also in space but, in fact, that's not the problem. The issue is that the Hamiltonian coefficients Hij may also vary in space and in time, and so that's what makes things quite nightmarish to solve. [Note that, while H11 and H22 represent some energy level and, hence, are usually real numbers, H12 and H21 may be complex-valued. However, in the cases we'll be analyzing, they will be real numbers too, as they will usually also represent some energy. Having noted that, being real- or complex-valued is not the problem: we can work with complex numbers and, as you can see from the matrix equation above, the i/ħ factor in front of our differential equations results in a complex-valued coefficient matrix anyway.]
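For constant Hamiltonian coefficients, though, the system is easy to solve numerically, and doing so confirms the flip-flopping we found for these molecules. A sketch of my own (natural units ħ = 1, and assuming Feynman's sign convention H11 = H22 = E0, H12 = H21 = −A):

```python
import numpy as np

# Two-state system i*dC/dt = H C (hbar = 1) with constant coefficients.
E0, A = 1.0, 0.25   # made-up values
H = np.array([[E0, -A],
              [-A, E0]])

# Exact propagator via the eigendecomposition of the (Hermitian) Hamiltonian:
# C(t) = U * exp(-i*E*t) * U^dagger * C(0)
E, U = np.linalg.eigh(H)

def evolve(C0, t):
    return U @ (np.exp(-1j * E * t) * (U.conj().T @ C0))

C0 = np.array([1.0, 0.0])  # start in state 1

# After (A/hbar)*t = pi/2 the system has fully flipped into state 2...
print(np.abs(evolve(C0, np.pi / (2 * A))) ** 2)
# ...and after (A/hbar)*t = pi it is back in state 1.
print(np.abs(evolve(C0, np.pi / A)) ** 2)
```

The eigendecomposition used here is, of course, exactly the eigenvalue machinery discussed next.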

So… Yes. It's those non-constant Hamiltonian coefficients that caused us so much trouble when trying to analyze how a maser works or, more generally, how induced transitions work. [The same equations apply to blackbody radiation indeed, or to other phenomena involving induced transitions.] In any case, we won't do that again – not now, at least – and so we'll just go back to analyzing 'simple' two-state systems, i.e. systems with constant Hamiltonian coefficients.

Now, even for such simple systems, Feynman made life super-easy for us – too easy, I think – because he didn't use the general mathematical approach to solve the issue at hand. That more general approach would be based on a technique you may or may not remember from your high school or university days: it's based on finding the so-called eigenvalues and eigenvectors of the coefficient matrix. I won't say too much about that, as there's excellent online coverage of that, but… Well… We do need to relate the two approaches, and so that's where math and physics meet. So let's have a look at it all.

If we would write the first-order time derivatives of those C1 and C2 functions as C1′ and C2′ respectively (so we just put a prime instead of writing dC1/dt and dC2/dt), and we put them in a two-by-one column matrix, which I'll write as C′, and then, likewise, we also put the functions themselves, i.e. C1 and C2, in a column matrix, which I'll write as C, then the system of equations can be written as the following simple expression:

C′ = A·C

One can then show that the general solution will be equal to:

C = a1·eλI·t·vI + a2·eλII·t·vII

The λI and λII in the exponential functions are the eigenvalues of A, so that’s that two-by-two matrix in the equation, i.e. the coefficient matrix with the −(i/ħ)Hij elements. The vI and vII column matrices in the solution are the associated eigenvectors. As for a1 and a2, these are coefficients that depend on the initial conditions of the system as well as, in our case at least, the normalization condition: the probabilities we’ll calculate have to add up to one. So… Well… It all comes with the system, as we’ll see in a moment.

Let’s first look at those eigenvalues. We get them by calculating the determinant of the A−λI matrix, and equating it to zero, so we write det(A−λI) = 0. If A is a two-by-two matrix (which it is for the two-state systems that we are looking at), then we get a quadratic equation, and its two solutions will be those λI and λII values. The two eigenvalues of our system above can be written as:

λI = −(i/ħ)·EI and λII = −(i/ħ)·EII.

EI and EII are two possible values for the energy of our system, which are referred to as the upper and the lower energy level respectively. We can calculate them as:

energies
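If you don't trust the algebra, numpy will happily confirm it. A sketch with arbitrary (real, made-up) Hamiltonian coefficients, using the standard closed-form solutions of the quadratic, EI, EII = (H11 + H22)/2 ± √[((H11 − H22)/2)² + H12·H21]:

```python
import numpy as np

hbar = 1.0  # natural units
H11, H22, H12, H21 = 2.0, 1.0, -0.5, -0.5   # arbitrary real Hamiltonian coefficients
H = np.array([[H11, H12],
              [H21, H22]])

# Closed-form energy levels from det(A - lambda*I) = 0:
mean = (H11 + H22) / 2
root = np.sqrt(((H11 - H22) / 2) ** 2 + H12 * H21)
E_I, E_II = mean + root, mean - root

# Eigenvalues of the coefficient matrix A = -(i/hbar)*H are pure imaginary,
# and should equal -(i/hbar)*E_I and -(i/hbar)*E_II:
lams = np.linalg.eigvals(-1j / hbar * H)
print(sorted(lams.imag))   # compare with -E_I/hbar and -E_II/hbar
assert np.allclose(sorted(lams.imag), sorted([-E_I / hbar, -E_II / hbar]))
```

So the λI = −(i/ħ)·EI and λII = −(i/ħ)·EII identification is just the eigenvalue computation in disguise.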

Note that we use the Roman numerals I and II for these two energy levels, rather than the usual Arabic numbers 1 and 2. That’s in line with Feynman’s notation: it relates to a special set of base states that we will introduce shortly. Indeed, plugging them into the a1eλI·t and a2eλII·t expressions gives us a1e−(i/ħ)·EI·t and a2e−(i/ħ)·EII·t and…

Well… It’s time to go back to the physics class now. What are we writing here, really? These two functions are amplitudes for so-called stationary states, i.e. states that are associated with probabilities that do not change in time. Indeed, it’s easy to see that their absolute square is equal to:

  • PI = |a1·e−(i/ħ)·EI·t|² = |a1|²·|e−(i/ħ)·EI·t|² = |a1|²
  • PII = |a2·e−(i/ħ)·EII·t|² = |a2|²·|e−(i/ħ)·EII·t|² = |a2|²

Now, the a1 and a2 coefficients depend on the initial and/or normalization conditions of the system, so let’s leave those out for the moment and write the rather special amplitudes e−(i/ħ)·EI·t and e−(i/ħ)·EII·t as:

  • CI = 〈 I | ψ 〉 = e−(i/ħ)·EI·t
  • CII = 〈 II | ψ 〉 = e−(i/ħ)·EII·t

As you can see, there are two base states that go with these amplitudes, which we denote as state | I 〉 and | II 〉 respectively, so we can write the state vector of our two-state system – like our ammonia molecule, or whatever – as:

| ψ 〉 = | I 〉CI + | II 〉CII = | I 〉〈 I | ψ 〉 + | II 〉〈 II | ψ 〉

In case you forgot, you can apply the magical 1 = ∑ | i 〉〈 i | formula to see this makes sense: | ψ 〉 = ∑ | i 〉〈 i | ψ 〉 = | I 〉〈 I | ψ 〉 + | II 〉〈 II | ψ 〉 = | I 〉CI + | II 〉CII.

Of course, we should also be able to revert back to the base states we started out with so, once we've calculated C1 and C2, we can also write the state of our system in terms of state | 1 〉 and | 2 〉, which are the states as we defined them when we first looked at the problem. :-) In short, once we've got C1 and C2, we can also write:

| ψ 〉 = | 1 〉C1 + | 2 〉C2 = | 1 〉〈 1 | ψ 〉 + | 2 〉〈 2 | ψ 〉

So… Well… I guess you can sort of see how this is coming together. If we substitute what we’ve got so far, we get:

C = a1·CI·vI + a2·CII·vII

Hmm… So what's that? We've seen something like C = a1·CI + a2·CII before – we wrote something like C1 = (a/2)·CI + (b/2)·CII in our previous posts, for example – but what are those eigenvectors vI and vII? Why do we need them?

Well… They just pop up because we're solving the system as mathematicians would do it, i.e. not as Feynman-the-Great-Physicist-and-Teacher-cum-Simplifier does it. :-) From a mathematical point of view, they're the vectors that solve the (A−λI·I)·vI = 0 and (A−λII·I)·vII = 0 equations, so they come with the eigenvalues, and their components will depend on the eigenvalues λI and λII as well as on the Hamiltonian coefficients. [I is the identity matrix in these matrix equations.] In fact, because the eigenvalues are written in terms of the Hamiltonian coefficients, they depend on the Hamiltonian coefficients only, but then it will be convenient to use the EI and EII values as a shorthand.

Of course, one can also look at them as base vectors that uniquely specify the solution C as a linear combination of vI and vII. Indeed, just ask your math teacher, or google, and you’ll find that eigenvectors can serve as a set of base vectors themselves. In fact, the transformations you need to do to relate them to the so-called natural basis are the ones you’d do when diagonalizing the coefficient matrix A, which you did when solving systems of equations back in high school or whatever you were doing at university. But then you probably forgot, right? :-) Well… It’s all rather advanced mathematical stuff, and so let’s cut some corners here. :-)

We know, from the physics of the situations, that the C1 and C2 functions and the CI and CII functions are related in the same way as the associated base states. To be precise, we wrote:

eq 1

This two-by-two matrix here is the transformation matrix for a rotation of a state filtering apparatus about the y-axis, over an angle equal to α, when only two states are involved. You've seen it before, but we wrote it differently:

transformation

In fact, we can be more precise: the angle that we chose was equal to minus 90 degrees. Indeed, we wrote our transformation as:

Eq 4

[Check the values against α = −π/2.] However, let's keep our analysis somewhat more general for the moment, so as to see if we really need to specify that angle. After all, we're looking for a general solution here, so… Well… Remembering the definition of the inverse of a matrix (and the fact that cos²α + sin²α = 1), we can write:

Eq 3

Now, if we write the components of vI and vII as vI1 and vI2, and vII1 and vII2 respectively, then the C = a1·CI·vI + a2·CII·vII expression is equivalent to:

  • C1 = a1·vI1·CI + a2·vII1·CII
  • C2 = a1·vI2·CI + a2·vII2·CII

Hence, a1·vI1 = a2·vII2 = cos(α/2) and a2·vII1 = −a1·vI2 = sin(α/2). What can we do with this? Can we solve this? Not really: we’ve got two equations and four variables. So we need to look at the normalization and starting conditions now. For example, we can choose our t = 0 point such that our two-state system is in state 1, or in state I. And then we know it will not be in state 2, or state II. In short, we can impose conditions like:

|C1(0)|² = 1 = |a1·vI1·CI(0) + a2·vII1·CII(0)|² and |C2(0)|² = 0 = |a1·vI2·CI(0) + a2·vII2·CII(0)|²

However, as Feynman puts it: “These conditions do not uniquely specify the coefficients. They are still undetermined by an arbitrary phase.”

Hmm… He means the α, of course. So… What to do? Well… It’s simple. What he’s saying here is that we do need to specify that transformation angle. Just look at it: the a1·vI1 = a2·vII2 = cos(α/2) and a2·vII1 = −a1·vI2 = sin(α/2) conditions only make sense when we equate α with −π/2, so we can write:

  • a1·vI1 = a2·vII2 = cos(−π/4) = 1/√2
  • a2·vII1 = −a1·vI2 = sin(−π/4) = –1/√2

It's only then that we get a unique ratio for a1/a2 = vI1/vII2 = −vII1/vI2. [In case you think there are two angles in the circle for which the cosine equals minus the sine – or, what amounts to the same, for which the sine equals minus the cosine – then… Well… You're right, but we've got α divided by two in the argument. So if α/2 is equal to the 'other' angle, i.e. 3π/4, then α itself will be equal to 6π/4 = 3π/2. And so that's the same −π/2 angle as above: 3π/2 − 2π = −π/2, indeed. So… Yes. It all makes sense.]
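To see the −π/2 angle at work, here's a small sketch. I'm picking one common convention for where the half-angle sines sit (the sign layout varies across texts, so that placement is an assumption), and checking that α = −π/2 reproduces the familiar 1/√2 transformation:

```python
import numpy as np

def M(alpha):
    """Rotation-type transformation matrix with half-angle entries
    (one common sign convention; conventions differ across texts)."""
    c, s = np.cos(alpha / 2), np.sin(alpha / 2)
    return np.array([[c, s],
                     [-s, c]])

# With alpha = -pi/2, the half-angle is -pi/4, so c = 1/sqrt(2), s = -1/sqrt(2):
# the rows become (1, -1)/sqrt(2) and (1, 1)/sqrt(2).
T = M(-np.pi / 2)
print(T * np.sqrt(2))   # rows proportional to (1, -1) and (1, 1)

# Applied to any pair of coefficients, it gives the familiar combinations:
C1, C2 = 0.6, 0.8
C_I, C_II = T @ np.array([C1, C2])
assert np.isclose(C_I, (C1 - C2) / np.sqrt(2))
assert np.isclose(C_II, (C1 + C2) / np.sqrt(2))

# And it's unitary, so total probability is preserved:
assert np.isclose(C_I**2 + C_II**2, C1**2 + C2**2)
```

So fixing α = −π/2 really is the 'common-sense' condition: it's the choice that reproduces the CI = (C1 − C2)/√2 and CII = (C1 + C2)/√2 transformation we've been using all along.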

What are we doing here? Well… We're sort of imposing a 'common-sense' condition here. Think of it: if the vI1/vII2 and −vII1/vI2 ratios would be different, we'd have a huge problem, because we'd have two different values for the a1/a2 ratio! And… Well… That just doesn't make sense. The system must come with some specific value for a1 and a2. We can't just invent two 'new' ones!

So… Well… We are alright now, and we can analyze whatever two-state system we want now. One example was our ammonia molecule in an electric field, for which we found that the following systems of equations were fully equivalent:

Set

So, the upshot is that you should always remember that everything we’re doing is subject to the condition that the ‘1’ and ‘2’ base states and the ‘I’ and ‘II’ base states (Feynman suggests to read I and II as ‘Eins’ and ‘Zwei’ – or try ‘Uno‘ and ‘Duo‘ instead :-) – so as to make a difference with ‘one’ and ‘two’) are ‘separated’ by an angle of (minus) 90 degrees. [Of course, I am not using the ‘right’ language here, obviously. I should say ‘projected’, or ‘orthogonal’, perhaps, but then that’s hard to say for base states: the [1/√2, 1/√2] and [1/√2, −1/√2] vectors are obviously orthogonal, because their dot product is zero, but, as you know, the base states themselves do not have such geometrical interpretation: they’re just ‘objects’ in what’s referred to as a Hilbert space. But… Well… I shouldn’t dwell on that here.]

So… There we are. We’re all set. Good to go! Please note that, in the absence of an electric field, the two Hamiltonians are even simpler:

equi

In fact, they’ll usually do the trick in what we’re going to deal with now.

[…] So… Well… That's it, really! :-) We're now going to apply all this in the next posts, so as to analyze things like the stability of neutral and ionized hydrogen molecules and the binding of diatomic molecules. More interestingly, we're going to talk about virtual particles. :-)

Addendum: I started writing this post because Feynman actually does give the impression there's some kind of 'doublet' of a1 and a2 coefficients as he starts his chapter on 'other two-state systems'. It's the symbols he's using: 'his' a1 and a2, and the other doublet with the primes, i.e. a1′ and a2′, are the transformation amplitudes, not the coefficients that I am calculating above, and that he was calculating (in the previous chapter) too. So… Well… Again, the only thing you should remember from this post is that 90 degree angle as a sort of physical 'common sense condition' on the system.

Having criticized the Great Teacher for not being consistent in his use of symbols, I should add that the interesting thing is that, while confusing, his summary in that chapter does give us precise formulas for those transformation amplitudes, which he didn't do before. Indeed, if we write them as a, b, c and d respectively (so as to avoid that confusing a1 and a2, and then a1′ and a2′, notation), so if we have:

transformation

then one can show that:

final

That’s, of course, fully consistent with the ratios we introduced above, as well as with the orthogonality condition that comes with those eigenvectors. Indeed, if a/b = −1 and c/d = +1, then a/b = −c/d and, therefore, a·d + b·c = 0. [I’ll leave it to you to compare the coefficients so as to check that’s the orthogonality condition indeed.]
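A trivial numerical sanity check, plugging in the 1/√2 values of the transformation we've been using:

```python
import numpy as np

# The 1/sqrt(2) transformation amplitudes, written as a, b, c, d:
# C_I  = a*C1 + b*C2 = (C1 - C2)/sqrt(2)
# C_II = c*C1 + d*C2 = (C1 + C2)/sqrt(2)
a, b = 1 / np.sqrt(2), -1 / np.sqrt(2)   # so a/b = -1
c, d = 1 / np.sqrt(2), 1 / np.sqrt(2)    # so c/d = +1

assert np.isclose(a / b, -1) and np.isclose(c / d, 1)
assert np.isclose(a * d + b * c, 0)      # the orthogonality condition
print("a*d + b*c =", a * d + b * c)
```

So the a·d + b·c = 0 condition checks out, as it should.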

In short, it all shows everything does come out of the system in a mathematical way too, so the math does match the physics once again—as it should, of course! :-)

The math behind the maser

As I skipped the mathematical arguments in my previous post so as to focus on the essential results only, I thought it would be good to complement that post by looking at the math once again, so as to ensure we understand what it is that we’re doing. So let’s do that now. We start with the easy situation: free space.

The two-state system in free space

We started with an ammonia molecule in free space, i.e. we assumed there were no external force fields, like a gravitational or an electromagnetic force field. Hence, the picture was as simple as the one below: the nitrogen atom could be ‘up’ or ‘down’ with regard to its spin around its axis of symmetry.

Capture

It’s important to note that this ‘up’ or ‘down’ direction is defined in regard to the molecule itself, i.e. not in regard to some external reference frame. In other words, the reference frame is that of the molecule itself. For example, if I flip the illustration above – like below – then we’re still talking the same states, i.e. the molecule is still in state 1 in the image on the left-hand side and it’s still in state 2 in the image on the right-hand side. 

Capture

We then modeled the uncertainty about its state by associating two different energy levels with the molecule: E0 + A and E0 − A. The idea is that the nitrogen atom needs to tunnel through a potential barrier to get to the other side of the plane of the hydrogens, and that requires energy. At the same time, we’ll show the two energy levels are effectively associated with an ‘up’ or ‘down’ direction of the electric dipole moment of the molecule. So that resembles the two spin states of an electron, which we associated with the +ħ/2 and −ħ/2 energies respectively. So if E0 would be zero (we can always take another reference point, remember?), then we’ve got the same thing: two energy levels that are separated by some definite amount: that amount is 2A for the ammonia molecule, and ħ when we’re talking quantum-mechanical spin. I should make a last note here, before I move on: note that these energies only make sense in the presence of some external field, because the + and − signs in the E0 + A and E0 − A and +ħ/2 and −ħ/2 expressions make sense only with regard to some external direction defining what’s ‘up’ and what’s ‘down’ really. But I am getting ahead of myself here. Let’s go back to free space: no external fields, so what’s ‘up’ or ‘down’ is completely random here. :-)

Now, we also know an energy level can be associated with a complex-valued wavefunction, or an amplitude as we call it. To be precise, we can associate it with the generic a·e−(i/ħ)·(E·t − p·x) expression which you know so well by now. Of course, as the reference frame is that of the molecule itself, its momentum is zero, so the p·x term in the a·e−(i/ħ)·(E·t − p·x) expression vanishes and the wavefunction reduces to a·e−i·ω·t = a·e−(i/ħ)·E·t, with ω = E/ħ. In other words, the energy level determines the temporal frequency, or the temporal variation (as opposed to the spatial frequency or variation), of the amplitude.

We then had to find the amplitudes C1(t) = 〈 1 | ψ 〉 and C2(t) =〈 2 | ψ 〉, so that’s the amplitude to be in state 1 or state 2 respectively. In my post on the Hamiltonian, I explained why the dynamics of a situation like this can be represented by the following set of differential equations:

Hamiltonian

As mentioned, the C1 and C2 functions evolve in time, and so we should write them as C1 = C1(t) and C2 = C2(t) respectively. In fact, our Hamiltonian coefficients may also evolve in time, which is why it may be very difficult to solve those differential equations! However, as I’ll show below, one usually assumes they are constant, and then one makes informed guesses about them so as to find a solution that makes sense.

Now, I should remind you here of something you surely know: if C1 and C2 are solutions to this set of differential equations, then the superposition principle tells us that any linear combination a·C1 + b·C2 will also be a solution. So we need one or more extra conditions, usually some starting condition, which we can combine with a normalization condition, so we can get some unique solution that makes sense.

The Hij coefficients are referred to as Hamiltonian coefficients and, as shown in the mentioned post, the H11 and H22 coefficients are related to the amplitude of the molecule staying in state 1 and state 2 respectively, while the H12 and H21 coefficients are related to the amplitude of the molecule going from state 1 to state 2 and vice versa. Because of the perfect symmetry of the situation here, it’s easy to see that H11 should equal H22, and that H12 and H21 should also be equal to each other. Indeed, Nature doesn’t care what we call state 1 or 2 here: as mentioned above, we did not define the ‘up’ and ‘down’ direction with respect to some external direction in space, so the molecule can have any orientation and, hence, switching the i and j indices should not make any difference. So that’s one clue, at least, that we can use to solve those equations: the perfect symmetry of the situation and, hence, the perfect symmetry of the Hamiltonian coefficients—in this case, at least!

The other clue is to think about the solution if we’d not have two states but one state only. In that case, we’d need to solve iħ·[dC1(t)/dt] = H11·C1(t). That’s simple enough, because you’ll remember that the exponential function is its own derivative. To be precise, we write: d(a·eiωt)/dt = a·d(eiωt)/dt = a·iω·eiωt, and please note that a can be any complex number: we’re not necessarily talking a real number here! In fact, we’re likely to talk complex coefficients, and we multiply with some other complex number (iω) anyway here! So if we write iħ·[dC1/dt] = H11·C1 as dC1/dt = −(i/ħ)·H11·C1 (remember: i−1 = 1/i = −i), then it’s easy to see that the C1 = a·e–(i/ħ)·H11·t function is the general solution for this differential equation. Let me write it out for you, just to make sure:

dC1/dt = d[a·e–(i/ħ)H11t]/dt = a·d[e–(i/ħ)H11t]/dt = –a·(i/ħ)·H11·e–(i/ħ)H11t

= –(i/ħ)·H11·a·e–(i/ħ)·H11·t = −(i/ħ)·H11·C1

Of course, that reminds us of our generic a·e−(i/ħ)·E0·t wavefunction: we only need to equate H11 with E0 and we’re done! Hence, in a one-state system, the Hamiltonian coefficient is, quite simply, equal to the energy of the system. In fact, that’s a result that can be generalized, as we’ll see below, and so that’s why Feynman says the Hamiltonian ought to be called the energy matrix.
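You can also check that solution numerically. A quick sketch, in natural units with ħ = 1 and arbitrary, hypothetical values for H11 and a:

```python
import numpy as np

# Check numerically that C1(t) = a·exp(−(i/ħ)·H11·t) satisfies
# iħ·dC1/dt = H11·C1, using ħ = 1 and arbitrary values for H11 and a.
hbar = 1.0
H11 = 2.0           # hypothetical energy (natural units)
a = 0.7 + 0.2j      # hypothetical complex coefficient

def C1(t):
    return a * np.exp(-1j * H11 * t / hbar)

t, dt = 0.8, 1e-6
lhs = 1j * hbar * (C1(t + dt) - C1(t - dt)) / (2 * dt)  # iħ·dC1/dt (central difference)
rhs = H11 * C1(t)
print(abs(lhs - rhs))  # should be ~0
```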

In fact, we may actually have two states that are entirely uncoupled, i.e. a system in which there is no dependence of C1 on C2 and vice versa. In that case, the two equations reduce to:

iħ·[dC1/dt] = H11·C1 and iħ·[dC2/dt] = H22·C2

These do not form a coupled system and, hence, their solutions are independent:

C1(t) = a·e–(i/ħ)·H11·t and C2(t) = b·e–(i/ħ)·H22·t 

The symmetry of the situation suggests we should equate a and b, and then the normalization condition says that the probabilities have to add up to one, so |C1(t)|2 + |C2(t)|2 = 1, so we’ll find that a = b = 1/√2.

OK. That’s simple enough, and this story has become quite long, so we should wrap it up. The two ‘clues’ – about symmetry and about the Hamiltonian coefficients being energy levels – lead Feynman to suggest that the Hamiltonian matrix for this particular case should be equal to:

H-matrix

Why? Well… It’s just one of Feynman’s clever guesses, and it yields probability functions that make sense, i.e. they actually describe something real. That’s all. :-) I am only half-joking, because it’s a trial-and-error process indeed and, as I’ll explain in a separate section in this post, one needs to be aware of the various approximations involved when doing this stuff. So let’s be explicit about the reasoning here:

  1. We know that H11 = H22 = E0 if the two states would be identical. In other words, if we’d have only one state, rather than two – i.e. if H12 and H21 would be zero – then we’d just plug that in. So that’s what Feynman does. So that’s what we do here too! :-)
  2. However, H12 and H21 are not zero, of course, and so we assume there’s some amplitude to go from one position to the other by tunneling through the energy barrier and flipping to the other side. Now, we need to assign some value to that amplitude and so we’ll just assume that the energy that’s needed for the nitrogen atom to tunnel through the energy barrier and flip to the other side is equal to A. So we equate H12 and H21 with −A.

Of course, you’ll wonder: why minus A? Why wouldn’t we try H12 = H21 = A? Well… I could say that a particle usually loses potential energy as it moves from one place to another, but… Well… Think about it. Once it’s through, it’s through, isn’t it? And so then the energy is just E0 again. Indeed, if there’s no external field, the + or − sign is quite arbitrary. So what do we choose? The answer is: when considering our molecule in free space, it doesn’t matter. Using +A or −A yields the same probabilities. Indeed, let me give you the amplitudes we get for H11 = H22 = E0 and H12 = H21 = −A:

  1. C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t + (1/2)·e−(i/ħ)·(E0 + A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
  2. C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t − (1/2)·e−(i/ħ)·(E0 + A)·t = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

[In case you wonder how we go from those exponentials to a simple sine and cosine factor, remember that the sum of complex conjugates, i.e. eiθ + e−iθ, reduces to 2·cosθ, while eiθ − e−iθ reduces to 2·i·sinθ.]
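If you want to see those identities at work with actual numbers, here’s a minimal numerical sketch, with ħ = 1 and arbitrary, hypothetical values for E0 and A:

```python
import numpy as np

# Verify that the sums/differences of the two exponentials reduce to
# the cosine and sine forms quoted above. ħ = 1; E0 and A are
# arbitrary, hypothetical values.
hbar, E0, A = 1.0, 5.0, 0.8
t = np.linspace(0, 10, 101)

C1 = 0.5 * np.exp(-1j*(E0 - A)*t/hbar) + 0.5 * np.exp(-1j*(E0 + A)*t/hbar)
C2 = 0.5 * np.exp(-1j*(E0 - A)*t/hbar) - 0.5 * np.exp(-1j*(E0 + A)*t/hbar)

assert np.allclose(C1, np.exp(-1j*E0*t/hbar) * np.cos(A*t/hbar))
assert np.allclose(C2, 1j * np.exp(-1j*E0*t/hbar) * np.sin(A*t/hbar))
print("identities check out")
```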

Now, it’s easy to see that, if we’d have used +A rather than −A, we would have gotten something very similar:

  • C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 + A)·t + (1/2)·e−(i/ħ)·(E0 − A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
  • C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 + A)·t − (1/2)·e−(i/ħ)·(E0 − A)·t = −i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

So we get a minus sign in front of our C2(t) function, because cos(α) = cos(−α) but sin(−α) = −sin(α). However, the associated probabilities are exactly the same. For both, we get the same P1(t) and P2(t) functions:

  • P1(t) = |C1(t)|2 = cos2[(A/ħ)·t]
  • P2(t) = |C2(t)|2 = sin2[(A/ħ)·t]

[Remember: the absolute square of i and −i is |i|2 = 1 and |−i|2 = (−1)2·|i|2 = 1 respectively, so the i and −i in the two C2(t) formulas disappear.]
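A two-line numerical check, again with ħ = 1 and arbitrary, hypothetical values for E0 and A, confirms that the sign choice drops out of the probabilities:

```python
import numpy as np

# The ±A choice flips the sign of C2(t) but not the probabilities.
# ħ = 1; E0 and A are arbitrary, hypothetical values.
hbar, E0, A = 1.0, 5.0, 0.8
t = np.linspace(0, 10, 101)
C2_minus = 1j * np.exp(-1j*E0*t/hbar) * np.sin(A*t/hbar)   # H12 = H21 = −A
C2_plus = -1j * np.exp(-1j*E0*t/hbar) * np.sin(A*t/hbar)   # H12 = H21 = +A
P2 = np.abs(C2_minus)**2
assert np.allclose(P2, np.abs(C2_plus)**2)    # same probability either way
assert np.allclose(P2, np.sin(A*t/hbar)**2)   # P2(t) = sin²[(A/ħ)·t]
```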

You’ll remember the graph:

graph

Of course, you’ll say: that plus or minus sign in front of C2(t) should matter somehow, doesn’t it? Well… Think about it. Taking the absolute square of some complex number – or some complex function, in this case! – amounts to multiplying it with its complex conjugate. Because the complex conjugate of a product is the product of the complex conjugates, it’s easy to see what happens: the e−(i/ħ)·E0·t factor in C1(t) = e−(i/ħ)·E0·t·cos[(A/ħ)·t] and C2(t) = ±i·e−(i/ħ)·E0·t·sin[(A/ħ)·t] gets multiplied by e+(i/ħ)·E0·t and, hence, doesn’t matter: e−(i/ħ)·E0·t·e+(i/ħ)·E0·t = e0 = 1. The cosine factor in C1(t) = e−(i/ħ)·E0·t·cos[(A/ħ)·t] is real, and so its complex conjugate is the same. Now, the ±i·sin[(A/ħ)·t] factor in C2(t) = ±i·e−(i/ħ)·E0·t·sin[(A/ħ)·t] is a pure imaginary number, and so its complex conjugate is its opposite. For some reason, we’ll find similar solutions for all of the situations we’ll describe below: the factor determining the probability will either be real or, else, a pure imaginary number. Hence, from a math point of view, it really doesn’t matter whether we take +A or −A, or any other real value, for those H12 and H21 coefficients. We just need to be consistent in our choice, and I must assume that, in order to be consistent, Feynman likes to think of our nitrogen atom borrowing some energy from the system and, hence, temporarily reducing its energy by an amount equal to A. If you have a better interpretation, please do let me know! :-)

OK. We’re done with this section… Except… Well… I have to show you how we got those C1(t) and C2(t) functions, no? Let me copy Feynman here:

solution

Note that the ‘trick’ involving the addition and subtraction of the differential equations is a trick we’ll use quite often, so please do have a look at it. As for the value of the a and b coefficients – which, as you can see, we’ve equated to 1 in our solutions for C1(t) and C2(t) – we get those because of the following starting condition: we assume that, at t = 0, the molecule will be in state 1. Hence, we assume C1(0) = 1 and C2(0) = 0. In other words: we assume that we start out on that P1(t) curve in that graph with the probability functions above, so the C1(0) = 1 and C2(0) = 0 starting condition is equivalent to P1(0) = 1 and P2(0) = 0. Plugging that in gives us a/2 + b/2 = 1 and a/2 − b/2 = 0, which is possible only if a = b = 1.

Of course, you’ll say: what if we’d choose to start out with state 2, so our starting condition is P1(0) = 0 and P2(0) = 1? Then a = 1 and b = −1, and we get the solution we got when equating H12 and H21 with +A, rather than with −A. So you can think about that symmetry once again: when we’re in free space, then it’s quite arbitrary what we call ‘up’ or ‘down’.

So… Well… That’s all great. I should, perhaps, just add one more note, and that’s on that A/ħ value. We calculated it in the previous post, because we wanted to actually calculate the period of those P1(t) and P2(t) functions. Because we’re talking the square of a cosine and a sine respectively, the period is equal to π, rather than 2π, so we wrote: (A/ħ)·T = π ⇔ T = π·ħ/A. Now, the separation between the two energy levels E0 + A and E0 − A, so that’s 2A, has been measured as being equal, more or less, to 2A ≈ 10−4 eV.

How does one measure that? As mentioned above, I’ll show you, in a moment, that, when applying some external field, the plus and minus signs do matter, and the separation between those two energy levels E0 + A and E0 − A will effectively represent something physical. More in particular, we’ll have transitions from one energy level to another and that corresponds to electromagnetic radiation being emitted or absorbed, and so there’s a relation between the energy and the frequency of that radiation. To be precise, we can write 2A = h·f0. The frequency of the radiation that’s being absorbed or emitted is 23.79 GHz, which corresponds to microwave radiation with a wavelength of λ = c/f0 = 1.26 cm. Hence, 2·A ≈ 25×109 Hz times 4×10−15 eV·s = 10−4 eV, indeed, and, therefore, we can write: T = π·ħ/A ≈ 3.14 × 6.6×10−16 eV·s divided by 0.5×10−4 eV, so that’s 40×10−12 seconds = 40 picoseconds. That’s 40 trillionths of a second. So that’s very short, and surely much shorter than the time that’s associated with, say, a freely emitting sodium atom, which is of the order of 3.2×10−8 seconds. You may think that makes sense, because the photon energy is so much lower: a sodium light photon is associated with an energy equal to E = h·f = 500×1012 Hz times 4×10−15 eV·s = 2 eV, so that’s 20,000 times 10−4 eV.

There’s a funny thing, however. An oscillation of a frequency of 500 terahertz that lasts 3.2×10−8 seconds is equivalent to 500×1012 Hz times 3.2×10−8 s ≈ 16 million cycles. However, an oscillation of a frequency of 23.79 gigahertz that only lasts 40×10−12 seconds is equivalent to 23.79×109 Hz times 40×10−12 s ≈ 1. One cycle only? We’re surely not talking resonance here!
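The arithmetic in the last two paragraphs is easy to re-do. A quick sketch, using the standard rounded values h ≈ 4.14×10−15 eV·s, ħ ≈ 6.58×10−16 eV·s and c = 3×108 m/s (so the numbers come out marginally different from the rounded ones quoted above):

```python
import math

# Back-of-the-envelope check of the maser numbers: transition frequency,
# wavelength, period of P1/P2, and the two cycle counts.
h = 4.136e-15        # Planck constant, eV·s
hbar = 6.582e-16     # reduced Planck constant, eV·s
c = 3.0e8            # speed of light, m/s

twoA = 1e-4                       # measured level separation 2A, in eV
f0 = twoA / h                     # transition frequency, ≈ 24 GHz
lam = c / f0                      # wavelength, ≈ 1.24 cm
T = math.pi * hbar / (twoA / 2)   # period of P1(t), P2(t), ≈ 41 ps

cycles_ammonia = f0 * 40e-12      # ≈ 1 cycle in 40 ps
cycles_sodium = 500e12 * 3.2e-8   # ≈ 16 million cycles
print(round(f0 / 1e9), round(lam * 100, 2), round(T * 1e12))
```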

So… Well… I am just flagging it here. We’ll have to do some more thinking about that later. [I’ve added an addendum that may or may not help us in this regard. :-)]

The two-state system in a field

As mentioned above, when there is no external force field, the ‘up’ or ‘down’ direction of the nitrogen atom is defined with regard to its spin around its axis of symmetry, so with regard to the molecule itself. However, when we apply an external electromagnetic field, as shown below, we do have some external reference frame.

Now, the external reference frame – i.e. the physics of the situation, really – may make it more convenient to define the whole system using another set of base states, which we’ll refer to as I and II, rather than 1 and 2. Indeed, you’ve seen the picture below: it shows a state selector, or a filter as we called it. In this case, there’s a filtering according to whether our ammonia molecule is in state I or, alternatively, state II. It’s like a Stern-Gerlach apparatus splitting an electron beam according to the spin state of the electrons, which is ‘up’ or ‘down’ too, but in a totally different way than our ammonia molecule. Indeed, the ‘up’ and ‘down’ spin of an electron has to do with its magnetic moment and its angular momentum. However, there are a lot of similarities here, and so you may want to compare the two situations indeed, i.e. the electron beam in an inhomogeneous magnetic field versus the ammonia beam in an inhomogeneous electric field.

electric field

Now, when reading Feynman, as he walks us through the relevant Lecture on all of this, you get the impression that it’s the I and II states only that have some kind of physical or geometric interpretation. That’s not the case. Of course, the diagram of the state selector above makes it very obvious that these new I and II base states make very much sense in regard to the orientation of the field, i.e. with regard to external space, rather than with respect to the position of our nitrogen atom vis-à-vis the hydrogens. But… Well… Look at the image below: the direction of the field (which we denote by ε because we’ve been using the E for energy) obviously matters when defining the old ‘up’ and ‘down’ states of our nitrogen atom too!

In other words, our previous | 1 〉 and | 2 〉 base states acquire a new meaning too: it obviously matters whether or not the electric dipole moment of the molecule is in the same or, conversely, in the opposite direction of the field. To be precise, the presence of the electromagnetic field suddenly gives the energy levels that we’d associate with these two states a very different physical interpretation.

ammonia

Indeed, from the illustration above, it’s easy to see that the electric dipole moment of this particular molecule in state 1 is in the opposite direction of the field and, therefore, temporarily ignoring the amplitude to flip over (so we do not think of A for just a brief little moment), the energy that we’d associate with state 1 would be equal to E0 + με. Likewise, the energy we’d associate with state 2 is equal to E0 − με. Indeed, you’ll remember that the (potential) energy of an electric dipole is equal to the vector dot product of the electric dipole moment μ and the field vector ε, but with a minus sign in front so as to get the sign for the energy right. So the energy is equal to −μ·ε = −|μ|·|ε|·cosθ, with θ the angle between both vectors. Now, the illustration above makes it clear that state 1 and 2 are defined for θ = π and θ = 0 respectively. [And, yes! Please do note that state 1 is the highest energy level, because it’s associated with the highest potential energy: the electric dipole moment μ of our ammonia molecule will – obviously! – want to align itself with the electric field ε! Just think of what it would imply to turn the molecule in the field!]

Therefore, using the same hunches as the ones we used in the free space example, Feynman suggests that, when some external electric field is involved, we should use the following Hamiltonian matrix:

H-matrix 2

So we’ll need to solve a similar set of differential equations with this Hamiltonian now. We’ll do that later and, as mentioned above, it will be more convenient to switch to another set of base states, or another ‘representation’ as it’s referred to. But… Well… Let’s not get too much ahead of ourselves: I’ll say something about that before we’ll start solving the thing, but let’s first look at that Hamiltonian once more.

When I say that Feynman uses the same clues here, then… Well… That’s true and not true. You should note that the diagonal elements in the Hamiltonian above are not the same: E0 + με ≠ E0 − με. So we’ve lost that symmetry of free space which, from a math point of view, was reflected in those identical H11 = H22 = E0 coefficients.

That should be obvious from what I write above: state 1 and state 2 are no longer those 1 and 2 states we described when looking at the molecule in free space. Indeed, the | 1 〉 and | 2 〉 states are still ‘up’ or ‘down’, but the illustration above also makes it clear we’re defining state 1 and state 2 not only with respect to the molecule’s spin around its own axis of symmetry but also vis-à-vis some direction in space. To be precise, we’re defining state 1 and state 2 here with respect to the direction of the electric field ε. Now that makes a really big difference in terms of interpreting what’s going on.

In fact, the ‘splitting’ of the energy levels because of that amplitude A is now something physical too, i.e. something that goes beyond just modeling the uncertainty involved. In fact, we’ll find it convenient to distinguish two new energy levels, which we’ll write as EI = E0 + A and EII = E0 − A respectively. They are, of course, related to those new base states | I 〉 and | II 〉 that we’ll want to use. So the E0 + A and E0 − A energy levels themselves will acquire some physical meaning, and especially the separation between them, i.e. the value of 2A. Indeed, EI = E0 + A and EII = E0 − A will effectively represent an ‘upper’ and a ‘lower’ energy level respectively.

But, again, I am getting ahead of myself. Let’s first, as part of working towards a solution for our equations, look at what happens if and when we’d switch to another representation indeed.

Switching to another representation

Let me remind you of what I wrote in my post on quantum math in this regard. The actual state of our ammonia molecule – or any quantum-mechanical system really – is always to be described in terms of a set of base states. For example, if we have two possible base states only, we’ll write:

| φ 〉 = | 1 〉 C1 + | 2 〉 C2

You’ll say: why? Our molecule is obviously always in either state 1 or state 2, isn’t it? Well… Yes and no. That’s the mystery of quantum mechanics: it is and it isn’t. As long as we don’t measure it, there is an amplitude for it to be in state 1 and an amplitude for it to be in state 2. So we can only make sense of its state by actually calculating 〈 1 | φ 〉 and 〈 2 | φ 〉 which, unsurprisingly, are equal to 〈 1 | φ 〉 = 〈 1 | 1 〉 C1 + 〈 1 | 2 〉 C2 = C1(t) and 〈 2 | φ 〉 = 〈 2 | 1 〉 C1 + 〈 2 | 2 〉 C2 = C2(t) respectively, and so these two functions give us the probabilities P1(t) and P2(t) respectively. So that’s Schrödinger’s cat really: the cat is dead or alive, but we don’t know until we open the box, and we only have a probability function – so we can say that it’s probably dead or probably alive, depending on the odds – as long as we do not open the box. It’s as simple as that.

Now, the ‘dead’ and ‘alive’ conditions are, obviously, the ‘base states’ in Schrödinger’s rather famous example, and we can write them as | DEAD 〉 and | ALIVE 〉. You’d agree it would be difficult to find another representation. For example, it doesn’t make much sense to say that we’ve rotated the two base states over 90 degrees and we now have two new states equal to (1/√2)·| DEAD 〉 – (1/√2)·| ALIVE 〉 and (1/√2)·| DEAD 〉 + (1/√2)·| ALIVE 〉 respectively. There’s no direction in space in regard to which we’re defining those two base states: dead is dead, and alive is alive.

The situation really resembles our ammonia molecule in free space: there’s no external reference against which to define the base states. However, as soon as some external field is involved, we do have a direction in space and, as mentioned above, our base states are now defined with respect to a particular orientation in space. That implies two things. The first is that we should no longer say that our molecule will always be in either state 1 or state 2. There’s no reason for it to be perfectly aligned with or against the field. Its orientation can be anything really, and so its state is likely to be some combination of those two pure base states | 1 〉 and | 2 〉.

The second thing is that we may choose another set of base states, and specify the very same state in terms of the new base states. So, assuming we choose some other set of base states | I 〉 and | II 〉, we can write the very same state | φ 〉 = | 1 〉 C1 + | 2 〉 C2 as:

| φ 〉 = | I 〉 CI + | II 〉 CII

It’s really like what you learned about vectors in high school: one can go from one set of base vectors to another by a transformation, such as, for example, a rotation, or a translation. It’s just that, just like in high school, we need some direction in regard to which we define our rotation or our translation.

For state vectors, I showed how a rotation of base states worked in one of my posts on two-state systems. To be specific, we had the following relation between the two representations:

matrix

The (1/√2) factor is there because of the normalization condition, and the two-by-two matrix equals the transformation matrix for a rotation of a state filtering apparatus about the y-axis, over an angle equal to (minus) 90 degrees, which we wrote as:

transformation

The y-axis? What y-axis? What state filtering apparatus? Just relax. Think about what you’ve learned already. The orientations are shown below: the S apparatus separates ‘up’ and ‘down’ states along the z-axis, while the T-apparatus does so along an axis that is tilted, about the y-axis, over an angle equal to α, or φ, as it’s written in the table above.

tilted

Of course, we don’t really introduce an apparatus at this or that angle. We just introduced an electromagnetic field, which re-defined our | 1 〉 and | 2 〉 base states and, therefore, through the rotational transformation matrix, also defines our | I 〉 and | II 〉 base states.

[…] You may have lost me by now, and so then you’ll want to skip to the next section. That’s fine. Just remember that the representations in terms of | I 〉 and | II 〉 base states or in terms of | 1 〉 and | 2 〉 base states are mathematically equivalent. Having said that, if you’re reading this post, and you want to understand it, truly (because you want to truly understand quantum mechanics), then you should try to stick with me here. :-) Indeed, there’s a zillion things you could think about right now, but you should stick to the math now. Using that transformation matrix, we can relate the CI and CII coefficients in the | φ 〉 = | I 〉 CI + | II 〉 CII expression to the C1 and C2 coefficients in the | φ 〉 = | 1 〉 C1 + | 2 〉 C2 expression. Indeed, we wrote:

  • CI = 〈 I | ψ 〉 = (1/√2)·(C1 − C2)
  • CII = 〈 II | ψ 〉 = (1/√2)·(C1 + C2)

That’s exactly the same as writing:

transformation
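In matrix form, that transformation is just one matrix product. A quick sketch, with arbitrary, hypothetical example values for the C1 and C2 amplitudes:

```python
import numpy as np

# The change of representation as a matrix product: the (1/√2)-scaled
# rotation matrix maps (C1, C2) onto (CI, CII).
U = (1 / np.sqrt(2)) * np.array([[1, -1],
                                 [1,  1]], dtype=complex)

C12 = np.array([0.3 + 0.4j, 0.1 - 0.2j])  # arbitrary example amplitudes
CI, CII = U @ C12
assert np.isclose(CI,  (C12[0] - C12[1]) / np.sqrt(2))
assert np.isclose(CII, (C12[0] + C12[1]) / np.sqrt(2))
# U is unitary, so total probability is preserved by the change of basis:
assert np.isclose(np.sum(np.abs(U @ C12)**2), np.sum(np.abs(C12)**2))
```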

OK. […] Wow! You just took a huge leap, because we can now compare the two sets of differential equations:

set of equations

They’re mathematically equivalent, but the mathematical behavior of the functions involved is very different. Indeed, unlike the C1(t) and C2(t) amplitudes, we find that the CI(t) and CII(t) amplitudes are stationary, i.e. the associated probabilities – which we find by taking the absolute square of the amplitudes, as usual – do not vary in time. To be precise, if you write it all out and simplify, you’ll find that the CI(t) and CII(t) amplitudes are equal to:

  • CI(t) = 〈 I | ψ 〉 = (1/√2)·(C1 − C2) = (1/√2)·e−(i/ħ)·(E0 + A)·t = (1/√2)·e−(i/ħ)·EI·t
  • CII(t) = 〈 II | ψ 〉 = (1/√2)·(C1 + C2) = (1/√2)·e−(i/ħ)·(E0 − A)·t = (1/√2)·e−(i/ħ)·EII·t
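Writing it all out is a one-minute exercise, and so is checking it numerically: plug the free-space C1(t) and C2(t) solutions into those combinations (ħ = 1; E0 and A are arbitrary, hypothetical values):

```python
import numpy as np

# Plug the free-space C1(t), C2(t) into CI = (C1 − C2)/√2 and
# CII = (C1 + C2)/√2 and check they are single-frequency, stationary
# amplitudes. ħ = 1; E0 and A are arbitrary, hypothetical values.
hbar, E0, A = 1.0, 5.0, 0.8
t = np.linspace(0, 10, 201)
C1 = np.exp(-1j*E0*t/hbar) * np.cos(A*t/hbar)
C2 = 1j * np.exp(-1j*E0*t/hbar) * np.sin(A*t/hbar)

CI = (C1 - C2) / np.sqrt(2)
CII = (C1 + C2) / np.sqrt(2)
assert np.allclose(CI,  np.exp(-1j*(E0 + A)*t/hbar) / np.sqrt(2))
assert np.allclose(CII, np.exp(-1j*(E0 - A)*t/hbar) / np.sqrt(2))
assert np.allclose(np.abs(CI)**2, 0.5)   # stationary: |CI|² = 1/2 for all t
```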

As the absolute square of the exponential is equal to one, the associated probabilities, i.e. |CI(t)|2 and |CII(t)|2, are, quite simply, equal to |1/√2|2 = 1/2. Now, it is very tempting to say that this means that our ammonia molecule has an equal chance to be in state I or state II. In fact, while I may have said something like that in my previous posts, that’s not how one should interpret this. The chance of our molecule being exactly in state I or state II, or in state 1 or state 2 is varying with time, with the probability being ‘dumped’ from one state to the other all of the time.

I mean… The electric dipole moment can point in any direction, really. So saying that our molecule has a 50/50 chance of being in state 1 or state 2 makes no sense. Likewise, saying that our molecule has a 50/50 chance of being in state I or state II makes no sense either. Indeed, the state of our molecule is specified by the | φ 〉 = | I 〉 CI + | II 〉 CII = | 1 〉 C1 + | 2 〉 C2 equations, and neither of these two expressions is a stationary state. They mix two frequencies, because they mix two energy levels.

Having said that, we’re talking quantum mechanics here and, therefore, an external inhomogeneous electric field will effectively split the ammonia molecules according to their state. The situation is really like what a Stern-Gerlach apparatus does to a beam of electrons: it will split the beam according to the electron’s spin, which is either ‘up’ or, else, ‘down’, as shown in the graph below:

diagram 2

The graph for our ammonia molecule, shown below, is very similar. The vertical axis measures the same: energy. And the horizontal axis measures με, which increases with the strength of the electric field ε. So we see a similar ‘splitting’ of the energy of the molecule in an external electric field.

graph new

How should we explain this? It is very tempting to think that the presence of an external force field causes the electrons, or the ammonia molecule, to ‘snap into’ one of the two possible states, which are referred to as state I and state II respectively in the illustration of the ammonia state selector below. But… Well… Here we’re entering the murky waters of actually interpreting quantum mechanics, for which (a) we have no time, and (b) we are not qualified. So you should just believe, or take for granted, what’s being shown here: an inhomogeneous electric field will split our ammonia beam according to the molecules’ state, which we define as I and II respectively, and which are associated with the energies E0 + A and E0 − A respectively.

electric field

As mentioned above, you should note that these two states are stationary. The Hamiltonian equations which, as they always do, describe the dynamics of this system, imply that the amplitude to go from state I to state II, or vice versa, is zero. To make sure you ‘get’ that, I reproduce the associated Hamiltonian matrix once again:

H-matrix I and II

Of course, that will change when we start our analysis of what’s happening in the maser. Indeed, we will have some non-zero HI,II and HII,I amplitudes in the resonant cavity of our ammonia maser, in which we’ll have an oscillating electric field and, as a result, induced transitions from state I to II and vice versa. However, that’s for later. While I’ll quickly insert the full picture diagram below, you should, for the moment, just think about those two stationary states and those two zeroes. :-)

maser diagram

Capito? If not… Well… Start reading this post again, I’d say. :-)

Intermezzo: on approximations

At this point, I need to say a few things about all of the approximations involved, because it can be quite confusing indeed. So let’s take a closer look at those energy levels and the related Hamiltonian coefficients. In fact, in his Lectures, Feynman shows us that we can always have a general solution for the Hamiltonian equations describing a two-state system whenever we have constant Hamiltonian coefficients. That general solution – which, mind you, is derived assuming Hamiltonian coefficients that do not depend on time – can always be written in terms of two stationary base states, i.e. states with a definite energy and, hence, a constant probability. The equations, and the two definite energy levels, are:

Hamiltonian

solution3

That yields the following values for the energy levels for the stationary states:

solution x

Now, that’s very different from the EI = E0 + A and EII = E0 − A energy levels for those stationary states we had defined in the previous section: those stationary states had no square root, and no μ2ε2, in their energy. In fact, that sort of answers the question: if there’s no external field, then that μ2ε2 factor is zero, and the square root in the expression becomes ±√(A2) = ±A. So then we’re back to our EI = E0 + A and EII = E0 − A formulas. The whole point, however, is that we will actually have an electric field in that cavity. Moreover, it’s going to be a field that varies in time, which we’ll write:
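As an aside, the small-field behavior of those square-root energy levels is easy to check numerically. A minimal sketch, in arbitrary, hypothetical units with E0 = 0 and A = 1:

```python
import numpy as np

# The exact stationary energies E = E0 ± sqrt(A² + μ²ε²) reduce to
# E0 ± A for zero field, and to E0 ± (A + μ²ε²/2A) for small με
# (the quadratic start of the curves in the energy-splitting graph).
E0, A = 0.0, 1.0                  # arbitrary, hypothetical units
mu_eps = np.linspace(0, 0.2, 5)   # με values, small compared to A
E_I = E0 + np.sqrt(A**2 + mu_eps**2)
E_II = E0 - np.sqrt(A**2 + mu_eps**2)

assert np.isclose(E_I[0], E0 + A) and np.isclose(E_II[0], E0 - A)
assert np.allclose(E_I, E0 + A + mu_eps**2 / (2 * A), atol=1e-3)
```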

field

Now, part of the confusion in Feynman’s approach is that he constantly switches between representing the system in terms of the I and II base states and the 1 and 2 base states respectively. For a good understanding, we should compare with our original representation of the dynamics in free space, for which the Hamiltonian was the following one:

H-matrix

That matrix can easily be related to the new one we’re going to have to solve, which is equal to:

H-matrix 2

The interpretation is easy if we look at that illustration again:

ammonia

If the direction of the electric dipole moment is opposite to the direction of ε, then the associated energy is equal to −μ·ε = −|μ|·|ε|·cosθ = −μ·ε·cos(π) = +με. Conversely, for state 2, we find −μ·ε·cos(0) = −με for the energy that’s associated with the dipole moment. You can and should think about the physics involved here, because they make sense! Thinking of amplitudes, you should note that the +με and −με terms effectively change the H11 and H22 coefficients, so they change the amplitude to stay in state 1 or state 2 respectively. That, of course, will have an impact on the associated probabilities, and so that’s why we’re talking of induced transitions now.

Having said that, the Hamiltonian matrix above keeps the −A for H12 and H21, so the matrix captures spontaneous transitions too!

Still… You may wonder why Feynman doesn’t use those EI and EII formulas with the square root, because that would give us some exact solution, wouldn’t it? The answer to that question is: maybe it would, but would you know how to solve those equations? We’ll have a varying field, remember? So our Hamiltonian H11 and H22 coefficients will no longer be constant, but time-dependent. As you’re going to see, it takes Feynman three pages to solve the whole thing using the +με and −με approximation. So just imagine how complicated it would be using that square root expression! [By the way, do have a look at those asymptotic curves in that illustration showing the splitting of energy levels above, so you see what that approximation looks like.]

So that’s the real answer: we need to simplify somehow, so as to get any solutions at all!

Of course, it’s all quite confusing because, after Feynman first notes that, for strong fields, the A2 in that square root is small as compared to μ2ε2, thereby justifying the use of the simplified E= E0+ με = H11 and EII = E0− με = H22 coefficients, he continues and bluntly uses the very same square root expression to explain how that state selector works, saying that the electric field in the state selector will be rather weak and, hence, that με will be much smaller than A, so one can use the following approximation for the square root in the expressions above:

square root sum of squares

The energy expressions then reduce to:

energy 2

And then we can calculate the force on the molecules as:

force

So the electric field in the state selector is weak, but the electric field in the cavity is supposed to be strong, and so… Well… That’s it, really. The bottom line is that we’ve got a beam of ammonia molecules that are all in state I, and it’s what happens with that beam that is described by our new set of differential equations:

new

Solving the equations

As all molecules in our ammonia beam are described in terms of the | I 〉 and | II 〉 base states – as evidenced by the fact that we say all molecules that enter the cavity are in state I – we need to switch to that representation. We do that by using that transformation above, so we write:

  • CI = 〈 I | ψ 〉 = (1/√2)·(C1 − C2)
  • CII = 〈 II | ψ 〉 = (1/√2)·(C1 + C2)
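As a quick sanity check: this transformation is unitary, so it preserves total probability. A minimal numerical check, with arbitrary, purely illustrative amplitudes (not values from the text):

```python
import math

# Purely illustrative amplitudes for base states 1 and 2 (not from the text).
C1 = complex(0.6, 0.3)
C2 = complex(-0.2, 0.7)

# The change of basis above: CI = (C1 - C2)/sqrt(2), CII = (C1 + C2)/sqrt(2).
CI = (C1 - C2) / math.sqrt(2)
CII = (C1 + C2) / math.sqrt(2)

# The transformation is unitary, so total probability is preserved.
p_12 = abs(C1)**2 + abs(C2)**2
p_I_II = abs(CI)**2 + abs(CII)**2
print(abs(p_12 - p_I_II) < 1e-12)  # True
```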

Keeping these ‘definitions’ of CI and CII in mind, you should then add the two differential equations, divide the result by the square root of 2, and you should get the following new equation:

Eq1

Please! Do it and verify the result! You want to learn something here, no?  :-)

Likewise, subtracting the two differential equations, we get:

Eq2

We can re-write this as:

set new

Now, the problem is that the Hamiltonian constants here are not constant. To be precise, the electric field ε varies in time. We wrote:

field

So HI,II  and HII,I, which are equal to με, are not constant: we’ve got Hamiltonian coefficients that are a function of time themselves. […] So… Well… We just need to get on with it and try to finally solve this thing. Let me just copy Feynman as he grinds through this:

F1

This is only the first step in the process. Feynman just takes two trial functions, which are really similar to the very general a·e−(i/ħ)·H11·t function we presented when only one equation was involved, or – if you prefer a set of two equations – those CI(t) = a·e−(i/ħ)·EI·t and CII(t) = b·e−(i/ħ)·EII·t equations above. The difference is that the coefficients in front, i.e. γI and γII, are not some (complex) constant, but functions of time themselves. The next step in the derivation is as follows:

F2

One needs to do a bit of gymnastics here as well to follow what’s going on, but please do check and you’ll see it works. Feynman derives another set of differential equations here, and they specify these γI = γI(t) and γII = γII(t) functions. These equations are written in terms of the frequency of the field, i.e. ω, and the resonant frequency ω0, which we mentioned above when calculating that 23.79 GHz frequency from the 2A = h·f0 equation. So ω0 is the same molecular resonance frequency but expressed as an angular frequency, so ω0 = 2π·f0 = 2A/ħ. He then proceeds to simplify, using assumptions one should check. He then continues:

F3

That gives us what we presented in the previous post:

F4

So… Well… What to say? I explained those probability functions in my previous post, indeed. We’ve got two probabilities here:

  • PI = cos2[(με0/ħ)·t]
  • PII = sin2[(με0/ħ)·t]

So that’s just like the P1 = cos2[(A/ħ)·t] and P2 = sin2[(A/ħ)·t] probabilities we found for spontaneous transitions. But so here we are talking induced transitions.

As you can see, the frequency and, hence, the period, depend on the strength, or magnitude, of the electric field, i.e. the ε0 constant in the ε = 2ε0·cos(ω·t) expression. The natural unit for measuring time would be the period once again, which we can easily calculate as (με0/ħ)·T = π ⇔ T = π·ħ/με0.

Now, we had that T = (π·ħ)/(2A) expression above, which allowed us to calculate the time for a spontaneous state transition (i.e. half a probability cycle), which we found was like 20 picoseconds, i.e. 20×10−12 seconds. Now, the T = (π·ħ)/(2με0) expression is very similar: it allows us to calculate the expected, average, or mean time for an induced transition. In fact, if we write Tinduced = (π·ħ)/(2με0) and Tspontaneous = (π·ħ)/(2A), then we can take the ratio to find:

Tinduced/Tspontaneous = [(π·ħ)/(2με0)]/[(π·ħ)/(2A)] = A/με0

This A/με0 ratio is greater than one, so Tinduced/Tspontaneous is greater than one, which, in turn, means that the presence of our electric field – which, let me remind you, dances to the beat of the resonant frequency – causes a slower transition than we would have had if the oscillating electric field were not present.
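The ratio is easy to verify numerically. The με0 = A/5 value below is just an assumed weak coupling for illustration, not a number from the text:

```python
import math

hbar = 6.6e-16        # reduced Planck constant in eV·s, rounded as in the text
A = 0.5e-4            # eV, so that 2A is about 1e-4 eV
mu_eps0 = A / 5       # assumed weak coupling: mu*eps0 < A

T_spontaneous = math.pi * hbar / (2 * A)
T_induced = math.pi * hbar / (2 * mu_eps0)

ratio = T_induced / T_spontaneous   # should equal A/(mu*eps0)
print(ratio)                        # about 5.0, i.e. greater than one
```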

But – Hey! – that’s the wrong comparison! Remember all molecules enter in a stationary state, as they’ve been selected so as to ensure they’re in state I. So there is no such thing as a spontaneous transition frequency here! They’re all polarized, so to speak, and they would remain that way if there was no field in the cavity. So if there was no oscillating electric field, they would never transition. Nothing would happen! Well… In terms of our particular set of base states, of course! Why? Well… Look at the Hamiltonian coefficients HI,II = HII,I = με: these coefficients are zero if ε is zero. So… Well… That says it all.

So that‘s what it’s all about: induced emission and, as I explained in my previous post, because all molecules enter in state I, i.e. the upper energy state, literally, they all ‘dump’ a net amount of energy equal to 2A into the cavity at the occasion of their first transition. The molecules then keep dancing, of course, and so they absorb and emit the same amount as they go through the cavity, but… Well… We’ve got a net contribution here, which is not only enough to maintain the cavity oscillations, but actually also provides a small excess of power that can be drawn from the cavity as microwave radiation of the same frequency.

As Feynman notes, an exact description of what actually happens requires an understanding of the quantum mechanics of the field in the cavity, i.e. quantum field theory, which I haven’t studied yet. But… Well… That’s for later, I guess. :-)

Post scriptum: The sheer length of this post shows we’re not doing something that’s easy here. Frankly, I feel the whole analysis is still quite obscure, in the sense that – despite looking at this thing again and again – it’s hard to sort of interpret what’s going on, in a physical sense that is. But perhaps one shouldn’t try that. I’ve quoted Feynman’s view on how easy or how difficult it is to ‘understand’ quantum mechanics a couple of times already, so let me do it once more:

“Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to, and it appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct, human experience and human intuition applies to large objects.”

So… Well… I’ll grind through the remaining Lectures now – I am halfway through Volume III now – and then re-visit all of this. Despite Feynman’s warning, I want to understand it the way I like to, even if I don’t quite know what way that is right now. :-)

Addendum: As for those cycles and periods, I noted a couple of times already that the Planck-Einstein equation E = h·f can usefully be re-written as E/f = h, as it gives a physical interpretation to the value of the Planck constant. In fact, I said h is the energy that’s associated with one cycle, regardless of the frequency of the radiation involved. Indeed, the energy of a photon divided by the number of cycles per second should give us the energy per cycle, no?

Well… Yes and no. Planck’s constant h and the frequency are both expressed referencing the time unit. However, if we say that a sodium atom emits one photon only as its electron transitions from a higher energy level to a lower one, and if we say that involves a decay time of the order of 3.2×10−8 seconds, then what we’re saying really is that a sodium light photon will ‘pack’ like 16 million cycles, which is what we get when we multiply the number of cycles per second (i.e. the mentioned frequency of 500×1012 Hz) by the decay time (i.e. 3.2×10−8 seconds): (500×1012 Hz)·(3.2×10−8 s) = 16×106 cycles, indeed. So the energy per cycle is 2.068 eV (i.e. the photon energy) divided by 16×106, so that’s 0.129×10−6 eV. Unsurprisingly, that’s what we get when we divide h by 3.2×10−8 s: (4.13567×10−15 eV·s)/(3.2×10−8 s) = 1.29×10−7 eV. We’re just putting some values into the E/(f·T) = h/T equation here.
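Here’s the same arithmetic spelled out, with the constants from the text:

```python
h = 4.13567e-15     # Planck constant in eV·s
f = 500e12          # frequency of sodium light in Hz
T = 3.2e-8          # decay time in s

E_photon = h * f                 # photon energy, about 2.07 eV
cycles = f * T                   # number of cycles 'packed' into the photon
E_per_cycle = E_photon / cycles  # energy per cycle, equal to h/T

print(cycles)        # about 1.6e7 cycles
print(E_per_cycle)   # about 1.29e-7 eV
```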

The logic for that 2A = h·f0 is the same. The frequency of the radiation that’s being absorbed or emitted is 23.79 GHz, so the photon energy is (23.79×109 Hz)·(4.13567×10−15 eV·s) ≈ 1×10−4 eV. Now, we calculated the transition period T as T = π·ħ/A ≈ (π·6.6×10−16 eV·s)/(0.5×10−4 eV) ≈ 41.5×10−12 seconds. Now, an oscillation with a frequency of 23.79 gigahertz that only lasts 41.5×10−12 seconds is an oscillation of one cycle only. The consequence is that, when we continue this style of reasoning, we’d have a photon that packs all of its energy into one cycle!

Let’s think about what this implies in terms of the density in space. The wavelength of our microwave radiation is 1.25×10−2 m, so we’ve got a ‘density’ of 1×10−4 eV/1.25×10−2 m = 0.8×10−2 eV/m = 0.008 eV/m. The wavelength of our sodium light is 0.6×10−6 m, so we get a ‘density’ of 1.29×10−7 eV/0.6×10−6 m = 2.15×10−1 eV/m = 0.215 eV/m. So the energy ‘density’ of our sodium light is 26.875 times that of our microwave radiation. :-)
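The same back-of-the-envelope comparison in code, using the numbers from the text:

```python
E_microwave = 1e-4        # eV per cycle (ammonia line)
lam_microwave = 1.25e-2   # wavelength in m
E_sodium = 1.29e-7        # eV per cycle (sodium light)
lam_sodium = 0.6e-6       # wavelength in m

d_microwave = E_microwave / lam_microwave   # about 0.008 eV/m
d_sodium = E_sodium / lam_sodium            # about 0.215 eV/m
print(d_sodium / d_microwave)               # about 26.9
```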

Frankly, I am not quite sure if calculations like this make much sense. In fact, when talking about energy densities, I should review my posts on the Poynting vector. However, they may help you think things through. :-)

The ammonia maser: transitions in a time-dependent field

Feynman’s analysis of a maser – microwave amplification, by stimulated emission of radiation – combines an awful lot of stuff. Resonance, electromagnetic field theory, and quantum mechanics: it’s all there! Therefore, it’s complicated and, hence, actually very tempting to just skip it when going through his third volume of Lectures. But let’s not do that. What I want to do in this post is not repeat his analysis, but reflect on it and, perhaps, offer some guidance as to how to interpret some of the math.

The model: a two-state system

The model is a two-state system, which Feynman illustrates as follows:

ammonia

Don’t shy away now. It’s not so difficult. Try to understand. The nitrogen atom (N) in the ammonia molecule (NH3) can tunnel through the plane of the three hydrogen (H) atoms, so it can be ‘up’ or ‘down’. This ‘up’ or ‘down’ state has nothing to do with the classical or quantum-mechanical notion of spin, which is related to the magnetic moment. Nothing, i.e. nada, niente, rien, nichts! Indeed, it’s much simpler than that. :-) The nitrogen atom could be either beneath or, else, above the plane of the hydrogens, as shown above, with ‘beneath’ and ‘above’ being defined in regard to the molecule’s direction of rotation around its axis of symmetry. That’s all. That’s why we prefer simple numbers to denote those two states, instead of the confusing ‘up’ or ‘down’, or ‘↑’ or ‘↓’ symbols. We’ll just call the two states state ‘1’ and state ‘2’ respectively.

Having said that (i.e. having said that you shouldn’t think of spin, which is related to the angular momentum of some (net) electric charge), the NH3 molecule does have some electric dipole moment, which is denoted by μ in the illustration and which, depending on the state of the molecule (i.e. the nitrogen atom being above or beneath the plane of the hydrogens), changes the total energy of the molecule by an amount that is equal to +με or −με, with ε some external electric field, as illustrated by the ε arrow on the left-hand side of the diagram. [You may think of that arrow as an electric field vector.] This electric field may vary in time and/or in space, but we’ll not worry about that now. In fact, we should first analyze what happens in the absence of an external field, which is what we’ll do now.

The NH3 molecule will spontaneously transition from an ‘up’ to a ‘down’ state, or from ‘1’ to ‘2’—and vice versa, of course! This spontaneous transition is also modeled as an uncertainty in its energy. Indeed, we say that, even in the absence of an external electric field, there will be two energy levels, rather than one only: E0 + A and E0 − A.

We wrote the amplitude to find the molecule in either one of these two states as:

  • C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0−A)·t + (1/2)·e−(i/ħ)·(E0+A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
  • C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0−A)·t − (1/2)·e−(i/ħ)·(E0+A)·t = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

[Remember: the sum of complex conjugates, i.e. eiθ + e−iθ, reduces to 2·cosθ, while eiθ − e−iθ reduces to 2·i·sinθ.]
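These identities, and hence the C1(t) formula above, are easy to check numerically. The values of E0, A and t below are arbitrary illustrations, with ħ set to 1:

```python
import cmath, math

# Check (1/2)*e^(-i(E0-A)t) + (1/2)*e^(-i(E0+A)t) = e^(-i*E0*t)*cos(A*t),
# with hbar set to 1; E0, A and t are arbitrary illustrative values.
E0, A, t = 2.0, 0.3, 1.7

lhs = 0.5 * cmath.exp(-1j * (E0 - A) * t) + 0.5 * cmath.exp(-1j * (E0 + A) * t)
rhs = cmath.exp(-1j * E0 * t) * math.cos(A * t)
print(abs(lhs - rhs) < 1e-12)  # True
```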

That gave us the following probabilities:

  • P1 = |C1|2 = cos2[(A/ħ)·t]
  • P2 = |C2|2 = sin2[(A/ħ)·t]

[Remember: the absolute square of i is |i|2 = 12 = 1, so the i in the C2(t) formula disappears.]

The graph below shows how these probabilities evolve over time. Note that, because of the square, the period of cos2[(A/ħ)·t] and sin2[(A/ħ)·t] is equal to π, instead of the usual 2π.

graph

The interpretation of this is easy enough: if our molecule can be in two states only, and it starts off in one, then the probability that it will remain in that state will gradually decline, while the probability that it flips into the other state will gradually increase. As Feynman puts it: the first state ‘dumps’ probability into the second state as time goes by, and vice versa, so the probability sloshes back and forth between the two states.

The graph above measures time in units of ħ/A but, frankly, the ‘natural’ unit of time would usually be the period, which you can easily calculate as (A/ħ)·T = π ⇔ T = π·ħ/A. In any case, you can go from one unit to another by dividing or multiplying by π. Of course, the period is the reciprocal of the frequency and so we can calculate the molecular transition frequency f0 as f0 = A/(π·ħ) = 2A/h. [Remember: h = 2π·ħ, so A/(π·ħ) = 2A/h.]

Of course, by now we’re used to using angular frequencies, and so we’d rather write: ω0 = 2π·f0 = 2π·A/(π·ħ) = 2A/ħ. And because it’s always good to have some idea of the actual numbers – as we’re supposed to model something real, after all – I’ll give them to you straight away. The separation between the two energy levels E0 + A and E0 − A has been measured as being equal to 2A = h·f0 ≈ 10−4 eV, more or less. :-) That’s tiny. To avoid having to convert this to joule, i.e. the SI unit for energy, we can calculate the corresponding frequency using h expressed in eV·s, rather than in J·s. We get: f0 = 2A/h = (1×10−4 eV)/(4×10−15 eV·s) = 25 GHz. Now, we’ve rounded the numbers here: the exact frequency is 23.79 GHz, which corresponds to microwave radiation with a wavelength of λ = c/f0 = 1.26 cm.
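Plugging the numbers in (the result below comes out nearer 24 GHz than the text’s 25 GHz, simply because I use the un-rounded value of h):

```python
h = 4.13567e-15     # Planck constant in eV·s (un-rounded)
two_A = 1e-4        # eV, the measured level splitting
c = 3e8             # speed of light in m/s

f0 = two_A / h      # about 2.4e10 Hz, i.e. roughly 24 GHz
lam = c / f0        # about 1.24e-2 m, i.e. roughly 1.2 cm
print(f0 / 1e9)     # frequency in GHz
print(lam * 100)    # wavelength in cm
```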

How does one measure that? It’s simple: ammonia absorbs light of this frequency. The frequency is also referred to as a resonance frequency, as light of this frequency, i.e. microwave radiation, will also induce transitions from one state to another. In fact, that’s what the stimulated emission of radiation principle is all about. But we’re getting ahead of ourselves here. It’s time to look at what happens if we do apply some external electric field, which is what we’ll do now.

Polarization and induced transitions

As mentioned above, an electric field will change the total energy of the molecule by an amount that is equal to +με or −με. Of course, the plus or the minus in front of με depends both on the direction of the electric field ε, as well as on the direction of μ. However, it’s not like our molecule might be in four possible states. No. We assume the direction of the field is given, and then we have two states only, with the following energy levels:

energy 1

Don’t rack your brain over how you get that square root thing. You get it when applying the general solution of a pair of Hamiltonian equations to this particular case. For full details on how to get this general solution, I’ll refer you to Feynman. Of course, we’re talking base states here, which do not always have a physical meaning. However, in this case, they do: a jet of ammonia gas will split in an inhomogeneous electric field, and it will split according to these two states, just like a beam of particles with different spin in a Stern-Gerlach apparatus. A Stern-Gerlach apparatus splits particle beams because of an inhomogeneous magnetic field, however. So here we’re talking an electric field.

It’s important to note that the field should not be homogeneous, for the very same reason as to why the magnetic field in the Stern-Gerlach apparatus should not be homogeneous: it’s because the force on the molecules will be proportional to the derivative of the energy. So if the energy doesn’t vary—so if there is no strong field gradient—then there will be no force. [If you want to get more detail, check the section on the Stern-Gerlach apparatus in my post on spin and angular momentum.] To be precise, if με is much smaller than A, then one can use the following approximation for the square root in the expressions above:

square root sum of squares

The energy expressions then reduce to:

energy 2

And then we can calculate the force on the molecules as:

force
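To see how good the √(A2 + μ2ε2) ≈ A + μ2ε2/2A approximation actually is, here’s a quick numerical check, with με set to A/2, the upper limit the text allows for this approximation:

```python
import math

A = 0.5e-4        # eV
x = A / 2         # mu*eps, taken at the upper limit the text allows

exact = math.sqrt(A**2 + x**2)
approx = A + x**2 / (2 * A)
rel_error = abs(exact - approx) / exact
print(rel_error)  # about 0.006, i.e. well below one percent
```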

The bottom line is that our ammonia jet will split into two separate beams: all molecules in state I will be deflected toward the region of lower ε2, and all molecules in state II will be deflected toward the region of larger ε2. [We talk about ε2 rather than ε because of the ε2 gradient in that force formula. However, you could, of course, simplify and write the gradient of ε2 as ∇ε2 = 2ε·∇ε.] So, to make a long story short, we should now understand the left-hand side of the schematic maser diagram below. It’s easy to understand that the ammonia molecules that go into the maser cavity are polarized.

maser diagram

To understand the maser, we need to understand how the maser cavity works. It’s a so-called resonant cavity, and we’ve got an electric field in it as well. The field direction happens to be south as we’re looking at it right now, but in an actual maser we’ll have an electric field that varies sinusoidally. Hence, while the direction of the field is always perpendicular to the direction of motion of our ammonia molecules, it switches from south to north and vice versa all of the time. We write ε as:

ε = 2ε0·cos(ω·t) = ε0(ei·ω·t + e−i·ω·t)

Now, you’ve guessed it, of course. If we ensure that ω = ω0 = 2A/ħ, then we’ve got a maser. In fact, the result is a similar graph:

graph

Let’s first explain this graph. We’ve got two probabilities here:

  • PI = cos2[(με0/ħ)·t]
  • PII = sin2[(με0/ħ)·t]

So that’s just like the P1 = cos2[(A/ħ)·t] and P2 = sin2[(A/ħ)·t] probabilities we found for spontaneous transitions. In fact, the formulas for the related amplitudes are also similar to those for C1(t) and C2(t):

  • CI(t) = 〈 I | ψ 〉 = e−(i/ħ)·EI·t·cos[(με0/ħ)·t], which is equal to:

CI(t) = e−(i/ħ)·(E0+A)·t·cos[(με0/ħ)·t] = e−(i/ħ)·(E0+A)·t·(1/2)·[ei·(με0/ħ)·t + e−i·(με0/ħ)·t] = (1/2)·e−(i/ħ)·(E0+A−με0)·t + (1/2)·e−(i/ħ)·(E0+A+με0)·t

  • CII(t) = 〈 II | ψ 〉 = i·e−(i/ħ)·EII·t·sin[(με0/ħ)·t], which is equal to:

CII(t) = e−(i/ħ)·(E0−A)·t·i·sin[(με0/ħ)·t] = e−(i/ħ)·(E0−A)·t·(1/2)·[ei·(με0/ħ)·t − e−i·(με0/ħ)·t] = (1/2)·e−(i/ħ)·(E0−A−με0)·t − (1/2)·e−(i/ħ)·(E0−A+με0)·t
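That the two-term expansion of CI(t) really does reproduce PI = cos2[(με0/ħ)·t] can be checked numerically. The values below are arbitrary illustrations, with ħ set to 1:

```python
import cmath, math

# hbar set to 1; E0, A, mu*eps0 and t are arbitrary illustrative values.
E0, A, mu_eps0, t = 2.0, 0.3, 0.05, 3.1

CI = (0.5 * cmath.exp(-1j * (E0 + A - mu_eps0) * t)
      + 0.5 * cmath.exp(-1j * (E0 + A + mu_eps0) * t))
PI = abs(CI)**2
print(abs(PI - math.cos(mu_eps0 * t)**2) < 1e-12)  # True
```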

But so here we are talking induced transitions. As you can see, the frequency and, hence, the period, depend on the strength, or magnitude, of the electric field, i.e. the ε0 constant in the ε = 2ε0·cos(ω·t) expression. The natural unit for measuring time would be the period once again, which we can easily calculate as (με0/ħ)·T = π ⇔ T = π·ħ/με0. However, Feynman adds a 1/2 factor so as to ensure it corresponds to the time a molecule needs to go through the cavity. Well… That’s what he says, at least. I’ll show he’s actually wrong, but the idea is OK.

First have a look at the diagram of our maser once again. You can see that all molecules come in in state I, but are supposed to leave in state II. Now, Feynman says that’s because the cavity is just long enough so as to more or less ensure that all ammonia molecules switch from state I to state II. Hmm… Let’s have a close look at that. What the functions and the graph are telling us is that, at the point t = 1 (with t being measured in those π·ħ/(2με0) units), the probability of being in state I has all been ‘dumped’ into the probability of being in state II!

So… Well… Our molecules had better be in that state then! :-) Of course, the idea is that, as they transition from state I to state II, they lose energy. To be precise, according to our expressions for EI and EII above, the difference between the energy levels that are associated with these two states is equal to 2A + μ2ε02/A.

Now, a resonant cavity is a cavity designed to keep electromagnetic waves like the oscillating field that we’re talking about here going with minimal energy loss. Indeed, a microwave cavity – which is what we’re having here – is similar to a resonant circuit, except that it’s much better than any equivalent electric circuit you’d try to build, using inductors and capacitors. ‘Much better’ means it hardly needs energy to keep it going. We express that using the so-called Q-factor (believe it or not: the ‘Q’ stands for quality). The Q factor of a resonant cavity is of the order of 106, as compared to 102 for electric circuits that are designed for the same frequencies. But let’s not get into the technicalities here. Let me quote Feynman as he summarizes the operation of the maser:

“The molecule enters the cavity, [and then] the cavity field—oscillating at exactly the right frequency—induces transitions from the upper to the lower state, and the energy released is fed into the oscillating field. In an operating maser the molecules deliver enough energy to maintain the cavity oscillations—not only providing enough power to make up for the cavity losses but even providing small amounts of excess power that can be drawn from the cavity. Thus, the molecular energy is converted into the energy of an external electromagnetic field.”

As Feynman notes, it is not so simple to explain how exactly the energy of the molecules is being fed into the oscillations of the cavity: it would require to also deal with the quantum mechanics of the field in the cavity, in addition to the quantum mechanics of our molecule. So we won’t get into that nitty-gritty—not here at least. So… Well… That’s it, really.

Of course, you’ll wonder about the orders of magnitude, or minitude, involved. And… Well… That’s where this analysis is somewhat tricky. Let me first say something more about those resonant cavities because, while that’s quite straightforward, you may wonder if they could actually build something like that in the 1950s. :-) The condition is that the cavity length must be an integer multiple of the half-wavelength at resonance. We’ve talked about this before. [See, for example, my post on wave modes.] More formally, the condition for resonance in a resonator is that the round-trip distance, 2·d, is equal to an integer number of wavelengths λ, so we write: 2·d = N·λ, with N = 1, 2, 3, etc. Then, if the velocity of our wave is equal to c, the resonant frequencies will be equal to f = (N·c)/(2·d).

Does that make sense? Of course. We’re talking the speed of light, but we’re also talking microwaves. To be specific, we’re talking a frequency of 23.79 GHz and, more importantly, a wavelength that’s equal to λ = c/f0 = 1.26 cm, so for the first normal mode (N = 1), we get 2·d = λ ⇔ d = λ/2 = 6.3 mm. In short, we’re surely not talking nanotechnology here! In other words, the technological difficulties involved in building the apparatus were not insurmountable. :-)
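The resonance condition for the first normal mode is a one-liner to check:

```python
c = 3e8           # speed of light in m/s
f0 = 23.79e9      # resonant frequency in Hz
lam = c / f0      # wavelength, about 1.26e-2 m

d = lam / 2       # first normal mode: 2d = lambda
print(d * 1000)   # about 6.3 mm
```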

But what about the time that’s needed to travel through it? What about that length? Now, that depends on the με0 quantity if we are to believe Feynman here. Now, we actually don’t need to know the actual values for μ or ε0: we said that the value of the με0 product is (much) smaller than the value of A. Indeed, the fields that are used in those masers aren’t all that strong, and the electric dipole moment μ is pretty tiny. So let’s say με0 = A/2, which is the upper limit for our approximation of that square root above, so 2με0 = A = 0.5×10−4 eV. [The approximation for that square root expression is only used when y ≤ x/2.]

Let’s now think about the time. It was measured in units equal to T = π·ħ/(2με0). So our T here is not the T we defined above, which was the period. Here it’s the period divided by two. First the dimensions: ħ is expressed in eV·s, and με0 is an energy, so we can express it in eV too: 1 eV ≈ 1.6×10−19 J, i.e. 160 zeptojoules. :-) π is just a real number, so our T = π·ħ/(2με0) gives us seconds alright. So we get:

T ≈ (3.14×6.6×10−16 eV·s)/(0.5×10−4 eV) ≈ 40×10−12 seconds

[…] Hmm… That doesn’t look good. Even when traveling at the speed of light – which our ammonia molecule surely doesn’t do! – it would only travel over a distance equal to (3×108 m/s)·(40×10−12 s) = 120×10−4 m = 1.2 cm. The speed of our ammonia molecule is likely to be only a tiny fraction of the speed of light, so we’d have an extremely short cavity then. :-/ The time mentioned is also not in line with what Feynman mentions about the ammonia molecule being in the cavity for a ‘reasonable length of time, say for one millisecond.‘ One millisecond is also more in line with the actual dimensions of the cavity which, as you can see from the historical illustration below, is quite long indeed.

091_004lg
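We can make the mismatch concrete. The ~600 m/s thermal speed for ammonia molecules below is my own rough assumption, not a number from the text or from Feynman:

```python
import math

hbar = 6.6e-16      # reduced Planck constant in eV·s, rounded as in the text
mu_eps0 = 0.25e-4   # eV, i.e. A/2 with A = 0.5e-4 eV

T = math.pi * hbar / (2 * mu_eps0)  # about 4e-11 s
v = 600.0                           # m/s: assumed thermal speed, not from the text

print(v * T)  # about 2.5e-8 m: nowhere near the length of a real cavity
```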

So what’s going on here? Feynman’s statement that T is “the time that it takes the molecule to go through the cavity” cannot be right. Let’s do some good thinking here. For example, let’s calculate the time that’s needed for a spontaneous state transition and compare with the time we calculated above. From the graph and the formulas above, we know we can calculate that from the (A/ħ)·T = π/2 equation. [Note the added 1/2 factor, because we’re not going through a full probability cycle: we’re going through a half-cycle only.] So that’s equivalent to T = (π·ħ)/(2A). We get:

T ≈ (3.14×6.6×10−16 eV·s)/(1×10−4 eV) ≈ 20×10−12 seconds

The T = (π·ħ)/(2με0) and T = (π·ħ)/(2A) expressions make it obvious that the expected, average, or mean time for a spontaneous versus an induced transition depends on A and με0 respectively. Let’s be systematic now, so we’ll distinguish Tinduced = (π·ħ)/(2με0) from Tspontaneous = (π·ħ)/(2A) respectively. Taking the ratio, we find:

Tinduced/Tspontaneous = [(π·ħ)/(2με0)]/[(π·ħ)/(2A)] = A/με0

However, we know the A/με0 ratio is greater than one, so Tinduced/Tspontaneous is greater than one, which, in turn, means that the presence of our electric field – which, let me remind you, dances to the beat of the resonant frequency – causes a slower transition than we would have had if the oscillating electric field were not present. We may write the equation above as:

Tinduced = [A/με0]·Tspontaneous = [A/με0]·(π·ħ)/(2A) = h/(4με0)

However, that doesn’t tell us anything new. It just says that the transition period (T) is inversely proportional to the strength of the field (as measured by ε0). So a weak field will make for a longer transition period (T), with T → ∞ as ε0 → 0. So it all makes sense, but what do we do with this?

The Tinduced/Tspontaneous = (με0/A)−1 relation is the most telling. It says that the Tinduced/Tspontaneous ratio is inversely proportional to the με0/A ratio. For example, if the energy με0 is only one fifth of the energy A, then the time for the induced transition will be five times that of a spontaneous transition. To get something like a millisecond, however, we’d need the με0/A ratio to go down to like a billionth or something, which doesn’t make sense.
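We can turn that around and ask what με0 would push Tinduced = h/(4με0) up to Feynman’s one millisecond. This is an illustrative check, using the same constants as before:

```python
h = 4.13567e-15   # Planck constant in eV·s
A = 0.5e-4        # eV
T_target = 1e-3   # s: the 'one millisecond' mentioned by Feynman

mu_eps0 = h / (4 * T_target)  # about 1e-12 eV
print(mu_eps0 / A)            # about 2e-8: an absurdly small ratio
```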

So what’s the explanation? Is Feynman hiding something from us? He’s obviously aware of these periods because, when discussing the so-called three-state maser, he notes that “The | I 〉 state has a long lifetime, so its population can be increased.” But… Well… That’s just not relevant here. He just made a mistake: the length of the maser has nothing to do with it. The thing is: once the molecule transitions from state I to state II, then that’s basically the end of the story as far as the maser operation is concerned. By transitioning, it dumps that energy 2A + μ2ε02/A into the electric field, and that’s it. That’s energy that came from outside, because the ammonia molecules were selected so as to ensure they were in state I. So all the transitions afterwards don’t really matter: the ammonia molecules involved will absorb energy as they transition, and then give it back as they transition again, and so on and so on. But that’s no extra energy, i.e. no new or outside energy: it’s just energy going back and forth from the field to the molecules and vice versa.

So, in a way, those PI and PII curves become irrelevant. Think of it: the energy that’s related to A and με0 is defined with respect to a certain orientation of the molecule as well as with respect to the direction of the electric field before it enters the apparatus, and the induced transition is to happen when the electric field inside of the cavity points south, as shown in the diagram. But then the transition happens, and that’s the end of the story, really. Our molecule is then in state II, and will oscillate between state II and I, and back again, and so on and so on, but it doesn’t mean anything anymore, as these flip-flops do not add any net energy to the system as a whole.

So that’s the crux of the matter, really. Mind you: the energy coming out of the first masers was of the order of one microwatt, i.e. 10−6 joule per second. Not a lot, but it’s something, and so you need to explain it from an ‘energy conservation’ perspective: it’s energy that came in with the molecules as they entered the cavity. So… Well… That’s it.

The obvious question, of course, is: why do we actually need the oscillating field in the cavity? If all molecules enter in the ‘upper’ state, they’ll all dump their energy anyway. Why do we need the field? Well… First, you should note that the whole idea is that our maser keeps going because it uses the energy that the molecules are dumping into its field. The more important thing, however, is that we actually do need the field to induce the transition. That’s obvious from the math. Look at the probability functions once again:

  • PI = cos²[(με0/ħ)·t]
  • PII = sin²[(με0/ħ)·t]

If there were no electric field, i.e. if ε0 = 0, then PI = 1 and PII = 0. So our ammonia molecules enter in state I and, more importantly, stay in state I forever, so there’s no chance whatsoever of transitioning to state II. Also note what I wrote above: Tinduced = h/(4με0), and, therefore, we find that Tinduced → ∞ as ε0 → 0.
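For good measure, here’s a small Python sketch of these two probability curves—the value of με0 is a made-up energy, for illustration only—verifying that the probabilities always add up to one, and that the molecule has fully transitioned to state II at t = Tinduced = h/(4με0):

```python
import math

# Sketch of the two probability curves P_I = cos^2[(mu*eps0/hbar)*t] and
# P_II = sin^2[(mu*eps0/hbar)*t]. The value of mu_eps0 below is a made-up
# energy, for illustration only.
h = 6.62607015e-34        # Planck's constant (J*s)
hbar = h / (2 * math.pi)  # reduced Planck's constant

mu_eps0 = 1.0e-24         # hypothetical mu*eps0 energy (J)

def P_I(t):
    return math.cos((mu_eps0 / hbar) * t) ** 2

def P_II(t):
    return math.sin((mu_eps0 / hbar) * t) ** 2

# T_induced = h/(4*mu_eps0): the argument (mu_eps0/hbar)*T_induced equals pi/2,
# so P_II reaches 1 there, i.e. the molecule has fully transitioned to state II.
T_induced = h / (4 * mu_eps0)

print(P_I(0.0), P_II(0.0))                          # 1.0 0.0: we start in state I
print(round(P_II(T_induced), 12))                   # 1.0: full transition at T_induced
print(round(P_I(T_induced) + P_II(T_induced), 12))  # 1.0: probabilities add up to one
```

Note that, with mu_eps0 set to zero, the argument of the cosine and sine would vanish, so P_I would stay at one forever—which is just the ε0 = 0 case discussed above.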

So… Well… That’s it. I know this is not the ‘standard textbook’ explanation of the maser—it surely isn’t Feynman’s! But… Well… Please do let me know what you think about it. What I write above indicates that the analysis is much more complicated than standard textbooks would want it to be.

There’s one more point related to masers that I need to elaborate on, and that’s its use as an ‘atomic’ clock. So let me quickly do that now.

The use of a maser as an ‘atomic’ clock 

In light of the amazing numbers involved – we talked GHz frequencies, and cycles expressed in picoseconds – we may wonder how it’s possible to ‘tune’ the frequency of the field to the ‘natural’ molecular transition frequency. It will be no surprise to hear that it’s actually not straightforward. It’s got to be right: if the frequency of the field, which we’ll denote by ω, is somewhat ‘off’ – significantly different from the molecular transition frequency ω0 – then the chance of transitioning from state I to state II shrinks significantly, and actually becomes zero for all practical purposes. That basically means that, if the frequency isn’t right, then the presence of the oscillating field doesn’t matter. In fact, the fact that the frequency has got to be right – with tolerances that, as we will see in a moment, are expressed in billionths – is why a maser can be used as an atomic clock.

The graph below illustrates the principle. If ω = ω0, then the probability that a transition from state I to II will happen is one, so PI→II(ω)/PI→II(ω0) = 1. If it’s slightly off, though, then the ratio decreases quickly, which means that the PI→II probability goes rapidly down to zero. [There are secondary and tertiary ‘bumps’ because of interference of amplitudes, but they’re insignificant.] As evidenced from the graph, the cut-off point is ω − ω0 = 2π/T, which we can re-write as 2π·f − 2π·f0 = 2π/T, which is equivalent to writing: (f − f0)/f0 = 1/(f0·T). Now, we know that f0 = 23.79 GHz, but what’s T in this expression? Well… This time around it actually is the time that our ammonia molecules spend in the resonant cavity, from going in to going out, which Feynman says is of the order of a millisecond—so that’s much more reasonable than those 40 picoseconds we calculated. So 1/(f0·T) = 1/(23.79×109·1×10−3) ≈ 0.042×10−6 = 42×10−9, i.e. 42 billionths indeed, which Feynman rounds to “five parts in 108“, i.e. five parts in a hundred million.

[Graph: the PI→II(ω)/PI→II(ω0) ratio as a function of the applied frequency ω]
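That tolerance calculation is easy enough to verify. Here’s a one-line sanity check in Python, using the f0 and T values quoted above:

```python
# Sanity check of the frequency tolerance (f - f0)/f0 = 1/(f0*T), with the
# figures quoted in the text: f0 = 23.79 GHz and T of the order of a millisecond.
f0 = 23.79e9   # ammonia transition frequency (Hz)
T = 1.0e-3     # time the molecules spend in the cavity (s)

tolerance = 1 / (f0 * T)
print(tolerance)  # ~4.2e-8, i.e. about 42 parts in a billion
```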

In short, the frequency must be ‘just right’, so as to get a significant transition probability and, therefore, get some net energy out of our maser, which, of course, will come out of our cavity as microwave radiation of the same frequency. Now that’s how one of the first ‘atomic’ clocks was built: the maser was the equivalent of a resonant circuit, and one could keep it going with little energy, because it’s so good as a resonant circuit. However, in order to get some net energy out of the system, in the form of microwave radiation of, yes, the ammonia frequency, the applied frequency had to be exactly right. To be precise, the applied frequency ω has to match the ω0 frequency, i.e. the molecular resonance frequency, with a precision expressed in billionths. As mentioned above, the power output is very limited, but it’s real: it comes out through the ‘output waveguide’ in the illustration above or, as the Encyclopædia Britannica puts it: “Output is obtained by allowing some radiation to escape through a small hole in the resonator.” :-)

In any case, a maser is not built to produce huge amounts of power. On the contrary, the state selector obviously consumes more power than comes out of the cavity, so it’s not some generator. Its main use nowadays is as a clock indeed, and so it’s that simple really: if there’s no output, then the ‘clock’ doesn’t work.

It’s an interesting topic, but you can read more about it yourself. I’ll just mention that, while the ammonia maser was effectively used as a timekeeping device, the next generation of atomic clocks was based on the hydrogen maser, which was introduced in 1960. The principle is the same. Let me quote the Encyclopædia Britannica on it: “Its output is a radio wave, whose frequency of 1,420,405,751.786 hertz (cycles per second) is reproducible with an accuracy of one part in 30 × 1012. A clock controlled by such a maser would not get out of step more than one second in 100,000 years.”
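That Britannica claim is, once again, easy to verify with some quick arithmetic: a fractional accuracy of one part in 30×1012, accumulated over 100,000 years, should indeed give a drift of less than one second:

```python
# Checking the Britannica figure: a fractional frequency accuracy of one part
# in 30*10^12, accumulated over 100,000 years, should give a drift of less
# than one second.
fractional_error = 1 / 30e12
seconds_per_year = 365.25 * 24 * 3600   # ~3.156e7 s (Julian year)
drift = fractional_error * 100_000 * seconds_per_year
print(drift)  # ~0.105 s over 100,000 years: indeed less than one second
```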

So… Well… Not bad. :-) Of course, one needs another clock to check if one’s clock is still accurate, and so that’s what’s done internationally: national standards agencies in various countries maintain a network of atomic clocks which are intercompared and kept synchronized. So these clocks define a continuous and stable time scale, collectively, which is referred to as the International Atomic Time (TAI, from the French Temps Atomique International).

Well… That’s it for today. I hope you enjoyed it.

Post scriptum: 

When I say the ammonia molecule just dumps that energy 2A + μ²ε0²/A into the electric field, and that’s “the end of the story”, then I am simplifying, of course. The ammonia molecule still has two energy levels, separated by an energy difference of 2A and, obviously, it keeps its electric dipole moment, and so that continues to play as we’ve got an electric field in the cavity. In fact, the ammonia molecule has a high polarizability coefficient, which means it’s highly sensitive to the electric field inside of the cavity. So, yes, the molecules will continue ‘dancing’ to the beat of the field indeed, absorbing and releasing energy in accordance with that 2A and με0 factor, and so the probability curves do remain relevant—of course! However, we talked about net energy going into the field, and so that’s where the ‘end of story’ story comes in. I hope I managed to make that clear.

In fact, there are lots of other complications as well, and Feynman mentions them briefly in his account of things. But let’s keep things simple here. :-) Also, if you’d want to know how we get that PI→II(ω)/PI→II(ω0) ratio, check it out in Feynman. However, I have to warn you: the math involved is not easy. Not at all, really. The set of differential equations that’s involved is complicated, and it takes a while to understand why Feynman uses the trial functions he uses. So the solution that comes out, i.e. those simple PI = cos²[(με0/ħ)·t] and PII = sin²[(με0/ħ)·t] functions, makes sense—but, if you check it out, you’ll see the whole mathematical argument is rather complicated. That’s just how it is, I am afraid. :-)