I think my previous post, on the math behind the maser, was a bit of a brain-racker. However, the results were important and, hence, it is useful to generalize them so we can apply them to other two-state systems. 🙂 Indeed, we’ll use the very same two-state framework to analyze things like the stability of neutral and ionized hydrogen molecules and the binding of diatomic molecules in general – and lots of other stuff that can be analyzed as a two-state system. However, let’s first have a look at the math once more. More importantly, **let’s analyze the physics behind it.**

At the center of our little Universe here 🙂 is the fact that the *dynamics* of a two-state system are described by a set of two differential equations, which we wrote as:
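In full, with ħ the reduced Planck constant and H_{ij} the Hamiltonian coefficients, that set reads:

```latex
i\hbar \frac{dC_1}{dt} = H_{11} C_1 + H_{12} C_2
\qquad\text{and}\qquad
i\hbar \frac{dC_2}{dt} = H_{21} C_1 + H_{22} C_2
```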

It’s obvious these two equations are usually *not* easy to solve: the C_{1} and C_{2} functions are complex-valued *amplitudes* which vary not only in time but also in space, obviously, but, in fact, that’s *not* the problem. **The issue is that the Hamiltonian coefficients H_{ij} may also vary in space and in time**, and *that*‘s what makes things quite nightmarish to solve. [Note that, while H_{11} and H_{22} represent some energy level and, hence, are usually *real* numbers, H_{12} and H_{21} may be complex-valued. However, in the cases we’ll be analyzing, they will be real numbers too, as they will usually also represent some *energy*. Having noted that, being real- or complex-valued is *not* the problem: we can work with complex numbers and, as you can see from the matrix equation above, the *i*/ħ factor in front of our differential equations results in a complex-valued coefficient matrix anyway.]

So… Yes. It’s those *non*-constant Hamiltonian coefficients that caused us so much trouble when trying to analyze how a maser works or, more generally, how *induced* transitions work. [The same equations apply to blackbody radiation, indeed, and to other phenomena involving induced transitions.] In any case, we won’t do that again – not now, at least – and so we’ll just go back to analyzing ‘simple’ two-state systems, i.e. systems with *constant* Hamiltonian coefficients.

Now, even for such simple systems, Feynman made life super-easy for us – *too* easy, I think – because he didn’t use the general mathematical approach to solve the issue at hand. That more general approach would be based on a technique you may or may not remember from your high school or university days: it’s based on finding the so-called eigenvalues and eigenvectors of the coefficient matrix. I won’t say too much about that, as there’s excellent online coverage of it, but… Well… We *do* need to relate the two approaches, and so that’s where math and physics meet. So let’s have a look at it all.

If we write the first-order time derivatives of those C_{1} and C_{2} functions as C_{1}′ and C_{2}′ respectively (so we just put a prime instead of writing dC_{1}/dt and dC_{2}/dt), and we put them in a two-by-one *column matrix*, which I’ll write as **C′**, and then, likewise, we also put the functions themselves, i.e. C_{1} and C_{2}, in a column matrix, which I’ll write as **C**, then the system of equations can be written as the following simple expression:

**C′** = A**C**

One can then show that the general solution will be equal to:

**C** = *a*_{1}·*e*^{λI·t}·**v**_{I} + *a*_{2}·*e*^{λII·t}·**v**_{II}

The λ_{I} and λ_{II} in the exponential functions are the *eigenvalues* of A, so that’s that two-by-two matrix in the equation, i.e. the *coefficient *matrix with the −(*i*/ħ)H_{ij }elements. The **v**_{I} and **v**_{II} column matrices in the solution are the associated *eigenvectors*. As for *a*_{1} and *a*_{2}, these are coefficients that depend on the initial conditions of the system as well as, in our case at least, the normalization condition: the *probabilities *we’ll calculate have to add up to one. So… Well… It all comes with the system, as we’ll see in a moment.

Let’s first look at those *eigenvalues*. We get them by calculating the *determinant* of the A−λI matrix, and equating it to zero, so we write det(A−λI) = 0. If A is a two-by-two matrix (which it is for the two-state systems that we are looking at), then we get a quadratic equation, and its two solutions will be those λ_{I} and λ_{II} values. The two eigenvalues of *our* system above can be written as:

λ_{I} = −(*i*/ħ)·E_{I} and λ_{II} = −(*i*/ħ)·E_{II}.
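To make this concrete, here is a small numerical sketch. The E₀ and A values are made-up sample numbers, plugged into an ammonia-style Hamiltonian with H₁₁ = H₂₂ = E₀ and H₁₂ = H₂₁ = −A; numpy’s eigenvalue routine, applied to the coefficient matrix −(*i*/ħ)H, indeed returns λ = −(*i*/ħ)·E, with the two E values straddling E₀:

```python
import numpy as np

hbar = 1.0            # natural units (an assumption for this sketch)
E0, A = 1.0, 0.2      # made-up sample energy level and coupling

# Ammonia-style Hamiltonian: H11 = H22 = E0, H12 = H21 = -A
H = np.array([[E0, -A],
              [-A, E0]], dtype=complex)

# Coefficient matrix of the system C' = A_coef * C
A_coef = (-1j / hbar) * H

# Its eigenvalues are lambda = -(i/hbar)*E, so we recover E = i*hbar*lambda
lams, vecs = np.linalg.eig(A_coef)
energies = np.sort((1j * hbar * lams).real)
print(energies)   # lower level E_II = E0 - A, upper level E_I = E0 + A
```

The eigenvectors returned in the columns of `vecs` are (up to a phase) the [1, 1]/√2 and [1, −1]/√2 combinations we’ll meet below.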

E_{I} and E_{II} are *two possible values *for the *energy* of our system, which are referred to as the upper and the lower energy level respectively. We can calculate them as:
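Spelled out in terms of the Hamiltonian coefficients – this is the standard result from Feynman’s chapters on two-state systems – those two energy levels are:

```latex
E_{I} = \frac{H_{11} + H_{22}}{2} + \sqrt{\left(\frac{H_{11} - H_{22}}{2}\right)^2 + H_{12}H_{21}}
\qquad\text{and}\qquad
E_{II} = \frac{H_{11} + H_{22}}{2} - \sqrt{\left(\frac{H_{11} - H_{22}}{2}\right)^2 + H_{12}H_{21}}
```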

Note that we use the *Roman *numerals I and II for these two energy levels, rather than the usual Arabic numbers 1 and 2. That’s in line with Feynman’s notation: it relates to a special set of *base *states that we will introduce shortly. Indeed, plugging them into the *a*_{1}*e*^{λI·t} and *a*_{2}*e*^{λII·t} expressions gives us *a*_{1}*e*^{−(i/ħ)·EI·t} and *a*_{2}*e*^{−(i/ħ)·EII·t} and…

Well… It’s time to go back to the physics class now. What are we writing here, *really*? These two functions are *amplitudes *for so-called *stationary states*, i.e. states that are associated with *probabilities *that do *not *change in time. Indeed, it’s easy to see that their *absolute *square is equal to:

- P_{I} = |*a*_{1}*e*^{−(i/ħ)·EI·t}|^{2} = |*a*_{1}|^{2}·|*e*^{−(i/ħ)·EI·t}|^{2} = |*a*_{1}|^{2}
- P_{II} = |*a*_{2}*e*^{−(i/ħ)·EII·t}|^{2} = |*a*_{2}|^{2}·|*e*^{−(i/ħ)·EII·t}|^{2} = |*a*_{2}|^{2}
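A quick numerical check of that claim, using a made-up sample energy and natural units: the *e*^{−(i/ħ)·E·t} amplitude is a pure phase factor, so its absolute square never budges, whatever t is.

```python
import numpy as np

hbar = 1.0                       # natural units (an assumption)
E_I = 1.3                        # made-up sample energy value
t = np.linspace(0.0, 10.0, 50)   # a grid of times

# Stationary-state amplitude e^{-(i/hbar)*E*t}: a pure phase factor
amp = np.exp(-1j * E_I * t / hbar)
probs = np.abs(amp) ** 2

# The modulus of a phase factor is 1, so the probability is constant
print(probs.max() - probs.min())   # essentially zero
```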

Now, the *a*_{1} and *a*_{2} coefficients depend on the initial and/or normalization conditions of the system, so let’s leave those out for the moment and write the rather special amplitudes *e*^{−(i/ħ)·EI·t} and *e*^{−(i/ħ)·EII·t} as:

- C_{I} = 〈 I | ψ 〉 = *e*^{−(i/ħ)·EI·t}
- C_{II} = 〈 II | ψ 〉 = *e*^{−(i/ħ)·EII·t}

As you can see, there are two *base states* that go with these amplitudes, which we denote as state | I 〉 and state | II 〉 respectively, so we can write the *state vector* of our two-state system – like our ammonia molecule, or whatever – as:

| ψ 〉 = | I 〉 C_{I }*+ *| II 〉 C_{II }= | I 〉〈 I | ψ 〉 + | II 〉〈 II | ψ 〉

In case you forgot, you can apply the magical | = ∑ | i 〉 〈 i | formula to see this makes sense: | ψ 〉 = ∑ | i 〉 〈 i | ψ 〉 = | I 〉 〈 I | ψ 〉 + | II 〉 〈 II | ψ 〉 = | I 〉 C_{I }*+ *| II 〉 C_{II}.
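The expansion above can be illustrated numerically. Representing the base states | I 〉 and | II 〉 as orthonormal column vectors is an illustrative choice on my part (base states have no unique vector representation, as noted further down), but it does show the ∑ | i 〉 〈 i | machinery at work:

```python
import numpy as np

# Represent | I > and | II > as orthonormal basis vectors (illustrative choice)
ket_I  = np.array([1.0, 0.0])
ket_II = np.array([0.0, 1.0])

psi = np.array([0.6, 0.8])    # some normalized state: 0.36 + 0.64 = 1

C_I  = ket_I  @ psi           # < I | psi >
C_II = ket_II @ psi           # < II | psi >

# | psi > = | I >< I | psi > + | II >< II | psi >
reconstructed = ket_I * C_I + ket_II * C_II
print(np.allclose(reconstructed, psi))   # → True
```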

Of course, we should also be able to revert back to the base states we started out with so, once we’ve calculated C_{1 }and C_{2}, we can also write the state of our system in terms of state | 1 〉 and | 2 〉, which are the states as we defined them when we first looked at the problem. 🙂 In short, once we’ve got C_{1 }and C_{2}, we can also write:

| ψ 〉 = | 1 〉 C_{1 }*+ *| 2 〉 C_{2 }= | 1 〉〈 1 | ψ 〉 + | 2 〉〈 2 | ψ 〉

So… Well… I guess you can sort of see how this is coming together. If we substitute what we’ve got so far, we get:

**C** = *a*_{1}·C_{I}·**v**_{I} + *a*_{2}·C_{II}·**v**_{II}

Hmm… So what’s **that**? We’ve seen something like **C** = *a*_{1}·C_{I} + *a*_{2}·C_{II}, as we wrote something like C_{1} = (a/2)·C_{I} + (b/2)·C_{II} in our previous posts, for example—but what are those *eigenvectors* **v**_{I} and **v**_{II}? Why do we need them?

Well… They just pop up because we’re solving the system as mathematicians would do it, i.e. *not* as *Feynman-the-Great-Physicist-and-Teacher-cum-Simplifier* does it. 🙂 From a mathematical point of view, they’re the vectors that solve the (A−λ_{I}I)**v**_{I} = **0** and (A−λ_{II}I)**v**_{II} = **0** equations, so they come with the *eigenvalues*, and their components will depend on the eigenvalues λ_{I} and λ_{II} as well as on the Hamiltonian coefficients. [I is the identity matrix in these matrix equations.] In fact, because the eigenvalues are written in terms of the Hamiltonian coefficients, they depend on the Hamiltonian coefficients *only*, but then it will be convenient to use the E_{I} and E_{II} values as a shorthand.

Of course, one can also look at them as *base vectors* that *uniquely* specify the solution **C** as a linear combination of **v**_{I} and **v**_{II}. Indeed, just ask your math teacher: the eigenvectors form the *natural basis*, i.e. the one you’d use when *diagonalizing* the coefficient matrix A, which is what you did when solving systems of equations back in high school or whatever you were doing at university. But then you probably forgot, right? 🙂 Well… It’s all rather advanced mathematical stuff, and so let’s cut some corners here. 🙂

We *know*, from the physics of the situations, that the C_{1} and C_{2} functions and the C_{I} and C_{II} functions are related in the same way as the associated base states. To be precise, we wrote:

This two-by-two matrix here is the transformation matrix for a rotation of the state filtering apparatus about the y-axis, over an angle equal to α, when only two states are involved. You’ve seen it before, but we wrote it differently:

In fact, we can be more precise: the angle that we chose was equal to *minus *90 degrees. Indeed, we wrote our transformation as:

[Check the values against α = −π/2.] However, let’s keep our analysis somewhat more general for the moment, so as to see if we really need to specify that angle. After all, we’re looking for a *general *solution here, so… Well… Remembering the definition of the *inverse *of a matrix (and the fact that cos^{2}α + sin^{2}α = 1), we can write:

Now, if we write the components of **v**_{I} and **v**_{II} as *v*_{I1} and *v*_{I2}, and *v*_{II1} and *v*_{II2} respectively, then the **C** = *a*_{1}·C_{I}·**v**_{I} + *a*_{2}·C_{II}·**v**_{II} expression is equivalent to:

- C_{1} = *a*_{1}·*v*_{I1}·C_{I} + *a*_{2}·*v*_{II1}·C_{II}
- C_{2} = *a*_{1}·*v*_{I2}·C_{I} + *a*_{2}·*v*_{II2}·C_{II}

Hence, *a*_{1}*·**v*_{I1 }= *a*_{2}*·**v*_{II2 }= cos(α/2) and *a*_{2}·*v*_{II1} = −*a*_{1}·*v*_{I2 }= sin(α/2). What can we do with this? Can we solve this? Not really: we’ve got two equations and four variables. So we need to look at the normalization and starting conditions now. For example, we can choose our t = 0 point such that our two-state system is in state 1, or in state I. And then we know it will *not *be in state 2, or state II. In short, we can impose conditions like:

|C_{1}(0)|^{2} = 1 = |*a*_{1}·*v*_{I1}·C_{I}(0) + *a*_{2}·*v*_{II1}·C_{II}(0)|^{2} and |C_{2}(0)|^{2} = 0 = |*a*_{1}·*v*_{I2}·C_{I}(0) + *a*_{2}·*v*_{II2}·C_{II}(0)|^{2}

However, as Feynman puts it: “*These conditions do not uniquely specify the coefficients. They are still undetermined by an arbitrary phase*.”

Hmm… He means the α, of course. So… What to do? Well… It’s simple. What he’s saying here is that we *do* need to specify that transformation angle. Just look at it: **the a_{1}·v_{I1 }= a_{2}·v_{II2 }= cos(α/2) and a_{2}·v_{II1} = −a_{1}·v_{I2 }= sin(α/2) conditions only make sense when we equate α with −π/2**, so we can write:

*a*_{1}·*v*_{I1} = *a*_{2}·*v*_{II2} = cos(−π/4) = 1/√2 and *a*_{2}·*v*_{II1} = −*a*_{1}·*v*_{I2} = sin(−π/4) = −1/√2

It’s only then that we get a *unique* ratio for *a*_{1}/*a*_{2} = *v*_{II2}/*v*_{I1} = −*v*_{II1}/*v*_{I2}. [In case you think there are *two* angles in the circle for which the cosine equals *minus* the sine – or, what amounts to the same, for which the sine equals *minus* the cosine – then… Well… You’re right, but we’ve got α *divided by two* in the argument. So if α/2 is equal to the ‘other’ angle, i.e. 3π/4, then α itself will be equal to 6π/4 = 3π/2. And so that’s the same −π/2 angle as above: 3π/2 − 2π = −π/2, indeed. So… Yes. It all makes sense.]
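As a sanity check on those values, here is a short numerical sketch. It builds the transformation matrix from the half-angle coefficients quoted above, with α = −π/2, and verifies the two claims made along the way: all entries come out as ±1/√2, and, because cos²α + sin²α = 1, the inverse of the matrix is just its transpose:

```python
import numpy as np

alpha = -np.pi / 2   # the specific transformation angle the post settles on

# Coefficients from the text: a1*v_I1 = a2*v_II2 = cos(alpha/2),
#                             a2*v_II1 = -a1*v_I2 = sin(alpha/2),
# so [C1; C2] = T [C_I; C_II] with:
T = np.array([[ np.cos(alpha / 2), np.sin(alpha / 2)],
              [-np.sin(alpha / 2), np.cos(alpha / 2)]])

print(T)                                    # entries are all ±1/√2
print(np.allclose(np.linalg.inv(T), T.T))   # → True: inverse = transpose
```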

What are we doing here? Well… We’re sort of imposing a ‘common-sense’ condition here. Think of it: if the *v*_{II2}/*v*_{I1} and −*v*_{II1}/*v*_{I2} ratios were different, we’d have a *huge* problem, because we’d have two different values for the *a*_{1}/*a*_{2} ratio! And… Well… That just doesn’t make sense. The system *must* come with some *specific* value for *a*_{1} and *a*_{2}. We can’t just invent two ‘new’ ones!

So… Well… We are alright now, and we can analyze whatever two-state system we want now. One example was our ammonia molecule in an electric field, for which we found that the following systems of equations were fully equivalent:

So, the upshot is that you should always remember that everything we’re doing is subject to the condition that the ‘1’ and ‘2’ base states and the ‘I’ and ‘II’ base states (Feynman suggests reading I and II as ‘Eins’ and ‘Zwei’ – or try ‘*Uno*‘ and ‘*Duo*‘ instead 🙂 – so as to distinguish them from ‘one’ and ‘two’) are ‘separated’ by an angle of (minus) 90 degrees. [Of course, I am not using the ‘right’ language here, obviously. I should say ‘projected’, or ‘orthogonal’, perhaps, but then that’s hard to say for base states: the [1/√2, 1/√2] and [1/√2, −1/√2] vectors are obviously orthogonal, because their dot product is zero, but, as you know, the base states themselves do *not* have such a geometrical interpretation: they’re just ‘objects’ in what’s referred to as a *Hilbert space*. But… Well… I shouldn’t dwell on that here.]

So… There we are. We’re all set. Good to go! Please note that, in the absence of an electric field, the two Hamiltonians are even simpler:

In fact, they’ll usually do the trick in what we’re going to deal with now.

[…] So… Well… That’s it, really! 🙂 We’re now going to apply all this in the next posts, so as to analyze things like the stability of neutral and ionized hydrogen molecules and the binding of diatomic molecules. More interestingly, we’re going to talk about virtual particles. 🙂

**Addendum**: I started writing this post because Feynman actually *does* give the impression there’s some kind of ‘doublet’ of *a*_{1} and *a*_{2} coefficients as he starts his chapter on ‘other two-state systems’. It’s the symbols he’s using: ‘his’ *a*_{1} and *a*_{2}, and the other doublet with the primes, i.e. *a*_{1}′ and *a*_{2}′, are the *transformation amplitudes*, *not* the coefficients that I am calculating above, and that he was calculating (in the previous chapter) too. So… Well… Again, the only thing you should remember from this post is that 90 degree angle as a sort of *physical* ‘common sense condition’ on the system.

Having criticized the Great Teacher for not being consistent in his use of symbols, I should add that the interesting thing is that, while confusing, his summary in that chapter does give us *precise* formulas for those transformation amplitudes, which he didn’t do before. Indeed, if we write them as *a*, *b*, *c* and *d* respectively (so as to avoid that confusing *a*_{1} and *a*_{2}, and then *a*_{1}′ and *a*_{2}′, notation), so if we have:

then one can show that:

That’s, of course, fully consistent with the ratios we introduced above, as well as with the orthogonality condition that comes with those *eigenvectors*. Indeed, if *a*/*b* = −1 and *c*/*d* = +1, then *a*/*b* = −*c*/*d* and, therefore, *a*·*d* + *b*·*c* = 0. [I’ll leave it to you to compare the coefficients so as to check that’s the orthogonality condition indeed.]

In short, it all shows everything *does *come out of the system in a *mathematical *way too, so the math does match the physics once again—as it should, of course! 🙂
