I think my previous post, on the math behind the maser, was a bit of a brain racker. However, the results were important and, hence, it is useful to generalize them so we can apply them to other two-state systems. 🙂 Indeed, we'll use the very same two-state framework to analyze things like the stability of neutral and ionized hydrogen molecules and the binding of diatomic molecules in general, and lots of other stuff that can be analyzed as a two-state system. However, let's first have a look at the math once more. More importantly, **let's analyze the physics behind it.**

At the center of our little Universe here 🙂 is the fact that the *dynamics* of a two-state system are described by a set of two differential equations, which we wrote as:

*i*ħ·(dC_{1}/dt) = H_{11}C_{1} + H_{12}C_{2}
*i*ħ·(dC_{2}/dt) = H_{21}C_{1} + H_{22}C_{2}

It's obvious these two equations are usually *not* easy to solve: the C_{1} and C_{2} functions are complex-valued *amplitudes* which vary not only in time but also in space, obviously, but, in fact, that's *not* the problem. **The issue is that the Hamiltonian coefficients H_{ij} may also vary in space and in time**, and so *that*'s what makes things quite nightmarish to solve. [Note that, while H_{11} and H_{22} represent some energy level and, hence, are usually *real* numbers, H_{12} and H_{21} may be complex-valued. However, in the cases we'll be analyzing, they will be real numbers too, as they will usually also represent some *energy*. Having noted that, being real- or complex-valued is *not* the problem: we can work with complex numbers and, as you can see from the matrix equation above, the *i*/ħ factor in front of our differential equations results in a complex-valued coefficient matrix anyway.]

So… Yes. It's those *non*-constant Hamiltonian coefficients that caused us so much trouble when trying to analyze how a maser works or, more generally, how *induced* transitions work. [The same equations apply to blackbody radiation, indeed, and to other phenomena involving induced transitions.] In any case, we won't do that again, not now, at least, and so we'll just go back to analyzing 'simple' two-state systems, i.e. systems with *constant* Hamiltonian coefficients.

Now, even for such simple systems, Feynman made life super-easy for us, *too* easy, I think, because he didn't use the general mathematical approach to solve the issue at hand. That more general approach is based on a technique you may or may not remember from your high school or university days: finding the so-called eigenvalues and eigenvectors of the coefficient matrix. I won't say too much about that, as there's excellent online coverage of it, but… Well… We *do* need to relate the two approaches, and that's where math and physics meet. So let's have a look at it all.

If we write the first-order time derivatives of those C_{1} and C_{2} functions as C_{1}′ and C_{2}′ respectively (so we just put a prime instead of writing dC_{1}/dt and dC_{2}/dt), and we put them in a two-by-one *column matrix*, which I'll write as **C**′, and then, likewise, we also put the functions themselves, i.e. C_{1} and C_{2}, in a column matrix, which I'll write as **C**, then the system of equations can be written as the following simple expression:

**C**′ = A**C**

One can then show that the general solution will be equal to:

**C** = *a*_{1}*e*^{λI·t}**v**_{I} + *a*_{2}*e*^{λII·t}**v**_{II}

The λ_{I} and λ_{II} in the exponential functions are the *eigenvalues* of A, so that's that two-by-two matrix in the equation, i.e. the *coefficient* matrix with the −(*i*/ħ)H_{ij} elements. The **v**_{I} and **v**_{II} column matrices in the solution are the associated *eigenvectors*. As for *a*_{1} and *a*_{2}, these are coefficients that depend on the initial conditions of the system as well as, in our case at least, the normalization condition: the *probabilities* we'll calculate have to add up to one. So… Well… It all comes with the system, as we'll see in a moment.

Let's first look at those *eigenvalues*. We get them by calculating the *determinant* of the A−λI matrix, and equating it to zero, so we write det(A−λI) = 0. If A is a two-by-two matrix (which it is for the two-state systems that we are looking at), then we get a quadratic equation, and its two solutions will be those λ_{I} and λ_{II} values. The two eigenvalues of *our* system above can be written as:

λ_{I} = −(*i*/ħ)·E_{I} and λ_{II} = −(*i*/ħ)·E_{II}.
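To make this concrete, here's a small numerical sketch of my own (not from the original argument), with ħ set to 1 and made-up sample values E0 and A_amp: for a constant Hamiltonian with H_{11} = H_{22} = E0 and H_{12} = H_{21} = −A_amp, the eigenvalues of the coefficient matrix A = −(*i*/ħ)·H do come out as −(*i*/ħ)·E, with E the two energy levels E0 ± A_amp:

```python
import numpy as np

# Illustrative sketch only: ħ = 1 and E0, A_amp are made-up sample values.
# For a constant Hamiltonian with H11 = H22 = E0 and H12 = H21 = -A_amp,
# the eigenvalues of the coefficient matrix A = -(i/ħ)·H should be
# λ = -(i/ħ)·E, with E one of the two energy levels E0 ± A_amp.
hbar = 1.0
E0, A_amp = 2.0, 0.5
H = np.array([[E0, -A_amp],
              [-A_amp, E0]], dtype=complex)
A = -1j / hbar * H                    # coefficient matrix of C' = A·C

lam, v = np.linalg.eig(A)             # eigenvalues λ and eigenvectors v
energies = (1j * hbar * lam).real     # recover E from λ = -(i/ħ)·E

print(np.allclose(np.sort(energies), [E0 - A_amp, E0 + A_amp]))  # → True
```

The eigenvectors come out of the very same `np.linalg.eig` call, which is the point of the mathematician's approach: eigenvalues and eigenvectors arrive together.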

E_{I} and E_{II} are *two possible values* for the *energy* of our system, which are referred to as the upper and the lower energy level respectively. We can calculate them as:

E_{I} = (H_{11} + H_{22})/2 + √[(H_{11} − H_{22})^{2}/4 + H_{12}H_{21}]
E_{II} = (H_{11} + H_{22})/2 − √[(H_{11} − H_{22})^{2}/4 + H_{12}H_{21}]

Note that we use the *Roman* numerals I and II for these two energy levels, rather than the usual Arabic numbers 1 and 2. That's in line with Feynman's notation: it relates to a special set of *base* states that we will introduce shortly. Indeed, plugging them into the *a*_{1}*e*^{λI·t} and *a*_{2}*e*^{λII·t} expressions gives us *a*_{1}*e*^{−(i/ħ)·EI·t} and *a*_{2}*e*^{−(i/ħ)·EII·t} and…

Well… It's time to go back to the physics class now. What are we writing here, *really*? These two functions are *amplitudes* for so-called *stationary states*, i.e. states that are associated with *probabilities* that do *not* change in time. Indeed, it's easy to see that their *absolute* square is equal to:

- P_{I} = |*a*_{1}*e*^{−(i/ħ)·EI·t}|^{2} = |*a*_{1}|^{2}·|*e*^{−(i/ħ)·EI·t}|^{2} = |*a*_{1}|^{2}
- P_{II} = |*a*_{2}*e*^{−(i/ħ)·EII·t}|^{2} = |*a*_{2}|^{2}·|*e*^{−(i/ħ)·EII·t}|^{2} = |*a*_{2}|^{2}
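As a quick numeric sanity check (ħ = 1 and the sample energy and times below are arbitrary choices of mine), the absolute square of such an exponential really is constant in time:

```python
import numpy as np

# Stationary-state check: |e^{-(i/ħ)·E·t}|² = 1 for any real E and any t.
# ħ = 1 and the E, t values below are arbitrary illustrative choices.
hbar, E = 1.0, 2.5
probabilities = [abs(np.exp(-1j * E * t / hbar)) ** 2 for t in (0.0, 1.0, 7.3)]
print(np.allclose(probabilities, 1.0))   # → True
```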

Now, the *a*_{1} and *a*_{2} coefficients depend on the initial and/or normalization conditions of the system, so let's leave those out for the moment and write the rather special amplitudes *e*^{−(i/ħ)·EI·t} and *e*^{−(i/ħ)·EII·t} as:

- C_{I} = ⟨ I | ψ ⟩ = *e*^{−(i/ħ)·EI·t}
- C_{II} = ⟨ II | ψ ⟩ = *e*^{−(i/ħ)·EII·t}

As you can see, there are two *base states* that go with these amplitudes, which we denote as state | I ⟩ and | II ⟩ respectively, so we can write the *state vector* of our two-state system, like our ammonia molecule, or whatever, as:

| ψ ⟩ = | I ⟩C_{I} + | II ⟩C_{II} = | I ⟩⟨ I | ψ ⟩ + | II ⟩⟨ II | ψ ⟩

In case you forgot, you can apply the magical ∑ | i ⟩ ⟨ i | = 1 formula to see this makes sense: | ψ ⟩ = ∑ | i ⟩ ⟨ i | ψ ⟩ = | I ⟩ ⟨ I | ψ ⟩ + | II ⟩ ⟨ II | ψ ⟩ = | I ⟩C_{I} + | II ⟩C_{II}.
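If you want to see that 'magical' formula at work numerically, here's a small sketch of my own, using ordinary column vectors as stand-ins for the base states, with the ±1/√2 components that show up later in this post:

```python
import numpy as np

# Sketch of the ∑ |i⟩⟨i| = 1 identity, using ordinary column vectors as
# stand-ins for the base states: |I⟩ ~ [1/√2, -1/√2], |II⟩ ~ [1/√2, 1/√2].
state_I = np.array([1.0, -1.0]) / np.sqrt(2)
state_II = np.array([1.0, 1.0]) / np.sqrt(2)

# The sum of outer products |I⟩⟨I| + |II⟩⟨II| should be the 2×2 identity.
resolution = np.outer(state_I, state_I) + np.outer(state_II, state_II)
print(np.allclose(resolution, np.eye(2)))   # → True
```

That the sum of the two projectors is the identity is exactly why we can expand | ψ ⟩ over either pair of base states.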

Of course, we should also be able to revert back to the base states we started out with so, once we've calculated C_{1} and C_{2}, we can also write the state of our system in terms of state | 1 ⟩ and | 2 ⟩, which are the states as we defined them when we first looked at the problem. 🙂 In short, once we've got C_{1} and C_{2}, we can also write:

| ψ ⟩ = | 1 ⟩C_{1} + | 2 ⟩C_{2} = | 1 ⟩⟨ 1 | ψ ⟩ + | 2 ⟩⟨ 2 | ψ ⟩

So… Well… I guess you can sort of see how this is coming together. If we substitute what we've got so far, we get:

**C** = *a*_{1}·C_{I}·**v**_{I} + *a*_{2}·C_{II}·**v**_{II}

Hmm… So what's **that**? We've seen something like **C** = *a*_{1}·C_{I} + *a*_{2}·C_{II}, as we wrote something like C_{1} = (a/2)·C_{I} + (b/2)·C_{II} in our previous posts, for example, but what are those *eigenvectors* **v**_{I} and **v**_{II}? Why do we need them?

Well… They just pop up because we're solving the system as mathematicians would do it, i.e. *not* as *Feynman-the-Great-Physicist-and-Teacher-cum-Simplifier* does it. 🙂 From a mathematical point of view, they're the vectors that solve the (A−λ_{I}I)**v**_{I} = **0** and (A−λ_{II}I)**v**_{II} = **0** equations, so they come with the *eigenvalues*, and their components will depend on the eigenvalues λ_{I} and λ_{II} as well as the Hamiltonian coefficients. [I is the identity matrix in these matrix equations.] In fact, because the eigenvalues are written in terms of the Hamiltonian coefficients, they depend on the Hamiltonian coefficients *only*, but it will be convenient to use the E_{I} and E_{II} values as a shorthand.

Of course, one can also look at them as *base vectors* that *uniquely* specify the solution **C** as a linear combination of **v**_{I} and **v**_{II}. Indeed, just ask your math teacher, or look it up: the eigenvectors form the *natural basis*, i.e. the one you'd use when *diagonalizing* the coefficient matrix A, which is what you did when solving systems of equations back in high school or whatever you were doing at university. But then you probably forgot, right? 🙂 Well… It's all rather advanced mathematical stuff, and so let's cut some corners here. 🙂

We *know*, from the physics of the situation, that the C_{1} and C_{2} functions and the C_{I} and C_{II} functions are related in the same way as the associated base states. To be precise, we wrote:

This two-by-two matrix here is the transformation matrix for a rotation of a state filtering apparatus about the y-axis, over an angle equal to α, when only two states are involved. You've seen it before, but we wrote it differently:

In fact, we can be more precise: the angle that we chose was equal to *minus* 90 degrees. Indeed, we wrote our transformation as:

[Check the values against α = −π/2.] However, let's keep our analysis somewhat more general for the moment, so as to see if we really need to specify that angle. After all, we're looking for a *general* solution here, so… Well… Remembering the definition of the *inverse* of a matrix (and the fact that cos^{2}α + sin^{2}α = 1), we can write:
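The explicit matrices appeared as images in the original post, but, assuming the standard half-angle rotation form the surrounding text describes, a short numeric sketch confirms the point about the inverse:

```python
import numpy as np

# Assumed half-angle rotation form of the transformation matrix (the post
# shows the explicit matrix as an image, so this form is an assumption).
# Its inverse is simply its transpose, precisely because cos² + sin² = 1.
def R(alpha):
    c, s = np.cos(alpha / 2), np.sin(alpha / 2)
    return np.array([[c, s],
                     [-s, c]])

M = R(-np.pi / 2)                              # the α = -π/2 case from the post
print(np.allclose(np.linalg.inv(M), M.T))      # → True
print(np.allclose(np.abs(M), 1 / np.sqrt(2)))  # → True: all entries are ±1/√2
```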

Now, if we write the components of **v**_{I} and **v**_{II} as *v*_{I1} and *v*_{I2}, and *v*_{II1} and *v*_{II2} respectively, then the **C** = *a*_{1}·C_{I}·**v**_{I} + *a*_{2}·C_{II}·**v**_{II} expression is equivalent to:

- C_{1} = *a*_{1}·*v*_{I1}·C_{I} + *a*_{2}·*v*_{II1}·C_{II}
- C_{2} = *a*_{1}·*v*_{I2}·C_{I} + *a*_{2}·*v*_{II2}·C_{II}

Hence, *a*_{1}·*v*_{I1} = *a*_{2}·*v*_{II2} = cos(α/2) and *a*_{2}·*v*_{II1} = −*a*_{1}·*v*_{I2} = sin(α/2). What can we do with this? Can we solve it? Not really: we've got two equations and four variables. So we need to look at the normalization and starting conditions now. For example, we can choose our t = 0 point such that our two-state system is in state 1, or in state I. And then we know it will *not* be in state 2, or state II. In short, we can impose conditions like:

|C_{1}(0)|^{2} = 1 = |*a*_{1}·*v*_{I1}·C_{I}(0) + *a*_{2}·*v*_{II1}·C_{II}(0)|^{2} and |C_{2}(0)|^{2} = 0 = |*a*_{1}·*v*_{I2}·C_{I}(0) + *a*_{2}·*v*_{II2}·C_{II}(0)|^{2}

However, as Feynman puts it: "*These conditions do not uniquely specify the coefficients. They are still undetermined by an arbitrary phase.*"

Hmm… He means the α, of course. So… What to do? Well… It's simple. What he's saying here is that we *do* need to specify that transformation angle. Just look at it: **the a_{1}·v_{I1} = a_{2}·v_{II2} = cos(α/2) and a_{2}·v_{II1} = −a_{1}·v_{I2} = sin(α/2) conditions only make sense when we equate α with −π/2**, so we can write:

*a*_{1}·*v*_{I1} = *a*_{2}·*v*_{II2} = cos(−π/4) = 1/√2 and *a*_{2}·*v*_{II1} = −*a*_{1}·*v*_{I2} = sin(−π/4) = −1/√2

It's only then that we get a *unique* ratio for *a*_{1}/*a*_{2} = *v*_{I1}/*v*_{II2} = −*v*_{II1}/*v*_{I2}. [In case you think there are *two* angles in the circle for which the cosine equals *minus* the sine, or, what amounts to the same, for which the sine equals *minus* the cosine, then… Well… You're right, but we've got α *divided by two* in the argument. So if α/2 is equal to the 'other' angle, i.e. 3π/4, then α itself will be equal to 6π/4 = 3π/2. And so that's the same −π/2 angle as above: 3π/2 − 2π = −π/2, indeed. So… Yes. It all makes sense.]
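A quick numeric double-check of that uniqueness argument (my own sketch, nothing more):

```python
import numpy as np

# The condition cos(α/2) = -sin(α/2) is satisfied both by α = -π/2 and by
# the 'other' solution α = 3π/2, which is the same angle modulo 2π.
for alpha in (-np.pi / 2, 3 * np.pi / 2):
    assert np.isclose(np.cos(alpha / 2), -np.sin(alpha / 2))

print(np.isclose(np.cos(-np.pi / 4), 1 / np.sqrt(2)))    # → True
print(np.isclose(np.sin(-np.pi / 4), -1 / np.sqrt(2)))   # → True
```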

What are we doing here? Well… We're sort of imposing a 'common-sense' condition here. Think of it: if the *v*_{I1}/*v*_{II2} and −*v*_{II1}/*v*_{I2} ratios were different, we'd have a *huge* problem, because we'd have two different values for the *a*_{1}/*a*_{2} ratio! And… Well… That just doesn't make sense. The system *must* come with some *specific* value for *a*_{1} and *a*_{2}. We can't just invent two 'new' ones!

So… Well… We are alright now, and we can analyze whatever two-state system we want. One example was our ammonia molecule in an electric field, for which we found that the following systems of equations were fully equivalent:

So, the upshot is that you should always remember that everything we're doing is subject to the condition that the '1' and '2' base states and the 'I' and 'II' base states (Feynman suggests to read I and II as 'Eins' and 'Zwei', or try '*Uno*' and '*Duo*' instead 🙂, so as to make a difference with 'one' and 'two') are 'separated' by an angle of (minus) 90 degrees. [Of course, I am not using the 'right' language here, obviously. I should say 'projected', or 'orthogonal', perhaps, but then that's hard to say for base states: the [1/√2, 1/√2] and [1/√2, −1/√2] vectors are obviously orthogonal, because their dot product is zero, but, as you know, the base states themselves do *not* have such geometrical interpretation: they're just 'objects' in what's referred to as a *Hilbert space*. But… Well… I shouldn't dwell on that here.]

So… There we are. We're all set. Good to go! Please note that, in the absence of an electric field, the two Hamiltonians are even simpler:

In fact, they'll usually do the trick in what we're going to deal with now.

[…] So… Well… That's it, really! 🙂 We're now going to apply all this in the next posts, so as to analyze things like the stability of neutral and ionized hydrogen molecules and the binding of diatomic molecules. More interestingly, we're going to talk about virtual particles. 🙂

**Addendum**: I started writing this post because Feynman actually *does* give the impression there's some kind of 'doublet' of *a*_{1} and *a*_{2} coefficients as he starts his chapter on 'other two-state systems'. It's the symbols he's using: 'his' *a*_{1} and *a*_{2}, and the other doublet with the primes, i.e. *a*_{1}′ and *a*_{2}′, are the *transformation amplitudes*, *not* the coefficients that I am calculating above, and that he was calculating (in the previous chapter) too. So… Well… Again, the only thing you should remember from this post is that 90-degree angle as a sort of *physical* 'common-sense condition' on the system.

Having criticized the Great Teacher for not being consistent in his use of symbols, I should add that the interesting thing is that, while confusing, his summary in that chapter does give us *precise* formulas for those transformation amplitudes, which he didn't do before. Indeed, if we write them as *a*, *b*, *c* and *d* respectively (so as to avoid that confusing *a*_{1} and *a*_{2}, and then *a*_{1}′ and *a*_{2}′ notation), so if we have:

then one can show that:

That's, of course, fully consistent with the ratios we introduced above, as well as with the orthogonality condition that comes with those *eigenvectors*. Indeed, if *a*/*b* = −1 and *c*/*d* = +1, then *a*/*b* = −*c*/*d* and, therefore, *a*·*d* + *b*·*c* = 0. [I'll leave it to you to compare the coefficients so as to check that's the orthogonality condition indeed.]
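With the ±1/√2 values found above, a two-line check confirms that condition numerically:

```python
import numpy as np

# With the ±1/√2 amplitudes: a/b = -1 and c/d = +1, so a·d + b·c = 0,
# and the [a, b] and [c, d] vectors are orthogonal.
a, b = 1 / np.sqrt(2), -1 / np.sqrt(2)
c, d = 1 / np.sqrt(2), 1 / np.sqrt(2)

print(np.isclose(a * d + b * c, 0.0))            # → True
print(np.isclose(np.dot([a, b], [c, d]), 0.0))   # → True: orthogonal vectors
```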

In short, it all shows everything *does* come out of the system in a *mathematical* way too, so the math does match the physics once again, as it should, of course! 🙂