Transforming amplitudes for spin-1/2 particles

Some say it is not possible to fully understand quantum-mechanical spin. Now, I do agree it is difficult, but I do not believe it is impossible. That’s why I wrote so many posts on it. Most of these focused on elaborating how the classical view of how a rotating charge precesses in a magnetic field might translate into the weird world of quantum mechanics. Others were more focused on the corollary of the quantization of the angular momentum, which is that, in the quantum-mechanical world, the angular momentum is never quite all in one direction only—so that explains some of the seemingly inexplicable randomness in particle behavior.

Frankly, I think those explanations help us quite a bit already but… Well… We need to go the extra mile, right? In fact, that’s what drives my search for a geometric (or physical) interpretation of the wavefunction: the extra mile. 🙂

Now, in one of these many posts on spin and angular momentum, I advise my readers – you, that is – to try to work your way through Feynman’s 6th Lecture on quantum mechanics, which is highly abstract and, therefore, usually skipped. [Feynman himself told his students to skip it, so I am sure that’s what they did.] However, if we believe the physical (or geometric) interpretation of the wavefunction that we presented in previous posts is, somehow, true, then we need to relate it to the abstract math of these so-called transformations between representations. That’s what we’re going to try to do here. It’s going to be just a start, and I will probably end up doing several posts on this but… Well… We do have to start somewhere, right? So let’s see where we get today. 🙂

The thought experiment that Feynman uses throughout his Lecture makes use of what Feynman refers to as modified or improved Stern-Gerlach apparatuses. They allow us to prepare a pure state or, alternatively, as Feynman puts it, to analyze a state. In theory, that is. The illustration below presents a side and top view of such an apparatus. We may already note that the apparatus itself—or, to be precise, our perspective of it—gives us two directions: (1) the up direction, so that’s the positive direction of the z-axis, and (2) the direction of travel of our particle, which coincides with the positive direction of the y-axis. [This is obvious and, at the same time, not so obvious, but I’ll talk about that in my next post. In this one, we basically need to work ourselves through the math, so we don’t want to think too much about philosophical stuff.]

[Illustration: a modified Stern-Gerlach apparatus (side and top view)]

The kind of questions we want to answer in this post are variants of the following basic one: if a spin-1/2 particle (let’s think of an electron here, even if the Stern-Gerlach experiment is usually done with an atomic beam) was prepared in a given condition by one apparatus S, say the +S state, what is the probability (or the amplitude) that it will get through a second apparatus T if that was set to filter out the +T state?

The result will, of course, depend on the angles between the two apparatuses S and T, as illustrated below. [Just to respect copyright, I should explicitly note here that all illustrations are taken from the mentioned Lecture, and that the line of reasoning sticks close to Feynman’s treatment of the matter too.]

[Illustration: the basic set-up: an apparatus S followed by an apparatus T, tilted at an angle α]

We should make a few remarks here. First, this thought experiment assumes our particle doesn’t get lost. That’s obvious but… Well… If you haven’t thought about this possibility, I suspect you will at some point in time. So we do assume that, somehow, this particle makes a turn. It’s an important point because… Well… Feynman’s argument—and Feynman, remember, represents mainstream physics—somehow assumes that doesn’t really matter. It’s the same particle, right? It just took a turn, so it’s going in some other direction. That’s all, right? Hmm… That’s where I part ways with mainstream physics: the transformation matrices for the amplitudes that we’ll find here describe something real, I think. It’s not just perspective: something happened to the electron. That something does not only change the amplitudes but… Well… It describes a different electron. It describes an electron that goes in a different direction now. But… Well… As said, these are reflections I will further develop in my next post. 🙂 Let’s focus on the math here. The philosophy will follow later. 🙂 Next remark.

Second, we assume the (a) and (b) illustrations above represent the same physical reality because the relative orientation between the two apparatuses, as measured by the angle α, is the same. Now that is obvious, you’ll say, but, as Feynman notes, we can only make that assumption because experiments confirm that spacetime is, effectively, isotropic. In other words, there is no aether allowing us to establish some sense of absolute direction. Directions are relative—relative to the observer, that is… But… Well… Again, in my next post, I’ll argue that it’s not because directions are relative that they are, somehow, not real. Indeed, in my humble opinion, it does matter whether an electron goes here or, alternatively, there. These two different directions are not just two different coordinate frames. But… Well… Again. The philosophy will follow later. We need to stay focused on the math here.

Third and final remark. This one is actually very tricky. In his argument, Feynman also assumes the two set-ups below are, somehow, equivalent.

[Illustration: the two ‘equivalent’ set-ups (a) and (b)]

You’ll say: Huh? If not, say it! Huh? 🙂 Yes. Good. Huh? Feynman writes equivalent—not the same—because… Well… They’re not the same, obviously:

  1. In the first set-up (a), T is wide open, so the apparatus is not supposed to do anything with the beam: it just splits and re-combines it.
  2. In set-up (b) the T apparatus is, quite simply, not there, so… Well… Again. Nothing is supposed to happen with our particles as they come out of S and travel to U.

The fundamental idea here is that our spin-1/2 particle (again, think of an electron here) enters apparatus U in the same state as it left apparatus S. In both set-ups, that is! Now that is a very tricky assumption, because… Well… While the net turn of our electron is the same, it is quite obvious it has to take two turns to get to U in (a), while it only takes one turn in (b). And so… Well… You can probably think of other differences too. So… Yes. And no. Same-same but different, right? 🙂

Right. That is why Feynman goes out of his way to explain the nitty-gritty behind it: he actually devotes a full page in small print to this, which I’ll try to summarize in just a few paragraphs here. [And, yes, you should check my summary against Feynman’s actual writing on this.] It’s like this. While traveling through apparatus T in set-up (a), time goes by and, therefore, the amplitude would be different by some phase factor δ. [Feynman doesn’t say anything about this, but… Well… In the particle’s own frame of reference, this phase factor depends on the energy, the momentum and the time and distance traveled. Think of the argument of the elementary wavefunction here: θ = (E∙t − p∙x)/ħ.] Now, if we believe that the amplitude is just some mathematical construct—so that’s what mainstream physicists (not me!) believe—then we could effectively say that the physics of (a) and (b) are the same, as Feynman does. In fact, let me quote him here:

“The physics of set-up (a) and (b) should be the same but the amplitudes could be different by some phase factor without changing the result of any calculation about the real world.”

Hmm… It’s one of those mysterious short passages where we’d all like geniuses like Feynman (or Einstein, or whoever) to be more explicit on their world view: if the amplitudes are different, can the physics really be the same? I mean… Exactly the same? It all boils down to that unfathomable belief that, somehow, the particle is real but the wavefunction that describes it, is not. Of course, I admit that it’s true that choosing another zero point for the time variable would also change all amplitudes by a common phase factor and… Well… That’s something that I consider to be not real. But… Well… The time and distance traveled in the apparatus is the time and distance traveled in the apparatus, right?

Bon… I have to stay away from these questions for now—we need to move on with the math here—but I will come back to them later. But… Well… Talking math, I should note a very interesting mathematical point here. We have these transformation matrices for amplitudes, right? Well… Not yet. In fact, the coefficients of these matrices are exactly what we’re going to try to derive in this post, but… Well… Let’s assume we know them already. 🙂 So we have a 2-by-2 matrix to go from S to T, from T to U, and then one to go from S to U without going through T, which we can write as RST, RTU, and RSU respectively. Adding the subscripts for the base states in each representation, the equivalence between the (a) and (b) situations can then be captured by the following formula:

RTU·RST = e^(iδ)·RSU or, in terms of the coefficients, Σk RTUjk·RSTki = e^(iδ)·RSUji

So we have that phase factor here: the left- and right-hand side of this equation are, effectively, same-same but different, as they would say in Asia. 🙂 Now, Feynman develops a beautiful mathematical argument to show that the e^(iδ) factor effectively disappears if we convert our rotation matrices to some rather special form that is defined as follows:

[Equation: the definition of the special ‘standard’ form of the transformation matrices]

I won’t copy his argument here, but I’d recommend you go over it because it is wonderfully easy to follow and very intriguing at the same time. [Yes. Simple things can be very intriguing.] Indeed, the calculation below shows that the determinant of these special rotation matrices will be equal to 1.

[Calculation: the determinant of a standard-form rotation matrix equals 1]

So… Well… So what? You’re right. I am being sidetracked here. The point is that, if we put all of our rotation matrices in this special form, the e^(iδ) factor vanishes and the formula above reduces to:

RTU·RST = RSU or, in terms of the coefficients, Σk RTUjk·RSTki = RSUji
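A quick numerical check may help here. The little Python sketch below—my own illustration, not Feynman’s—takes a transformation matrix and the same matrix multiplied by some phase factor e^(iδ), and shows that rescaling each by the square root of its determinant (i.e. putting it in that special form with determinant 1) makes the phase factor drop out, up to an overall sign:

```python
import numpy as np

# A sketch, not Feynman's own calculation: rescaling a 2-by-2 matrix by the
# square root of its determinant puts it in 'standard' form (determinant 1).
def standardize(R):
    return R / np.sqrt(np.linalg.det(R))

phi = 0.7                                # some arbitrary rotation angle
Rz = np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])  # rotation about z
delta = 1.3                              # some arbitrary phase factor
R_with_phase = np.exp(1j * delta) * Rz   # same physics, different amplitudes

A, B = standardize(Rz), standardize(R_with_phase)
print(np.allclose(A, B) or np.allclose(A, -B))  # True: the phase factor is gone
```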

So… Yes. End of excursion. Let us remind ourselves of what it is that we are trying to do here. As mentioned above, the kind of questions we want to answer will be variants of the following basic one: if a spin-1/2 particle was prepared in a given condition by one apparatus (S), say the +S state, what is the probability (or the amplitude) that it will get through a second apparatus (T) if that was set to filter out the +T state?

We said the result would depend on the angles between the two apparatuses S and T. I wrote: angles—plural. Why? Because a rotation will generally be described by the three so-called Euler angles:  α, β and γ. Now, it is easy to make a mistake here, because there is a sequence to these so-called elemental rotations—and right-hand rules, of course—but I will let you figure that out. 🙂

The basic idea is the following: if we can work out the transformation matrices for each of these elemental rotations, then we can combine them and find the transformation matrix for any rotation. So… Well… That fills most of Feynman’s Lecture on this, so we don’t want to copy all that. We’ll limit ourselves to the logic for a rotation about the z-axis, and then… Well… You’ll see. 🙂

So… The z-axis… We take that to be the direction along which we are measuring the angular momentum of our electron, so that’s the direction of the (magnetic) field gradient, i.e. the up-axis of the apparatus. In the illustration below, that direction points out of the page, so to speak, because it is perpendicular to the x- and y-axes that are shown. Note that the y-axis is the initial direction of our beam.

[Illustration: a rotation about the z-axis: the x- and y-axes are shown; the z-axis points out of the page]

Now, because the (physical) orientation of the fields and the field gradients of S and T is the same, Feynman says that—despite the angle—the probability for a particle to be up or down with regard to S and T respectively should be the same. Well… Let’s be fair. He does not only say that: experiment shows it to be true. [Again, I am tempted to interject here that it is not because the probabilities for (a) and (b) are the same, that the reality of (a) and (b) is the same, but… Well… You get me. That’s for the next post. Let’s get back to the lesson here.] The probability is, of course, the square of the absolute value of the amplitude, which we will denote as C+, C−, C′+, and C′− respectively. Hence, we can write the following:

|C′+|² = |C+|² and |C′−|² = |C−|²

Now, the absolute values (or the magnitudes) are the same, but the amplitudes may differ. In fact, they must be different by some phase factor because, otherwise, we would not be able to distinguish the two situations, which are obviously different. As Feynman, finally, admits himself—jokingly or seriously: “There must be some way for a particle to know that it has turned the corner at P1.” [P1 is the midway point between S and T in the illustration, of course—not some probability.]

So… Well… We write:

C′+ = e^(iλ)·C+ and C′− = e^(iμ)·C−

Now, Feynman notes that an equal phase change in all amplitudes has no physical consequence (think of re-defining our t0 = 0 point), so we can add some arbitrary amount to both λ and μ without changing any of the physics. So then we can choose this amount as −(λ + μ)/2. We write:

λ′ = λ − (λ + μ)/2 and μ′ = μ − (λ + μ)/2

Now, it shouldn’t take you too long to figure out that λ′ is equal to λ′ = λ/2 − μ/2 = −μ′. So… Well… Then we can just adopt the convention that λ = −μ. So our C′+ = e^(iλ)·C+ and C′− = e^(iμ)·C− equations can now be written as:

C′+ = e^(iλ)·C+ and C′− = e^(−iλ)·C−

The absolute values are the same, but the phases are different. Right. OK. Good move. What’s next?

Well… The next assumption is that the phase shift λ is proportional to the angle (α) between the two apparatuses. Hence, λ is equal to λ = m·α, and we can re-write the above as:

C′+ = e^(i·m·α)·C+ and C′− = e^(−i·m·α)·C−

Now, this assumption may or may not seem reasonable. Feynman justifies it with a continuity argument, arguing any rotation can be built up as a sequence of infinitesimal rotations and… Well… Let’s not get into the nitty-gritty here. [If you want it, check Feynman’s Lecture itself.] Back to the main line of reasoning. So we’ll assume we can write λ as λ = m·α. The next question then is: what is the value for m? Now, we obviously do get exactly the same physics if we rotate by 360°, or 2π radians. So we might conclude that the amplitudes should be the same and, therefore, that e^(iλ) = e^(i·m·2π) has to be equal to one, so C′+ = C+ and C′− = C−. That’s the case if m is equal to 1. But… Well… No. It’s the same thing again: the probabilities (or the magnitudes) have to be the same, but the amplitudes may be different because of some phase factor. In fact, they should be different. If m = 1/2, then we also get the same physics, even if the amplitudes are not the same. They will be each other’s opposite:

C′+ = e^(i·π)·C+ = −C+ and C′− = e^(−i·π)·C− = −C−

Huh? Yes. Think of it. The coefficient of proportionality (m) cannot be equal to 1. If it were equal to 1, and we’d rotate by 180° only, then we’d also get those C′+ = −C+ and C′− = −C− equations, and these coefficients would, therefore, also describe the same physical situation. Now, you will understand, intuitively, that a rotation of the apparatus by 180° will not give us the same physical situation… So… Well… In case you’d want a more formal argument proving a rotation by 180° does not give us the same physical situation, Feynman has one for you. 🙂
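To see the difference between m = 1 and m = 1/2 concretely, here is a small numerical sketch (the 1/√2 amplitudes are just an arbitrary normalized example):

```python
import numpy as np

# Sketch: apply the phase factors e^(i*m*alpha) and e^(-i*m*alpha) for two
# candidate values of m. With m = 1, a 180-degree rotation already flips the
# sign of both amplitudes; with m = 1/2, that only happens at 360 degrees.
C_plus, C_minus = 1 / np.sqrt(2), 1 / np.sqrt(2)  # arbitrary normalized amplitudes
for m in (1.0, 0.5):
    for alpha_deg in (180, 360):
        alpha = np.radians(alpha_deg)
        Cp = np.exp(1j * m * alpha) * C_plus
        Cm = np.exp(-1j * m * alpha) * C_minus
        print(m, alpha_deg, np.round(Cp, 2), np.round(Cm, 2))
```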

I know that, by now, you’re totally tired and bored, and so you only want the grand conclusion at this point. Well… All of what I wrote above should, hopefully, help you to understand that conclusion, which – I quote Feynman here – is the following:

If we know the amplitudes C+ and C of spin one-half particles with respect to a reference frame S, and we then use new base states, defined with respect to a reference frame T which is obtained from S by a rotation α around the z-axis, the new amplitudes are given in terms of the old by the following formulas:

C′+ = e^(iφ/2)·C+ and C′− = e^(−iφ/2)·C−

[Feynman denotes our angle α by phi (φ) because… He uses the Euler angles a bit differently. But don’t worry: it’s the same angle.]

What about the amplitude to go from the C− to the C′+ state, and from the C+ to the C′− state? Well… That amplitude is zero. So the transformation matrix is this one:

Rz(φ) =
| e^(iφ/2)    0         |
| 0           e^(−iφ/2) |

Let’s take a moment and think about this. Feynman notes the following, among other things: “It is very curious to say that if you turn the apparatus 360° you get new amplitudes. [They aren’t really new, though, because the common change of sign doesn’t give any different physics.] But if something has been rotated by a sequence of small rotations whose net result is to return it to the original orientation, then it is possible to define the idea that it has been rotated 360°—as distinct from zero net rotation—if you have kept track of the whole history.”

This is very deep. It connects space and time into one single geometric space, so to speak. But… Well… I’ll try to explain this rather sweeping statement later. Feynman also notes that a net rotation of 720° does give us the same amplitudes and, therefore, cannot be distinguished from the original orientation. Feynman finds that intriguing but… Well… I am not sure if it’s very significant. I do note some symmetries in quantum physics involve 720° rotations but… Well… I’ll let you think about this. 🙂
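We can easily check that 360°-versus-720° business numerically (a sketch, using the phase convention above—conventions differ across texts):

```python
import numpy as np

# The z-rotation matrix in the phase convention used above:
# C'+ = e^(i*phi/2)*C+ and C'- = e^(-i*phi/2)*C-.
def Rz(phi):
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

print(np.round(Rz(2 * np.pi), 3))  # 360 deg: minus the identity matrix
print(np.round(Rz(4 * np.pi), 3))  # 720 deg: the identity matrix itself
```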

Note that the determinant of our matrix is equal to a·d − b·c = e^(iφ/2)·e^(−iφ/2) − 0 = 1. So… Well… Our rotation matrix is, effectively, in that special form! How come? Well… When equating λ = −μ, we effectively put the transformation into that special form. Let us also, just for fun, quickly check the normalization condition. It requires that the probabilities, in any given representation, add up to one. So… Well… Do they? When they come out of S, our electrons are equally likely to be in the up or down state. So the amplitudes are 1/√2. [To be precise, they are ±1/√2 but… Well… It’s the phase factor story once again.] That’s normalized: |1/√2|² + |1/√2|² = 1. The amplitudes to come out of the apparatus in the up or down state are e^(iφ/2)/√2 and e^(−iφ/2)/√2 respectively, so the probabilities add up to |e^(iφ/2)/√2|² + |e^(−iφ/2)/√2|² = 1/2 + 1/2 = 1. Check it. 🙂

Let me add an extra remark here. The normalization condition will result in matrices whose determinant will be equal to some pure imaginary exponential, like e^(iα). So is that what we have here? Yes. We can re-write 1 as 1 = e^(i·0) = e^0, so α = 0. 🙂 Capito? Probably not, but… Well… Don’t worry about it. Just think about the grand results. As Feynman puts it, this Lecture is really “a sort of cultural excursion.” 🙂

Let’s do a practical calculation here. Let’s suppose the angle is, effectively, 180°. So the e^(iφ/2) and e^(−iφ/2) factors are equal to e^(iπ/2) = +i and e^(−iπ/2) = −i respectively, so… Well… What does that mean—in terms of the geometry of the wavefunction? Hmm… We need to do some more thinking about the implications of all this transformation business for our geometric interpretation of the wavefunction, but we’ll do that in our next post. Let us first work our way out of this rather hellish transformation logic. 🙂 [See? I do admit it is all quite difficult and abstruse, but… Well… We can do this, right?]

So what’s next? Well… Feynman develops a similar argument (I should say same-same but different once more) to derive the coefficients for a rotation of ±90° around the y-axis. Why 90° only? Well… Let me quote Feynman here, as I can’t sum it up more succinctly than he does: “With just two transformations—90° about the y-axis, and an arbitrary angle about the z-axis [which we described above]—we can generate any rotation at all.”

So how does that work? Check the illustration below. In Feynman’s words again: “Suppose that we want the angle α around x. We know how to deal with the angle α around z, but now we want it around x. How do we get it? First, we turn the axis z down onto x—which is a rotation of +90°. Then we turn through the angle α around z′. Then we rotate 90° about y″. The net result of the three rotations is the same as turning around x by the angle α. It is a property of space.”

[Illustration: generating a rotation about the x-axis from rotations about the y- and z-axes]
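That ‘property of space’ can be verified numerically too. The sketch below uses the standard SU(2) form of the rotation matrices, Rn(θ) = cos(θ/2)·I − i·sin(θ/2)·σn—Feynman’s sign and ordering conventions differ slightly, so take this as an illustration of the principle rather than a line-by-line transcription. As a bonus, it confirms that the determinants of these matrices are equal to one—something we’ll use in a moment:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def R(sigma, theta):
    """Standard SU(2) rotation: cos(theta/2)*I - i*sin(theta/2)*sigma."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma

alpha = 1.234  # some arbitrary angle
lhs = R(sx, alpha)                                         # rotation about x...
rhs = R(sy, np.pi / 2) @ R(sz, alpha) @ R(sy, -np.pi / 2)  # ...built from y and z
print(np.allclose(lhs, rhs))                               # True
print(np.round(np.linalg.det(R(sy, alpha)), 3))            # determinant is 1
```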

Besides helping us greatly to derive the transformation matrix for any rotation, the mentioned property of space is rather mysterious and deep. It sort of reduces the degrees of freedom, so to speak. Feynman writes the following about this:

“These facts of the combinations of rotations, and what they produce, are hard to grasp intuitively. It is rather strange, because we live in three dimensions, but it is hard for us to appreciate what happens if we turn this way and then that way. Perhaps, if we were fish or birds and had a real appreciation of what happens when we turn somersaults in space, we could more easily appreciate such things.”

In any case, I should limit the number of philosophical interjections. If you go through the motions, then you’ll find the following elemental rotation matrices:

[Table: the full set of elemental rotation matrices Rz(φ), Rx(φ) and Ry(φ)]

What about the determinants of the Rx(φ) and Ry(φ) matrices? They’re also equal to one, so… Yes. A pure imaginary exponential, right? 1 = e^(i·0) = e^0. 🙂

What’s next? Well… We’re done. We can now combine the elemental transformations above in a more general format, using the standardized Euler angles. Again, just go through the motions. The Grand Result is:

[Equation: the general transformation matrix in terms of the Euler angles α, β and γ]

Does it give us normalized amplitudes? It should, but it looks like our determinant is going to be a much more complicated complex exponential. 🙂 Hmm… Let’s take some time to mull over this. As promised, I’ll be back with more reflections in my next post.

The Hamiltonian revisited

I want to come back to something I mentioned in a previous post: when looking at that formula for those Uij amplitudes—which I’ll jot down once more:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt ⇔ Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt

—I noted that it resembles the general y(t + Δt) = y(t) + Δy = y(t) + (dy/dt)·Δt formula. So we can look at our Kij(t) function as being equal to the time derivative of the Uij(t + Δt, t) function. I want to re-visit that here, as it triggers a whole range of questions, which may or may not help to understand quantum math somewhat more intuitively.  Let’s quickly sum up what we’ve learned so far: it’s basically all about quantum-mechanical stuff that does not move in space. Hence, the x in our wavefunction ψ(x, t) is some fixed point in space and, therefore, our elementary wavefunction—which we wrote as:

ψ(x, t) = a·e^(−i·θ) = a·e^(−i·(ω·t − k∙x)) = a·e^(−i·[(E/ħ)·t − (p/ħ)∙x])

—reduces to ψ(t) = a·e^(−i·ω·t) = a·e^(−i·(E/ħ)·t).

Unlike what you might think, we’re not equating x with zero here. No. It’s the p = m·v factor that becomes zero, because our reference frame is that of the system that we’re looking at, so its velocity is zero: it doesn’t move in our reference frame. That immediately answers an obvious question: does our wavefunction look any different when choosing another reference frame? The answer is obviously: yes! It surely matters if the system moves or not, and it also matters how fast it moves, because it changes the energy and momentum values from E and p to some E’ and p’. However, we’ll not consider such complications here: that’s the realm of relativistic quantum mechanics. Let’s start with the simplest of situations.

A simple two-state system

One of the simplest examples of a quantum-mechanical system that does not move in space, is the textbook example of the ammonia molecule. The picture was as simple as the one below: an ammonia molecule consists of one nitrogen atom and three hydrogen atoms, and the nitrogen atom could be ‘up’ or ‘down’ with regard to the motion of the NH3 molecule around its axis of symmetry, as shown below.

[Illustration: the two states of the ammonia molecule: the nitrogen atom ‘up’ or ‘down’]

It’s important to note that this ‘up’ or ‘down’ direction is, once again, defined with respect to the reference frame of the system itself. The motion of the molecule around its axis of symmetry is referred to as its spin—a term that’s used in a variety of contexts and, therefore, is annoyingly ambiguous. When we use the term ‘spin’ (up or down) to describe an electron state, for example, we’d associate it with the direction of its magnetic moment. Such magnetic moment arises from the fact that, for all practical purposes, we can think of an electron as a spinning electric charge. Now, while our ammonia molecule is electrically neutral, as a whole, the two states are actually associated with opposite electric dipole moments, as illustrated below. Hence, when we’d apply an electric field (denoted as ε), the two states are effectively associated with different energy levels, which we wrote as E0 ± εμ.

[Illustration: the two states of the ammonia molecule have opposite electric dipole moments]

But we’re getting ahead of ourselves here. Let’s revert to the system in free space, i.e. without an electromagnetic force field—or, what amounts to saying the same, without potential. Now, the ammonia molecule is a quantum-mechanical system, and so there is some amplitude for the nitrogen atom to tunnel through the plane of hydrogens. I told you before that this is the key to understanding quantum mechanics really: there is an energy barrier there and, classically, the nitrogen atom should not sneak across. But it does. It’s like it can borrow some energy – which we denote by A – to penetrate the energy barrier.

In quantum mechanics, the dynamics of this system are modeled using a set of two differential equations. These differential equations are really the equivalent, in quantum mechanics, of Newton’s classical Law of Motion (I am referring to the F = m·(dv/dt) = m·a equation here), so I’ll have to explain them—which is not as easy as explaining Newton’s Law, because we’re talking complex-valued functions, but… Well… Let me first insert the solution of that set of differential equations:

[Graph: the probabilities P1(t) and P2(t), oscillating between 0 and 1]

This graph shows how the probability of the nitrogen atom (or the ammonia molecule itself) being in state 1 (i.e. ‘up’) or, else, in state 2 (i.e. ‘down’), varies sinusoidally in time. Let me also give you the equations for the amplitudes to be in state 1 or 2 respectively:

  1. C1(t) = 〈 1 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) + (1/2)·e^(−(i/ħ)·(E0 + A)·t) = e^(−(i/ħ)·E0·t)·cos[(A/ħ)·t]
  2. C2(t) = 〈 2 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) − (1/2)·e^(−(i/ħ)·(E0 + A)·t) = i·e^(−(i/ħ)·E0·t)·sin[(A/ħ)·t]

So the P1(t) and P2(t) probabilities above are just the absolute square of these C1(t) and C2(t) functions. So as to help you understand what’s going on here, let me quickly insert the following technical remarks—and, right after them, a small numerical check:

  • In case you wonder how we go from those exponentials to a simple sine and cosine factor, remember that the sum of complex conjugates, i.e. e^(iθ) + e^(−iθ), reduces to 2·cosθ, while e^(iθ) − e^(−iθ) reduces to 2·i·sinθ.
  • As for how to take the absolute square… Well… I shouldn’t be explaining that here, but you should be able to work that out remembering that (i) |a·b·c|² = |a|²·|b|²·|c|²; (ii) |e^(iθ)|² = |e^(−iθ)|² = 1² = 1 (for any value of θ); and (iii) |i|² = 1.
  • As for the periodicity of both probability functions, note that the period of the squared sine and cosine functions is equal to π. Hence, the argument of our sine and cosine function will be equal to 0, π, 2π, 3π etcetera if (A/ħ)·t = 0, π, 2π, 3π etcetera, i.e. if t = 0·ħ/A, π·ħ/A, 2π·ħ/A, 3π·ħ/A etcetera. So that’s why we measure time in units of ħ/A above.
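And here is the promised numerical check—a sketch, in Python, with ħ = A = 1 and some arbitrary value for E0 (E0 only contributes an overall phase, so its value doesn’t matter for the probabilities):

```python
import numpy as np

# The amplitudes C1(t) and C2(t) given above, in units such that hbar = A = 1.
hbar, A, E0 = 1.0, 1.0, 2.5
t = np.linspace(0, 2 * np.pi, 500)               # i.e. two periods of pi*hbar/A
C1 = np.exp(-1j * E0 * t / hbar) * np.cos(A * t / hbar)
C2 = 1j * np.exp(-1j * E0 * t / hbar) * np.sin(A * t / hbar)
P1, P2 = np.abs(C1) ** 2, np.abs(C2) ** 2        # cos^2 and sin^2 respectively
print(np.allclose(P1 + P2, 1))                   # True: probabilities add to one
```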

The graph above is actually tricky to interpret, as it assumes that we know what state the molecule starts out in at t = 0. This assumption is tricky because we usually do not know that: we have to make some observation which, curiously enough, will always yield one of the two states—nothing in-between. Or, else, we can use a state selector—an inhomogeneous electric field which will separate the ammonia molecules according to their state. It’s a weird thing really, and it summarizes all of the ‘craziness’ of quantum mechanics: as long as we don’t measure anything – by applying that force field – our molecule is in some kind of abstract state, which mixes the two base states. But when we do make the measurement, always along some specific direction (which we usually take to be the z-direction in our reference frame), we’ll always find the molecule is either ‘up’ or, else, ‘down’. We never measure it as something in-between. Personally, I like to think the measurement apparatus – I am talking the electric field here – causes the nitrogen atom to sort of ‘snap into place’. However, physicists use more precise language here: they would say that the electric field does result in the two positions having very different energy levels (E0 + εμ and E0 − εμ, to be precise) and that, as a result, the amplitude for the nitrogen atom to flip back and forth has little effect. Now how do we model that?

The Hamiltonian equations

I shouldn’t be using the term above, as it usually refers to a set of differential equations describing classical systems. However, I’ll also use it for the quantum-mechanical analog, which amounts to the following for our simple two-state example above:

[Equations: the Hamiltonian equations for the ammonia molecule, as a set of differential equations and in matrix form]

Don’t panic. We’ll explain. The equations above are all the same but use different formats: the first block writes them as a set of equations, while the second uses the matrix notation, which involves the use of that rather infamous Hamiltonian matrix, which we denote by H = [Hij]. Now, we’ve postponed a lot of technical stuff, so… Well… We can’t avoid it any longer. Let’s look at those Hamiltonian coefficients Hij first. Where do they come from?

You’ll remember we thought of time as some kind of apparatus, with particles entering in some initial state φ and coming out in some final state χ. Both are to be described in terms of our base states. To be precise, we associated the (complex) coefficients C1 and C2 with |φ〉 and D1 and D2 with |χ〉. However, the χ state is a final state, so we have to write it as 〈χ| = |χ〉† (read: chi dagger). The dagger symbol tells us we need to take the conjugate transpose of |χ〉, so the column vector becomes a row vector, and its coefficients are the complex conjugate of D1 and D2, which we denote as D1* and D2*. We combined this with Dirac’s bra-ket notation for the amplitude to go from one base state to another, as a function in time (or a function of time, I should say):

Uij(t + Δt, t) = 〈i|U(t + Δt, t)|j〉

This allowed us to write the following matrix equation:

[Matrix equation: 〈χ|U|φ〉 written out in terms of the Uij, D* and C coefficients]

To see what it means, you should write it all out:

〈χ|U(t + Δt, t)|φ〉 = D1*·(U11(t + Δt, t)·C1 + U12(t + Δt, t)·C2) + D2*·(U21(t + Δt, t)·C1 + U22(t + Δt, t)·C2)

= D1*·U11(t + Δt, t)·C1 + D1*·U12(t + Δt, t)·C2 + D2*·U21(t + Δt, t)·C1 + D2*·U22(t + Δt, t)·C2

It’s a horrendous expression, but it’s a complex-valued amplitude or, quite simply, a complex number. So this is not nonsensical. We can now take the next step, and that’s to go from those Uij amplitudes to the Hij amplitudes of the Hamiltonian matrix. The key is to consider the following: if Δt goes to zero, nothing happens, so we write: Uij = 〈i|U|j〉 → 〈i|j〉 = δij for Δt → 0, with δij = 1 if i = j, and δij = 0 if i ≠ j. We then assume that, for small Δt, those Uij amplitudes should differ from δij (i.e. from 1 or 0) by amounts that are proportional to Δt. So we write:

Uij(t + Δt, t) = δij + ΔUij(t + Δt, t) = δij + Kij(t)·Δt

We then equated those Kij(t) factors with −(i/ħ)·Hij(t), and we were done: Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt. […] Well… I’ll show you how we get those differential equations in a moment. Let’s pause here for a while to see what’s going on really. You’ll probably remember how one can mathematically ‘construct’ the complex exponential e^(iθ) by using the linear approximation e^(iε) ≈ 1 + i·ε near θ = 0 and for infinitesimally small values of ε. In case you forgot, we basically used the definition of the derivative of the real exponential e^ε for ε going to zero:

[Equation: the derivative of the exponential function]

So we’ve got something similar here for U11(t + Δt, t) = 1 − i·[H11(t)/ħ]·Δt and U22(t + Δt, t) = 1 − i·[H22(t)/ħ]·Δt. Just replace the ε in e^(iε) ≈ 1 + i·ε by ε = −(E0/ħ)·Δt. Indeed, we know that H11 = H22 = E0, and E0/ħ is, of course, just the energy measured in (reduced) Planck units, i.e. in its natural unit. Hence, if our ammonia molecule is in one of the two base states, we start at θ = 0 and then we just start moving on the unit circle, clockwise, because of the minus sign in e^(−iθ). Let’s write it out:

U11(t + Δt, t) = 1 − i·[H11(t)/ħ]·Δt = 1 − i·[E0/ħ]·Δt and

U22(t + Δt, t) = 1 − i·[H22(t)/ħ]·Δt = 1 − i·[E0/ħ]·Δt

But what about U12 and U21? Is there a similar interpretation? Let’s write those equations down and think about them:

U12(t + Δt, t) = 0 − i·[H12(t)/ħ]·Δt = 0 + i·[A/ħ]·Δt and

U21(t + Δt, t) = 0 − i·[H21(t)/ħ]·Δt = 0 + i·[A/ħ]·Δt

We can visualize this as follows:

[Illustration: the Uij amplitudes evolving on the unit circle]
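To make the link between the Uij and the Hij coefficients more tangible, the sketch below composes a large number of infinitesimal steps U(t + Δt, t) = 1 − (i/ħ)·H·Δt and compares the result with the analytical C1(t) and C2(t) functions we wrote down earlier (ħ = A = 1 and E0 = 2.5 again; remember H12 = H21 = −A for the ammonia molecule):

```python
import numpy as np

# Compose N infinitesimal steps U(t + dt, t) = I - (i/hbar)*H*dt and compare
# with the analytic solution for a molecule that starts out in state 1.
hbar, E0, A = 1.0, 2.5, 1.0
H = np.array([[E0, -A], [-A, E0]], dtype=complex)
t, N = 2.0, 200_000
dt = t / N
U_step = np.eye(2) - 1j * H * dt / hbar
U = np.linalg.matrix_power(U_step, N)        # approximately U(t, 0)
C = U @ np.array([1, 0], dtype=complex)      # C1(0) = 1, C2(0) = 0
C1 = np.exp(-1j * E0 * t / hbar) * np.cos(A * t / hbar)
C2 = 1j * np.exp(-1j * E0 * t / hbar) * np.sin(A * t / hbar)
print(np.allclose(C, [C1, C2], atol=1e-3))   # True, up to the step error
```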

Let’s remind ourselves of the definition of the derivative of a function by looking at the illustration below.

[Illustration: the definition of the derivative of a function]

The f(x0) value in this illustration corresponds to the Uij(t, t), obviously. So now things make somewhat more sense: U11(t, t) = U22(t, t) = 1, obviously, and U12(t, t) = U21(t, t) = 0. We then add the ΔUij(t + Δt, t) to Uij(t, t). Hence, we can, and probably should, think of those Kij(t) coefficients as the derivative of the Uij(t, t) functions with respect to time. So we can write something like this:

Kij(t) = dUij/dt = −(i/ħ)·Hij(t)

These derivatives are pure imaginary numbers. That does not mean that the Uij(t + Δt, t) functions are purely imaginary: U11(t + Δt, t) and U22(t + Δt, t) can be approximated by 1 − i·[E0/ħ]·Δt for small Δt, so they do have a real part. In contrast, U12(t + Δt, t) and U21(t + Δt, t) are, effectively, purely imaginary (for small Δt, that is).

I can’t help thinking these formulas reflect a deep and beautiful geometry, but its meaning escapes me so far. 😦 When everything is said and done, none of the reflections above make things much more intuitive: these wavefunctions remain as mysterious as ever.

I keep staring at those P1(t) and P2(t) functions, and the C1(t) and C2(t) functions that ‘generate’ them, so to speak. They’re not independent, obviously. In fact, they’re exactly the same, except for a phase difference, which corresponds to the phase difference between the sine and cosine. So it’s all one reality, really: all can be described in one single functional form, so to speak. I hope things become more obvious as I move forward. :-/

Post scriptum: I promised I’d show you how to get those differential equations but… Well… I’ve done that in other posts, so I’ll refer you to one of those. Sorry for not repeating myself. 🙂

The hydrogen molecule as a two-state system

My posts on the state transitions of an ammonia molecule weren’t easy, were they? So let’s try another two-state system. The illustration below shows an ionized hydrogen molecule in two possible states which, as usual, we’ll denote as |1〉 and |2〉. An ionized hydrogen molecule is an H2 molecule which lost an electron, so it’s two protons with one electron only, so we denote it as H2+. The difference between the two states is obvious: the electron is either with the first proton or with the second.

[Illustration: the two states of the ionized hydrogen molecule: the electron is with the first or with the second proton]

It’s an example taken from Feynman’s Lecture on two-state systems. The illustration itself raises a lot of questions, of course. The most obvious question is: how do we know which proton is which? We’re talking identical particles, right? Right. We should think of the proton spins! However, protons are fermions and, hence, they can’t be in the same state, so they must have opposite spins. Of course, now you’ll say: they’re not in the same state because they’re at different locations. Well… Now you’ve answered your own question. 🙂 However you want to look at this, the point is: we can distinguish both protons. Having said that, the reflections above raise other questions: what reference frame are we using? The answer is: it’s the reference frame of the system. We can mirror or rotate this image however we want – as I am doing below – but state |1〉 is state |1〉, and state |2〉 is state |2〉.

[Illustration: the same two states, mirrored and rotated]

The other obvious question is more difficult. If you’ve read anything at all about quantum mechanics, you’ll ask: what about the in-between states? The electron is actually being shared by the two protons, isn’t it? That’s what chemical bonds are all about, no? Molecular orbitals rather than atomic orbitals, right? Right. That’s actually what this post is all about. We know that, in quantum mechanics, the actual state – or what we think is the actual state – is always expressed as some linear combination of so-called base states. We wrote:

|ψ〉 = |1〉C1 + |2〉C2 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉

In terms of representing what’s actually going on, we only have these probability functions: they say that, if we would take a measurement, the probability of finding the electron near the first or the second proton varies as shown below:

[Graph: the probabilities of finding the electron near the first or the second proton]

If the |1〉 and |2〉 states were actually representing two dual physical realities, the actual state of our H2+ molecule would be represented by some square or some pulse wave, as illustrated below. [We should be calling it a square function, but that term has been reserved for functions like y = x².]

[Illustration: a square (pulse) wave, with pulse duration τ and period T]

Of course, the symmetry of the situation implies that the average pulse duration τ would be one-half of the (average) period T, so we’d be talking a square wavefunction indeed. The two wavefunctions both qualify as probability density functions: the system is always in one state or the other, and the probabilities add up to one. But you’ll agree we prefer the smooth squared sine and cosine functions. To be precise, these smooth functions are:

  • P1(t) = |C1(t)|² = cos²[(A/ħ)·t]
  • P2(t) = |C2(t)|² = sin²[(A/ħ)·t]

So now we only need to explain A here (you know ħ already). But… Well… Why would we actually prefer those smooth functions? An irregular pulse function would seem to be doing a better job when it comes to modeling reality, doesn’t it? The electron should be either here, or there. Isn’t it?

Well… No. At least that’s what I am slowly starting to understand. These pure base states |1〉 and |2〉 are real and not real at the same time. They’re real, because it’s what we’ll get when we verify, or measure, the state, so our measurement will tell us that it’s here or there. There’s no in-between. [I still need to study weak measurement theory.] But then they are not real, because our molecule will never ever be in those two states, except for those ephemeral moments when (A/ħ)·t = n·π/2 (n = 0, 1, 2,…). So we’re really modeling uncertainty here and, while I am still exploring what that actually means, you should think of the electron as being everywhere really, but with an unequal density in space—sort of. 🙂

Now, we’ve learned we can describe the state of a system in terms of an alternative set of base states. We wrote: |ψ〉 = |I〉CI + |II〉CII = |I〉〈I|ψ〉 + |II〉〈II|ψ〉, with the CI, CII and C1, C2 coefficients being related to each other in exactly the same way as the associated base states, i.e. through a transformation matrix, which we summarized as:

[Equation: the general transformation matrix between two sets of base states]

To be specific, the two sets of base states we’ve been working with so far were related as follows:

|I〉 = (1/√2)·[|1〉 − |2〉] and |II〉 = (1/√2)·[|1〉 + |2〉]

So we’d write: |ψ〉 = |I〉CI + |II〉CII = |I〉〈I|ψ〉 + |II〉〈II|ψ〉 = |1〉C1 + |2〉C2 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉, and the CI, CII and C1, C2 coefficients would be related in exactly the same way as the base states:

CI = (1/√2)·(C1 − C2) and CII = (1/√2)·(C1 + C2)

[In case you’d want to review how that works, see my post on the Hamiltonian and base states.] Now, we cautioned that it’s difficult to try to interpret such base transformations – often referred to as a change in the representation or a different projection – geometrically. Indeed, we acknowledged that (base) states were very much like (base) vectors – from a mathematical point of view, that is – but, at the same time, we said that they were ‘objects’, really: elements in some Hilbert space, which means you can do the operations we’re doing here, i.e. adding and multiplying. Something like |I〉CI doesn’t mean all that much: CI is a complex number – and so we can work with numbers, of course, because we can visualize them – but |I〉 is a ‘base state’, and so what’s the meaning of that, and what’s the meaning of the |I〉CI or CI|I〉 product? I could babble about that, but it’s no use: a base state is a base state. It’s some state of the system that makes sense to us. In fact, it may be some state that does not make sense to us—in terms of the physics of the situation, that is – but then there will always be some mathematical sense to it because of that transformation matrix, which establishes a one-to-one relationship between all sets of base states.

You’ll say: why don’t you try to give it some kind of geometrical or whatever meaning? OK. Let’s try. State |1〉 is obviously like minus state |2〉 in space, so let’s see what happens when we equate |1〉 to 1 on the real axis, and |2〉 to −1. Geometrically, that corresponds to the (1, 0) and (−1, 0) points on the unit circle. So let’s multiply those points with (1/√2, −1/√2) and (1/√2, 1/√2) respectively. What do we get? Well… What product should we take? The dot product, the cross product, or the ordinary complex-number product? The dot product gives us a number, so we don’t want that. [If we’re going to represent base states by vectors, we want all states to be vectors.] A cross product will give us a vector that’s orthogonal to both vectors, so it’s a vector in ‘outer space’, so to say. We don’t want that, I must assume, and so we’re left with the complex-number product, which maps our (1, 0) and (−1, 0) vectors to (1/√2 − i/√2)·(1 + 0·i) = 1/√2 − i/√2 = (1/√2, −1/√2) and (1/√2 + i/√2)·(−1 + 0·i) = −1/√2 − i/√2 = (−1/√2, −1/√2) respectively.

[Illustration: the attempted geometric interpretation of the base-state transformation]

What does this say? Nothing. Stuff like this only causes confusion. We had two base states that were ‘180 degrees’ apart, and now our new base states are only ’90 degrees’ apart. If we’d ‘transform’ the two new base states once more, they collapse into each other: (1/√2, −1/√2)·(1/√2, −1/√2) = (1/√2 − i/√2)² = −i = (0, −1), and (1/√2, 1/√2)·(−1/√2, −1/√2) = −(1/√2 + i/√2)² = −i as well. This is nonsense, of course. It’s got nothing to do with the angle we picked for our original set of base states: we could have separated our original set of base states by 90 degrees, or 45 degrees. It doesn’t matter. It’s the transformation itself: multiplying by (1/√2, −1/√2) amounts to a clockwise rotation by 45 degrees, while multiplying by (1/√2, 1/√2) amounts to the same, but counter-clockwise. So… Well… We should not try to think of our base vectors in any geometric way, because it just doesn’t make any sense. So let’s not waste time on this: the ‘base states’ are a bit of a mystery, in the sense that they just are what they are: we can’t ‘reduce’ them any further, and trying to interpret them geometrically leads to contradictions, as evidenced by what I tried to do above. Base states are ‘vectors’ in a so-called Hilbert space, and… Well… That’s not your standard vector space. [If you think you can make more sense of it, please do let me know!]

Onwards!

Let’s take our transformation again:

  • |I〉 = (1/√2)|1〉 − (1/√2)|2〉 = (1/√2)[|1〉 − |2〉]
  • |II〉 = (1/√2)|1〉 + (1/√2)|2〉 = (1/√2)[|1〉 + |2〉]

Again, trying to geometrically interpret what it means to add or subtract two base states is not what you should be trying to do. In a way, the two expressions above only make sense when combining them with a final state, so when writing:

  • 〈ψ|I〉 = (1/√2)〈ψ|1〉 − (1/√2)〈ψ|2〉 = (1/√2)[〈ψ|1〉 − 〈ψ|2〉]
  • 〈ψ|II〉 = (1/√2)〈ψ|1〉 + (1/√2)〈ψ|2〉 = (1/√2)[〈ψ|1〉 + 〈ψ|2〉]

Taking the complex conjugate of this gives us the amplitudes of the system to be in state I or state II:

  • 〈I|ψ〉 = 〈ψ|I〉* = (1/√2)[〈ψ|1〉* − 〈ψ|2〉*] = (1/√2)[〈1|ψ〉 − 〈2|ψ〉]
  • 〈II|ψ〉 = 〈ψ|II〉* = (1/√2)[〈ψ|1〉* + 〈ψ|2〉*] = (1/√2)[〈1|ψ〉 + 〈2|ψ〉]

That still doesn’t tell us much, because we’d need to know the 〈1|ψ〉 and 〈2|ψ〉 functions, i.e. the amplitudes of the system to be in state 1 and state 2 respectively. What we do know, however, is that the 〈1|ψ〉 and 〈2|ψ〉 functions will have some rather special amplitudes. We wrote:

  • CI = 〈 I | ψ 〉 = e^(−(i/ħ)·EI·t)
  • CII = 〈 II | ψ 〉 = e^(−(i/ħ)·EII·t)

These are amplitudes of so-called stationary states: the associated probabilities – i.e. the absolute square of these functions – do not vary in time: |e^(−(i/ħ)·EI·t)|² = |e^(−(i/ħ)·EII·t)|² = 1. For our ionized hydrogen molecule, it means that, if it would happen to be in state I, it will stay in state I, and the same goes for state II. We write:

〈 I | I 〉 = 〈 II | II 〉 = 1 and 〈 I | II 〉 = 〈 II | I 〉 = 0

That’s actually just the so-called ‘orthogonality’ condition for base states, which we wrote as 〈i|j〉 = 〈j|i〉 = δij, but, in light of the fact that we can’t interpret them geometrically, we shouldn’t be calling it that. The point is: we had those differential equations describing a system like this. If the amplitude to go from state 1 to state 2 is equal to some real- or complex-valued constant A, then we can write those equations either in terms of C1 and C2, or in terms of CI and CII:

iħ·(dC1/dt) = E0·C1 − A·C2 and iħ·(dC2/dt) = −A·C1 + E0·C2

iħ·(dCI/dt) = (E0 + A)·CI = EI·CI and iħ·(dCII/dt) = (E0 − A)·CII = EII·CII
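As a quick sanity check on that equivalence, the sketch below verifies that the CI and CII combinations are, effectively, stationary—i.e. that their probabilities do not vary in time—when we let C1 and C2 evolve as before (ħ = 1, and E0 = 2.5 and A = 1 are, again, arbitrary illustrative values):

```python
import numpy as np

# Starting from C1(0) = 1, C2(0) = 0, the CI and CII combinations should have
# constant probabilities (1/2 each), i.e. they should be stationary states.
E0, A = 2.5, 1.0                         # units such that hbar = 1
t = np.linspace(0, 10, 400)
C1 = np.exp(-1j * E0 * t) * np.cos(A * t)
C2 = 1j * np.exp(-1j * E0 * t) * np.sin(A * t)
C_I = (C1 - C2) / np.sqrt(2)             # in fact, e^(-i*(E0 + A)*t)/sqrt(2)
C_II = (C1 + C2) / np.sqrt(2)            # in fact, e^(-i*(E0 - A)*t)/sqrt(2)
print(np.allclose(np.abs(C_I) ** 2, 0.5), np.allclose(np.abs(C_II) ** 2, 0.5))
```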

So the two sets of equations are equivalent. However, what we want to do here is look at it in terms of CI and CII. Let’s first analyze those two energy levels EI = E0 + A and EII = E0 − A. Feynman graphs them as follows:

[Graphs: the energy levels EI = E0 + A and EII = E0 − A as functions of the interproton distance]

Let me explain. In the first graph, we have EI = E0 + A and EII = E0 − A, and they are depicted as being symmetric, with A depending on the distance between the two protons. As for E0, that’s the energy of a hydrogen atom, i.e. a proton with a bound electron, and a separate proton. So it’s the energy of a system consisting of a hydrogen atom and a proton, which is obviously not the same as that of an ionized hydrogen molecule. The concept of a molecule assumes the protons are closely together. We assume E0 = 0 if the interproton distance is relatively large but, of course, as the protons come closer, we shouldn’t forget the repulsive electrostatic force between the two protons, which is represented by the dashed line in the first graph. Indeed, unlike the electron and the proton, the two protons will want to push apart, rather than pull together, so the potential energy of the system increases as the interproton distance decreases. So E0 is not constant either: it also depends on the interproton distance. But let’s forget about E0 for a while. Let’s look at the two curves for A now.

A is not varying in time, but its value does depend on the distance between the two protons. We’ll use this in a moment to calculate the approximate size of the hydrogen nucleus in a calculation that closely resembles Feynman’s calculation of the size of a hydrogen atom. That A should be some function of the interproton distance makes sense: the transition probability, and therefore A, will exponentially decrease with distance. There are a few things to reflect on here:

1. In the mentioned calculation of the size of a hydrogen atom, which is based on the Uncertainty Principle, Feynman shows that the energy of the system decreases when an electron is bound to the proton. The reasoning is that, if the potential energy of the electron is zero when it is not bound, then its potential energy will be negative when bound. Think of it: the electron and the proton attract each other, so it requires force to separate them, and force over a distance is energy. From our course in electromagnetics, we know that the potential energy, when bound, should be equal to −e²/a0, with e² the squared charge of the electron divided by 4πε0, and a0 the so-called Bohr radius of the atom. Of course, the electron also has kinetic energy. It can’t just sit on top of the proton because that would violate the Uncertainty Principle: we’d know where it was. Combining the two, Feynman calculates both a0 as well as the so-called Rydberg energy, i.e. the total energy of the bound electron, which is equal to −13.6 eV. So, yes, the bound state has less energy, so the electron will want to be bound, i.e. it will want to be close to one of the two protons.

2. Now, while that’s not what’s depicted above, it’s clear the magnitude of A will be related to that Rydberg energy which − please note − is quite high. Just compare it with the A for the ammonia molecule, which we calculated in our post on the maser: we found an A of about 0.5×10^(−4) eV there, so that’s like 270,000 times less! Nevertheless, the possibility is there, and what happens when the electron flips over amounts to tunneling: it penetrates and crosses a potential barrier. We did a post on that, so you may want to look at how that works. One of the weird things we had to consider when a particle crosses such a potential barrier, is that the momentum factor p in its wavefunction was some pure imaginary number, which we wrote as p = i·p′. We then re-wrote that wavefunction as a·e^(−iθ) = a·e^(−i·[(E/ħ)∙t − (i·p′/ħ)·x]) = a·e^(−i·(E/ħ)∙t)·e^(i²·p′·x/ħ) = a·e^(−i·(E/ħ)∙t)·e^(−p′·x/ħ). Now, it’s easy to see that the e^(−p′·x/ħ) factor in this formula is a real-valued exponential function, with the same shape as the general e^(−x) function, which I depict below.

[Graph: the general e^(−x) function]

This e^(−p′·x/ħ) factor basically ‘kills’ our wavefunction as we move in the positive x-direction, across the potential barrier, which is what is illustrated below: if the distance is too large, then the amplitude for tunneling goes to zero.

[Illustration: the amplitude dying out across the potential barrier]

So that’s what’s depicted in those graphs of EI = E0 + A and EII = E0 − A: A goes to zero when the interproton distance becomes too large. We also recognize the exponential shape for A in those graphs, which can also be derived from the same tunneling story.

Now we can calculate E0 + A and E0 − A taking into account that both terms vary with the interproton distance as explained, and so that gives us the final curves on the right-hand side, which tell us that the equilibrium configuration of the ionized hydrogen molecule is state II, i.e. the lowest energy state, and the interproton distance there is approximately one Ångstrom, i.e. 1×10^(−10) m. [You can compare this with the Bohr radius, which we calculated as a0 = 0.528×10^(−10) m, so that all makes sense.] Also note the energy scale: ΔE is the excess energy over a proton plus a hydrogen atom, so that’s the energy when the two protons are far apart. Because it’s the excess energy, we have a zero point. That zero point is, obviously, the energy of a hydrogen atom and a proton. [Read this carefully, and please refer back to what I wrote above. The energy of a system consisting of a hydrogen atom and a proton is not the same as that of an ionized hydrogen molecule: the concept of a molecule assumes the protons are closely together.] We then re-scale by dividing by the Rydberg energy EH = 13.6 eV. So ΔE/EH ≈ −0.2 ⇔ ΔE ≈ −0.2×13.6 eV = −2.72 eV. That basically says that the energy of our ionized hydrogen molecule is 2.72 eV lower than the energy of a hydrogen atom and a proton.

Why is it lower? We need to think about our model of the hydrogen atom once more: the energy of the electron was minimized by striking a balance between (1) being close to the proton and, therefore, having a low potential energy (or a low coulomb energy, as Feynman calls it) and (2) being further away from the proton and, therefore, lowering its kinetic energy according to the Uncertainty Principle ΔxΔp ≥ ħ/2, which Feynman boldly re-wrote as p = ħ/a0. Now, a molecular orbital, i.e. the electron being around two protons, results in “more space where the electron can have a low potential energy”, as Feynman puts it, so “the electron can spread out—lowering its kinetic energy—without increasing its potential energy.”

The whole discussion here actually amounts to an explanation for the mechanism by which an electron shared by two protons provides, in effect, an attractive force between the two protons. So we’ve got a single electron actually holding two protons together, which chemists refer to as a “one-electron bond.”

So… Well… That explains why the energy EII = E0 − A is what it is, so that’s smaller than E0 indeed, with the difference equal to the value of A for an interproton distance of 1 Å. But how should we interpret EI = E0 + A? What is that higher energy level? What does it mean?

That’s a rather tricky question. There’s no easy interpretation here, like we had for our ammonia molecule: there, the higher energy level had an obvious physical meaning in an electromagnetic field, as it was related to the electric dipole moment of the molecule. That’s not the case here: we have no magnetic or electric dipole moment here. So, once again, what’s the physical meaning of EI = E0 + A? Let me quote Feynman’s enigmatic answer here:

“Notice that this state is the difference of the states |1⟩ and |2⟩. Because of the symmetry of |1⟩ and |2⟩, the difference must have zero amplitude to find the electron half-way between the two protons. This means that the electron is somewhat more confined, which leads to a larger energy.”

What does he mean with that? It seems he’s actually trying to do what I said we shouldn’t try to do, and that is to interpret what adding versus subtracting states actually means. But let’s give it a fair look. We said that the |I〉 = (1/√2)[|1〉 − |2〉] expression didn’t mean much: we should add a final state and write: 〈ψ|I〉 = (1/√2)[〈ψ|1〉 − 〈ψ|2〉], which is equivalent to 〈I|ψ〉 = (1/√2)[〈1|ψ〉 − 〈2|ψ〉]. That still doesn’t tell us anything: we’re still adding amplitudes, and so we should allow for interference, and saying that |1⟩ and |2⟩ are symmetric simply means that 〈1|ψ〉 − 〈2|ψ〉 = 〈2|ψ〉 − 〈1|ψ〉 ⇔ 2·〈1|ψ〉 = 2·〈2|ψ〉 ⇔ 〈1|ψ〉 = 〈2|ψ〉. Wait a moment! That’s an interesting reflection. Following the same reasoning for |II〉 = (1/√2)[|1〉 + |2〉], we get 〈1|ψ〉 + 〈2|ψ〉 = 〈2|ψ〉 + 〈1|ψ〉 ⇔ … Huh? No, that’s trivial: 0 = 0.

Hmm… What to say? I must admit I don’t quite ‘get’ Feynman here: state I, with energy EI = E0 + A, seems to be both meaningless as well as impossible. The only energy levels that would seem to make sense here are the energy of a hydrogen atom and a proton and the (lower) energy of an ionized hydrogen molecule, which you get when you bring a hydrogen atom and a proton together. 🙂

But let’s move to the next thing: we’ve added only one electron to the two protons, and that was it, and so we had an ionized hydrogen molecule, i.e. an H2+ molecule. Why don’t we do a full-blown H2 molecule now? Two protons. Two electrons. It’s easy to do. The set of base states is quite predictable, and illustrated below: electron a can be with either one of the two protons, and the same goes for electron b.

[Illustration: the base states of the H2 molecule: electron a and electron b with either proton]

We can then go through the same motions as for the ion: the molecule’s stability is shown in the graph below, which is very similar to the graph of the energy levels of the ionized hydrogen molecule, i.e. the H2+ molecule. The shape is the same, but the values are different: the equilibrium state is at an interproton distance of 0.74 Å, and the energy of the equilibrium state is like 5 eV (ΔE/EH ≈ −0.375) lower than the energy of two separate hydrogen atoms.

[Graph: the energy of the H2 molecule as a function of the interproton distance]

The explanation for the lower energy is the same: state II is associated with some kind of molecular orbital for both electrons, resulting in “more space where the electron can have a low potential energy”, as Feynman puts it, so “the electron can spread out—lowering its kinetic energy—without increasing its potential energy.”

However, there’s one extra thing here: the two electrons must have opposite spins. That’s the only way to actually distinguish the two electrons. But there is more to it: if the two electrons would not have opposite spin, we’d violate Fermi’s rule: when identical fermions are involved, and we’re adding amplitudes, then we should do so with a negative sign for the exchanged case. So our transformation would be problematic:

〈II|ψ〉 = (1/√2)[〈1|ψ〉 + 〈2|ψ〉] = (1/√2)[〈2|ψ〉 + 〈1|ψ〉]

When we switch the electrons, we should get a minus sign. The weird thing is: we do get that minus sign for state I:

〈I|ψ〉 = (1/√2)[〈1|ψ〉 − 〈2|ψ〉] = −(1/√2)[〈2|ψ〉 − 〈1|ψ〉]

So… Well… We’ve got a bit of an answer there as to what the ‘other’ (upper) energy level EI = E0 + A actually means, in physical terms, that is. It models two hydrogen atoms coming together with parallel electron spins. Applying Fermi’s rules – i.e. the exclusion principle, basically – we find that state II is, quite simply, not allowed for parallel electron spins: state I is, and it’s the only one. There’s something deep here, so let me quote the Master himself on it:

“We find that the lowest energy state—the only bound state—of the H2 molecule has the two electrons with spins opposite. The total spin angular momentum of the electrons is zero. On the other hand, two nearby hydrogen atoms with spins parallel—and so with a total angular momentum ħ—must be in a higher (unbound) energy state; the atoms repel each other. There is an interesting correlation between the spins and the energies. It gives another illustration of something we mentioned before, which is that there appears to be an “interaction” energy between two spins because the case of parallel spins has a higher energy than the opposite case. In a certain sense you could say that the spins try to reach an antiparallel condition and, in doing so, have the potential to liberate energy—not because there is a large magnetic force, but because of the exclusion principle.”

You should read this a couple of times. It’s an important principle. We’ll discuss it again in the next posts, when we’ll be talking about spin in much more detail once again. 🙂 The bottom line is: if the electron spins are parallel, then the electrons won’t ‘share’ any space at all and, hence, they are much more confined in space, and the associated energy level is, therefore, much higher.

Post scriptum: I said we’d ‘calculate’ the equilibrium interproton distance. We didn’t do that. We just read the values off the graphs, which are based on the results of a ‘detailed quantum-mechanical calculation’—or that’s what Feynman claims, at least. I am not sure if they correspond to experimentally determined values, or what calculations are behind them, exactly. Feynman notes that “this approximate treatment of the H2 molecule as a two-state system breaks down pretty badly once the protons get as close together as they are at the minimum in the curve and, therefore, it will not give a good value for the actual binding energy. For small separations, the energies of the two “states” we imagined are not really equal to E0, and a more refined quantum mechanical treatment is needed.”

So… Well… That says it all, I guess.

Two-state systems: the math versus the physics, and vice versa.

I think my previous post, on the math behind the maser, was a bit of a brain racker. However, the results were important and, hence, it is useful to generalize them so we can apply them to other two-state systems. 🙂 Indeed, we’ll use the very same two-state framework to analyze things like the stability of neutral and ionized hydrogen molecules and the binding of diatomic molecules in general – and lots of other stuff that can be analyzed as a two-state system. However, let’s first have a look at the math once more. More importantly, let’s analyze the physics behind it.

At the center of our little Universe here 🙂 is the fact that the dynamics of a two-state system are described by a set of two differential equations, which we wrote as:

iħ·(dC1/dt) = H11·C1 + H12·C2
iħ·(dC2/dt) = H21·C1 + H22·C2

or, in matrix notation, as dC/dt = −(i/ħ)·H·C, with C the column matrix of the C1 and C2 amplitudes and H the matrix of the Hamiltonian coefficients Hij.

It’s obvious these two equations are usually not easy to solve: the C1 and C2 functions are complex-valued amplitudes which vary not only in time but also in space, obviously, but, in fact, that’s not the problem. The issue is that the Hamiltonian coefficients Hij may also vary in space and in time, and so that’s what makes things quite nightmarish to solve. [Note that, while H11 and H22 represent some energy level and, hence, are usually real numbers, H12 and H21 may be complex-valued. However, in the cases we’ll be analyzing, they will be real numbers too, as they will usually also represent some energy. Having noted that, being real- or complex-valued is not the problem: we can work with complex numbers and, as you can see from the matrix equation above, the i/ħ factor in front of our differential equations results in a complex-valued coefficient matrix anyway.]

So… Yes. It’s those non-constant Hamiltonian coefficients that caused us so much trouble when trying to analyze how a maser works or, more generally, how induced transitions work. [The same equations apply to blackbody radiation, indeed, and to other phenomena involving induced transitions.] In any case, we won’t do that again – not now, at least – so we’ll just go back to analyzing ‘simple’ two-state systems, i.e. systems with constant Hamiltonian coefficients.
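In fact, for constant coefficients, you can easily check the whole story numerically. The sketch below is mine, not Feynman’s: a few lines of Python, with ħ set to 1 and purely illustrative values for E0 and A (so don’t read anything physical into the numbers). It just integrates the two equations and verifies that the total probability stays equal to one:

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0                    # natural units (illustrative assumption)
E0, A = 1.0, 0.1              # made-up energy and coupling values
H = np.array([[E0, -A],
              [-A, E0]], dtype=complex)   # constant Hamiltonian coefficients Hij

def rhs(t, C):
    # i*hbar * dC/dt = H @ C  <=>  dC/dt = -(i/hbar) * H @ C
    return -1j / hbar * (H @ C)

C0 = np.array([1, 0], dtype=complex)      # start the system in base state |1>
sol = solve_ivp(rhs, (0, 50), C0, rtol=1e-10, atol=1e-10)

P = np.abs(sol.y) ** 2                    # probabilities |C1|^2 and |C2|^2
print(np.allclose(P.sum(axis=0), 1.0))    # total probability is conserved: True
```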

Now, even for such simple systems, Feynman made life super-easy for us – too easy, I think – because he didn’t use the general mathematical approach to solve the issue at hand. That more general approach would be based on a technique you may or may not remember from your high school or university days: finding the so-called eigenvalues and eigenvectors of the coefficient matrix. I won’t say too much about that, as there’s excellent online coverage of it, but… Well… We do need to relate the two approaches, and so that’s where math and physics meet. So let’s have a look at it all.

If we write the first-order time derivatives of those C1 and C2 functions as C1′ and C2′ respectively (so we just put a prime instead of writing dC1/dt and dC2/dt), and we put them in a two-by-one column matrix, which I’ll write as C′, and then, likewise, we also put the functions themselves, i.e. C1 and C2, in a column matrix, which I’ll write as C, then the system of equations can be written as the following simple expression:

C′ = A·C

One can then show that the general solution will be equal to:

C = a1·eλI·t·vI + a2·eλII·t·vII

The λI and λII in the exponential functions are the eigenvalues of A, so that’s that two-by-two matrix in the equation, i.e. the coefficient matrix with the −(i/ħ)Hij elements. The vI and vII column matrices in the solution are the associated eigenvectors. As for a1 and a2, these are coefficients that depend on the initial conditions of the system as well as, in our case at least, the normalization condition: the probabilities we’ll calculate have to add up to one. So… Well… It all comes with the system, as we’ll see in a moment.
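Again, nothing stops us from checking this with a computer. The Python sketch below – same caveats as before: ħ = 1 and made-up values for the Hamiltonian coefficients – builds the general solution from the eigenvalues and eigenvectors of A, and recovers the two energy levels from the eigenvalues:

```python
import numpy as np

hbar, E0, A = 1.0, 1.0, 0.1                # same illustrative values as before
H = np.array([[E0, -A], [-A, E0]], dtype=complex)
Amat = -1j / hbar * H                      # the coefficient matrix with the -(i/hbar)*Hij elements

lam, V = np.linalg.eig(Amat)               # eigenvalues lam_I, lam_II; eigenvectors as columns of V

C0 = np.array([1, 0], dtype=complex)       # initial condition: the system starts in state |1>
a = np.linalg.solve(V, C0)                 # a1 and a2 follow from C(0) = a1*v_I + a2*v_II

def C(t):
    # the general solution: C(t) = a1*exp(lam_I*t)*v_I + a2*exp(lam_II*t)*v_II
    return V @ (a * np.exp(lam * t))

# lam = -(i/hbar)*E, so the energies hide in the eigenvalues:
print(sorted((1j * hbar * lam).real))      # [E0 - A, E0 + A]
```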

Let’s first look at those eigenvalues. We get them by calculating the determinant of the matrix A−λI – with I denoting the identity matrix here, not the Roman numeral – and equating it to zero, so we write det(A−λI) = 0. If A is a two-by-two matrix (which it is for the two-state systems that we are looking at), then we get a quadratic equation in λ, and its two solutions will be those λI and λII values. The two eigenvalues of our system above can be written as:

λI = −(i/ħ)·EI and λII = −(i/ħ)·EII.

EI and EII are two possible values for the energy of our system, which are referred to as the upper and the lower energy level respectively. We can calculate them as:

EI = (H11 + H22)/2 + √[(H11 − H22)2/4 + H12H21]
EII = (H11 + H22)/2 − √[(H11 − H22)2/4 + H12H21]
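If you want to check this closed-form expression, just diagonalize numerically—the Hij values in the sketch below are made up, of course:

```python
import numpy as np

H11, H22, H12, H21 = 1.2, 0.8, -0.3, -0.3      # illustrative (real, symmetric) values
H = np.array([[H11, H12], [H21, H22]])

root = np.sqrt(((H11 - H22) / 2) ** 2 + H12 * H21)
E_I  = (H11 + H22) / 2 + root                  # the upper energy level
E_II = (H11 + H22) / 2 - root                  # the lower energy level

print(np.allclose([E_II, E_I], np.linalg.eigvalsh(H)))   # True
```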

Note that we use the Roman numerals I and II for these two energy levels, rather than the usual Arabic numbers 1 and 2. That’s in line with Feynman’s notation: it relates to a special set of base states that we will introduce shortly. Indeed, plugging the eigenvalues into the a1·eλI·t and a2·eλII·t expressions gives us a1·e−(i/ħ)·EI·t and a2·e−(i/ħ)·EII·t and…

Well… It’s time to go back to the physics class now. What are we writing here, really? These two functions are amplitudes for so-called stationary states, i.e. states that are associated with probabilities that do not change in time. Indeed, it’s easy to see that their absolute square is equal to:

  • PI = |a1·e−(i/ħ)·EI·t|2 = |a1|2·|e−(i/ħ)·EI·t|2 = |a1|2
  • PII = |a2·e−(i/ħ)·EII·t|2 = |a2|2·|e−(i/ħ)·EII·t|2 = |a2|2

Now, the a1 and a2 coefficients depend on the initial and/or normalization conditions of the system, so let’s leave those out for the moment and write the rather special amplitudes e−(i/ħ)·EI·t and e−(i/ħ)·EII·t as:

  • CI = 〈 I | ψ 〉 = e−(i/ħ)·EI·t
  • CII = 〈 II | ψ 〉 = e−(i/ħ)·EII·t

As you can see, there are two base states that go with these amplitudes, which we denote as state | I 〉 and | II 〉 respectively, so we can write the state vector of our two-state system – like our ammonia molecule, or whatever – as:

| ψ 〉 = | I 〉 CI + | II 〉 CII = | I 〉〈 I | ψ 〉 + | II 〉〈 II | ψ 〉

In case you forgot, you can apply the magical 1 = ∑ | i 〉〈 i | formula to see this makes sense: | ψ 〉 = ∑ | i 〉〈 i | ψ 〉 = | I 〉〈 I | ψ 〉 + | II 〉〈 II | ψ 〉 = | I 〉 CI + | II 〉 CII.

Of course, we should also be able to revert back to the base states we started out with so, once we’ve calculated C1 and C2, we can also write the state of our system in terms of state | 1 〉 and | 2 〉, which are the states as we defined them when we first looked at the problem. 🙂 In short, once we’ve got C1 and C2, we can also write:

| ψ 〉 = | 1 〉 C1 + | 2 〉 C2 = | 1 〉〈 1 | ψ 〉 + | 2 〉〈 2 | ψ 〉

So… Well… I guess you can sort of see how this is coming together. If we substitute what we’ve got so far, we get:

C = a1·CI·vI + a2·CII·vII

Hmm… So what’s that? We’ve seen something like C = a1·CI + a2·CII—we wrote something like C1 = (a/2)·CI + (b/2)·CII in our previous posts, for example—but what are those eigenvectors vI and vII? Why do we need them?

Well… They just pop up because we’re solving the system as mathematicians would do it, i.e. not as Feynman-the-Great-Physicist-and-Teacher-cum-Simplifier does it. 🙂 From a mathematical point of view, they’re the vectors that solve the (A − λI·I)vI = 0 and (A − λII·I)vII = 0 equations, so they come with the eigenvalues, and their components will depend on the eigenvalues λI and λII as well as on the Hamiltonian coefficients. [I is the identity matrix in these matrix equations.] In fact, because the eigenvalues are themselves written in terms of the Hamiltonian coefficients, the components depend on the Hamiltonian coefficients only, but it will be convenient to use the EI and EII values as a shorthand.

Of course, one can also look at them as base vectors that uniquely specify the solution C as a linear combination of vI and vII. Indeed, just ask your math teacher, or google, and you’ll find that eigenvectors can serve as a set of base vectors themselves. In fact, the transformations you need to do to relate them to the so-called natural basis are the ones you’d do when diagonalizing the coefficient matrix A, which you did when solving systems of equations back in high school or whatever you were doing at university. But then you probably forgot, right? 🙂 Well… It’s all rather advanced mathematical stuff, and so let’s cut some corners here. 🙂

We know, from the physics of the situation, that the C1 and C2 functions and the CI and CII functions are related in the same way as the associated base states. To be precise, we wrote:

C1 = cos(α/2)·CI + sin(α/2)·CII
C2 = −sin(α/2)·CI + cos(α/2)·CII

This two-by-two matrix is the transformation matrix for a rotation of a state filtering apparatus about the y-axis, over an angle equal to α, when only two states are involved. You’ve seen it before, but we wrote it differently:

〈 +T | +S 〉 = cos(α/2)   〈 +T | −S 〉 = −sin(α/2)
〈 −T | +S 〉 = sin(α/2)   〈 −T | −S 〉 = cos(α/2)

In fact, we can be more precise: the angle that we chose was equal to minus 90 degrees. Indeed, we wrote our transformation as:

C1 = (1/√2)·CI − (1/√2)·CII
C2 = (1/√2)·CI + (1/√2)·CII

[Check the values against α = −π/2.] However, let’s keep our analysis somewhat more general for the moment, so as to see if we really need to specify that angle. After all, we’re looking for a general solution here, so… Well… Remembering the definition of the inverse of a matrix (and the fact that cos2α + sin2α = 1), we can write:

CI = cos(α/2)·C1 − sin(α/2)·C2
CII = sin(α/2)·C1 + cos(α/2)·C2
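As an aside – and this is my own two-second check in Python, not anything Feynman does – you can verify that the inverse of this rotation-type matrix is indeed just its transpose, which is all that cos2α + sin2α = 1 remark amounts to:

```python
import numpy as np

def T(alpha):
    # the transformation matrix with the cos(alpha/2) and sin(alpha/2) entries
    c, s = np.cos(alpha / 2), np.sin(alpha / 2)
    return np.array([[c, s], [-s, c]])

M = T(-np.pi / 2)
# cos^2 + sin^2 = 1, so inverting the matrix amounts to transposing it:
print(np.allclose(np.linalg.inv(M), M.T))   # True
```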

Now, if we write the components of vI and vII as vI1 and vI2, and vII1 and vII2 respectively, then the C = a1·CI·vI + a2·CII·vII expression is equivalent to:

  • C1 = a1·vI1·CI + a2·vII1·CII
  • C2 = a1·vI2·CI + a2·vII2·CII

Hence, a1·vI1 = a2·vII2 = cos(α/2) and a2·vII1 = −a1·vI2 = sin(α/2). What can we do with this? Can we solve this? Not really: we’ve got two equations and four variables. So we need to look at the normalization and starting conditions now. For example, we can choose our t = 0 point such that our two-state system is in state 1, or in state I. And then we know it will not be in state 2, or state II. In short, we can impose conditions like:

|C1(0)|2 = 1 = |a1·vI1·CI(0) + a2·vII1·CII(0)|2 and |C2(0)|2 = 0 = |a1·vI2·CI(0) + a2·vII2·CII(0)|2

However, as Feynman puts it: “These conditions do not uniquely specify the coefficients. They are still undetermined by an arbitrary phase.”

Hmm… He means the α, of course. So… What to do? Well… It’s simple. What he’s saying here is that we do need to specify that transformation angle. Just look at it: the a1·vI1 = a2·vII2 = cos(α/2) and a2·vII1 = −a1·vI2 = sin(α/2) conditions only make sense when we equate α with −π/2, so we can write:

  • a1·vI1 = a2·vII2 = cos(−π/4) = 1/√2
  • a2·vII1 = −a1·vI2 = sin(−π/4) = −1/√2

It’s only then that we get a unique ratio a1/a2 = vI1/vII2 = −vII1/vI2. [In case you think there are two angles in the circle for which the cosine equals minus the sine – or, what amounts to the same, for which the sine equals minus the cosine – then… Well… You’re right, but we’ve got α divided by two in the argument. So if α/2 is equal to the ‘other’ angle, i.e. 3π/4, then α itself will be equal to 6π/4 = 3π/2. And so that’s the same −π/2 angle as above: 3π/2 − 2π = −π/2, indeed. So… Yes. It all makes sense.]
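By the way, you can let Python do that little angle chase for you:

```python
import numpy as np

# the two half-angles for which the sine equals minus the cosine
half_angles = np.array([-np.pi / 4, 3 * np.pi / 4])
alphas = np.mod(2 * half_angles, 2 * np.pi)    # the corresponding full angles, modulo 2*pi
print(alphas)                                  # both come out as 3*pi/2, i.e. -pi/2
```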

What are we doing here? Well… We’re sort of imposing a ‘common-sense’ condition. Think of it: if the vI1/vII2 and −vII1/vI2 ratios were different, we’d have a huge problem, because we’d have two different values for the a1/a2 ratio! And… Well… That just doesn’t make sense. The system must come with some specific value for a1 and a2. We can’t just invent two ‘new’ ones!

So… Well… We are alright now, and we can analyze whatever two-state system we want. One example was our ammonia molecule in an electric field, for which we found that the following systems of equations were fully equivalent:

iħ·(dC1/dt) = (E0 + μℰ)·C1 − A·C2 ⇔ iħ·(dCI/dt) = (E0 + A)·CI + μℰ·CII
iħ·(dC2/dt) = −A·C1 + (E0 − μℰ)·C2 ⇔ iħ·(dCII/dt) = μℰ·CI + (E0 − A)·CII

So, the upshot is that you should always remember that everything we’re doing is subject to the condition that the ‘1’ and ‘2’ base states and the ‘I’ and ‘II’ base states (Feynman suggests reading I and II as ‘Eins’ and ‘Zwei’ – or try ‘Uno‘ and ‘Duo‘ instead 🙂 – so as to distinguish them from ‘one’ and ‘two’) are ‘separated’ by an angle of (minus) 90 degrees. [Of course, I am not using the ‘right’ language here, obviously. I should say ‘projected’, or ‘orthogonal’, perhaps, but that’s hard to say for base states: the [1/√2, 1/√2] and [1/√2, −1/√2] vectors are obviously orthogonal, because their dot product is zero, but, as you know, the base states themselves do not have such a geometrical interpretation: they’re just ‘objects’ in what’s referred to as a Hilbert space. But… Well… I shouldn’t dwell on that here.]

So… There we are. We’re all set. Good to go! Please note that, in the absence of an electric field, the two Hamiltonians are even simpler:

iħ·(dC1/dt) = E0·C1 − A·C2 ⇔ iħ·(dCI/dt) = (E0 + A)·CI
iħ·(dC2/dt) = −A·C1 + E0·C2 ⇔ iħ·(dCII/dt) = (E0 − A)·CII

In fact, they’ll usually do the trick in what we’re going to deal with now.
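To see what these simpler Hamiltonians give you, here’s one more sketch of mine (ħ = 1 and made-up numbers again): starting from base state | 1 〉, the probability just sloshes back and forth between the two base states, with P1 = cos2(A·t/ħ):

```python
import numpy as np
from scipy.linalg import expm

hbar, E0, A = 1.0, 1.0, 0.1                     # natural units, illustrative values
H = np.array([[E0, -A], [-A, E0]], dtype=complex)

for t in np.linspace(0, 60, 7):
    C = expm(-1j * H * t / hbar) @ np.array([1, 0])   # evolve base state |1>
    # the probability flip-flops between the base states: P1 = cos^2(A*t/hbar)
    assert np.isclose(abs(C[0]) ** 2, np.cos(A * t / hbar) ** 2)
print("the probability sloshes back and forth, as expected")
```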

[…] So… Well… That’s it, really! 🙂 We’re now going to apply all of this in the next posts, so as to analyze things like the stability of neutral and ionized hydrogen molecules and the binding of diatomic molecules. More interestingly, we’re going to talk about virtual particles. 🙂

Addendum: I started writing this post because Feynman actually does give the impression there’s some kind of ‘doublet’ of a1 and a2 coefficients as he starts his chapter on ‘other two-state systems’. It’s the symbols he’s using: ‘his’ a1 and a2, and the other doublet with the primes, i.e. a1′ and a2′, are the transformation amplitudes, not the coefficients that I am calculating above, and that he was calculating (in the previous chapter) too. So… Well… Again, the only thing you should remember from this post is that 90-degree angle as a sort of physical ‘common sense’ condition on the system.

Having criticized the Great Teacher for not being consistent in his use of symbols, I should add that the interesting thing is that, while confusing, his summary in that chapter does give us precise formulas for those transformation amplitudes, which he didn’t do before. Indeed, if we write them as a, b, c and d respectively (so as to avoid that confusing a1 and a2, and then a1′ and a2′ notation), so if we have:

CI = a·C1 + b·C2
CII = c·C1 + d·C2

then one can show that:

a/b = H12/(EI − H11) and c/d = H12/(EII − H11), which, with H11 = H22 = E0 and H12 = H21 = −A, gives a/b = −1 and c/d = +1 and, hence, after normalization, a = c = d = 1/√2 and b = −1/√2.

That’s, of course, fully consistent with the ratios we introduced above, as well as with the orthogonality condition that comes with those eigenvectors. Indeed, if a/b = −1 and c/d = +1, then a/b = −c/d and, therefore, a·d + b·c = 0. [I’ll leave it to you to compare the coefficients so as to check that’s the orthogonality condition indeed.]
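And since we’ve been double-checking everything numerically anyway, one last sketch, using the 1/√2 values we just found:

```python
import numpy as np

# the transformation amplitudes we found above (for the H12 = H21 = -A case)
a, b = 1 / np.sqrt(2), -1 / np.sqrt(2)      # C_I  = a*C1 + b*C2
c, d = 1 / np.sqrt(2),  1 / np.sqrt(2)      # C_II = c*C1 + d*C2
U = np.array([[a, b], [c, d]])

print(np.isclose(a * d + b * c, 0))         # the orthogonality condition: True
print(np.allclose(U @ U.T, np.eye(2)))      # and the matrix preserves probabilities: True
```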

In short, it all shows everything does come out of the system in a mathematical way too, so the math does match the physics once again—as it should, of course! 🙂