# The Hamiltonian of matter in a field

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. So there’s no use reading this. Read my recent papers instead. 🙂

Original post:

In this and the next post, I want to present some essential discussions in Feynman’s 10th, 11th and 12th Lectures on Quantum Mechanics. This post in particular will present the Hamiltonian for the spin state of an electron, but the discussion is much more general than that: it’s a model for any spin-1/2 particle, i.e. for all elementary fermions – so that’s the ‘matter-particles’ you know: electrons, protons and neutrons. Or, taking into account that protons and neutrons consist of quarks, we should say quarks, which also have spin 1/2. So let’s go for it. Let me first, by way of introduction, remind you of a few things.

#### What is it that we are trying to do?

That’s always a good question to start with. 🙂 Just for fun, and as we’ll be talking a lot about symmetries and directions in space, I’ve inserted an animation below of a four-dimensional object, as its author calls it. This ‘object’ returns to its original configuration only after a rotation of 720 degrees (after 360 degrees, the spiral flips between clockwise and counterclockwise orientations, so it’s not the same). For some rather obscure reason 🙂 he refers to it as a spin-1/2 particle, or a spinor.

Are spin one-half particles, like an electron or a proton, really four-dimensional? Well… I guess so. It all depends, of course, on your definition or concept of a dimension. 🙂 Indeed, the term is as well – I should say, as badly, really – defined as the ubiquitous term ‘vector’ and so… Well… Let me just say that spinors are usually defined in four-dimensional vector spaces, indeed. […] So is this what it’s all about, and should we talk about spinors?

Not really. Feynman doesn’t push the math that far, so I won’t do that either. 🙂 In fact, I am not sure why he’s holding back here: spinors are just mathematical objects, like vectors or tensors, which we introduced in one of our posts on electromagnetism, so why not have a go at it? You’ll remember that our electromagnetic tensor was like a special vector cross-product which, using the four-potential vector Aμ and the ∂μ = (∂/∂t, −∂/∂x, −∂/∂y, −∂/∂z) operator, we could write as (∂μAμ) − (∂μAμ)T.

Huh? Hey! Relax! It’s a matrix equation. It looks like this:

In fact, I left c out above, and so we should plug it in, remembering that B’s magnitude is 1/c times E’s magnitude. So the electromagnetic tensor – in one of its many forms, at least – is the following matrix:

Why do we need a beast like this? Well… Have a look at the mentioned post or, better, at one of the subsequent posts: we used it in very powerful equations (read: very concise equations, because that’s what mathematicians, and physicists, like) describing the dynamics of a system. So we have something similar here: we’re trying to describe the dynamics of a quantum-mechanical system in terms of the evolution of its state, which we express as a linear combination of ‘pure’ base states, which we wrote as:

|ĻāŖ = |1āŖC1Ā +Ā |2āŖC2Ā = |1āŖā©1|ĻāŖ + |2 āŖā©2|ĻāŖ

C1 and C2 are complex-valued wavefunctions, or amplitudes as we call them, and the dynamics of the system are captured in a set of differential equations, which we wrote as:

iħ·(dCi/dt) = Σj Hij·Cj

The trick was to know or guess our Hamiltonian, i.e. we had to know or, more likely, guess those Hij coefficients (and then find experiments to confirm our guesses). Once we got those, it was a piece of cake. We’d solve for C1 and C2, and then take their absolute square so as to get probability functions, like the ones we found for our ammonia (NH3) molecule: P1(t) = |C1(t)|² = cos²[(A/ħ)·t] and P2(t) = |C2(t)|² = sin²[(A/ħ)·t]. These tell us that, if we would take a measurement, then the probability of finding the molecule in the ‘up’ or ‘down’ state (i.e. state 1 versus state 2) varies as shown:
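The two probability functions above are easy to check numerically. The sketch below uses illustrative values (ħ set to 1, an arbitrary A) rather than physical constants, and simply verifies that the two probabilities oscillate in opposite phase and always sum to one:

```python
import numpy as np

# Illustrative values, not physical constants: we work in units where
# hbar = 1 and pick an arbitrary flip-flop energy A.
hbar = 1.0
A = 0.5

t = np.linspace(0.0, 4.0 * np.pi, 1000)
P1 = np.cos((A / hbar) * t) ** 2   # probability of finding state 1
P2 = np.sin((A / hbar) * t) ** 2   # probability of finding state 2

# At t = 0 the system is surely in state 1, and the probabilities
# always add up to one, as they should.
assert np.isclose(P1[0], 1.0)
assert np.allclose(P1 + P2, 1.0)
```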

So here we are going to generalize the analysis: rather than guessing, or assuming we know them (from experiment, for example, or because someone else told us so), we’re going to calculate what those Hamiltonian coefficients are in general.

Now, returning to those spinors, it’s rather daunting to think that such a simple thing as being in the ‘up’ or ‘down’ condition has to be represented by some mathematical object that’s at least as complicated as these tensors. But… Well… I am afraid that’s the way it is. Having said that, Feynman himself seems to consider that math to be for graduate students in physics, rather than for the undergraduate public for which he wrote the course. Hence, while he presented all of the math in the Lecture Volume on electromagnetism, he keeps things as simple as possible in the Volume on quantum mechanics. So… No. We will not be talking about spinors here.

The only reason why I started out with that wonderful animation is to remind you of the weirdness of quantum mechanics as evidenced by, for example, the fact that I almost immediately got into trouble when trying to associate base states with two-dimensional geometric vectors when writing my post on the hydrogen molecule, or when thinking about the magnitude of the quantum-mechanical equivalent of the angular momentum of a particle (see my post on spin and angular momentum).

Thinking of that, it’s probably good to remind ourselves of the latter discussion. If we denote the angular momentum as J, then we know that, in classical mechanics, any of J’s components Jx, Jy or Jz could take on any value from +J to −J and, therefore, the maximum value of any component of J – say Jz – would be equal to J. To be precise, J would be the value of the component of J in the direction of J itself. So, in classical mechanics, we’d write: |J| = +√(J·J) = +√J² = J, and it would be the maximum value of any component of J.

However, in quantum mechanics, that’s not the case. If the spin number of J is j, then the maximum value of any component of J is equal to j·ħ. In this case, the spin number will be either +1/2 or −1/2. So, naturally, one would think that J, i.e. the magnitude of J, would be equal to J = |J| = +√(J·J) = +√J² = j·ħ = ħ/2. But that’s not the case: J = |J| ≠ j·ħ = ħ/2. To calculate the magnitude, we need to calculate J² = Jx² + Jy² + Jz². So the idea is to measure these repeatedly and use the expected value of Jx², Jy² and Jz² in the formula. Now, that’s pretty simple: we know that Jx, Jy or Jz are equal to either +ħ/2 or −ħ/2 and, in the absence of a field (i.e. in free space), there’s no preference, so both values are equally likely. To make a long story short, the expected value of Jx², Jy² and Jz² is equal to (1/2)·(ħ/2)² + (1/2)·(−ħ/2)² = ħ²/4, and J² = 3·ħ²/4 = j(j+1)·ħ², with j = 1/2. So J = |J| = +√J² = √(3·ħ²/4) = √3·(ħ/2) ≈ 0.866·ħ. Now that’s a huge difference as compared to j·ħ = ħ/2.
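The little calculation above fits in a few lines of code. This is just the arithmetic of the paragraph, in units where ħ = 1:

```python
# Units where hbar = 1; j = 1/2 for a spin one-half particle.
hbar = 1.0
j = 0.5

# Each component measures +hbar/2 or -hbar/2 with equal probability,
# so the expected value of its square is:
exp_sq = 0.5 * (hbar / 2) ** 2 + 0.5 * (-hbar / 2) ** 2   # = hbar^2 / 4

# J^2 = Jx^2 + Jy^2 + Jz^2 (three equal expected values):
J_squared = 3 * exp_sq
assert abs(J_squared - j * (j + 1) * hbar ** 2) < 1e-12   # = 3/4

# The magnitude is sqrt(3)/2 ~ 0.866 times hbar, not hbar/2.
J = J_squared ** 0.5
assert abs(J - (3 ** 0.5) * (hbar / 2)) < 1e-12
```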

What we’re saying here is that the magnitude of the angular momentum is √3 ≈ 1.7 times the maximum value of the angular momentum in any direction. How is that possible? Thinking classically, this is nonsensical. However, we need to stop thinking classically here: it means that, when we’re talking atomic or sub-atomic particles, their angular momentum is never completely in one direction. This implies we need to revise our classical idea of an oriented (electric or magnetic) moment: to put it simply, we find it’s never in one direction only! Alternatively, we might want to re-visit our concept of direction itself, but we do not want to go there: we continue to say we’re measuring this or that quantity in this or that direction. Of course we do! What’s the alternative? There’s none. You may think we didn’t use the proper definition of the magnitude of a quantity when calculating J as √3·(ħ/2), but… Well… You’ll find yourself alone with that opinion. 🙂

This weird thing really comes with the experimental fact that, if you measure the angular momentum along any axis, you’ll find it is always an integer or half-integer times ħ. Always! So it comes with the experimental fact that energy levels are discrete: they’re separated by the quantum of energy, which is ħ, and which explains why we have the 1/ħ factor in all coefficients in the coefficient matrix for our set of differential equations. The Hamiltonian coefficients represent energies indeed, and so we’ll want to measure them in units of ħ.

Of course, now you’ll wonder: why the −i? I wish I could give you a simple answer here, like: “The −i factor corresponds to a rotation by −π/2, and that’s the angle we use to go from our ‘up’ and ‘down’ base states to the ‘Uno‘ and ‘Duo‘ (I and II) base states.” 🙂 Unfortunately, this easy answer isn’t the answer. I need to refer you to my post on the Hamiltonian: the true answer is that it’s got to do with the i in the e^−(i/ħ)·(E·t − p∙x) function: the E, i.e. the energy, is real – most of the time, at least 🙂 – but the wavefunction is what it is: a complex exponential. So… Well…
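Whatever the deeper origin of that −i, one thing is easy to verify: a factor like −i is just a phase factor, and a phase factor never changes any probability, because the absolute square is unaffected. A quick sanity check, with an arbitrary (made-up) amplitude:

```python
import numpy as np

C = 0.6 + 0.8j                     # an arbitrary complex amplitude, |C|^2 = 1
phase = np.exp(-1j * np.pi / 2)    # a rotation by -pi/2 in the complex plane

# e^(-i*pi/2) is indeed just -i...
assert np.isclose(phase, -1j)

# ...and multiplying an amplitude by it leaves the probability intact.
assert np.isclose(abs(phase * C) ** 2, abs(C) ** 2)
```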

Frankly, that’s more than enough as an introduction. You may want to think about the imaginary momentum of virtual particles here – i.e. ‘particles’ that are being exchanged as part of a ‘state switch’ – but then we’d be babbling for hours! So let’s just do what we wanted to do here, and that is to find the Hamiltonian for a spin one-half particle in general, so that’s usually in some field, rather than in free space. 🙂

So here we go. Finally! 🙂

#### The Hamiltonian of a spin one-half particle in a magnetic field

We’ve actually done some really advanced stuff already. For example, when discussing the ammonia maser, we agreed on the following Hamiltonian in order to make sense of what happens inside of the maser’s resonant cavity:

H11 = E0 + με, H12 = −A, H21 = −A, and H22 = E0 − με

State 1 was the state with the ‘upper’ energy E0 + με, as the energy that’s associated with the electric dipole moment of the ammonia molecule was added to the (average) energy of the system (i.e. E0). State 2 was the state with the ‘lower’ energy level E0 − με, implying the electric dipole moment is opposite to that of state 1. The field could be dynamic or static, i.e. varying in time, or not, but it was the same Hamiltonian. Of course, solving the differential equations with non-constant Hamiltonian coefficients was much more difficult, but we did it.

We also have a “flip-flop amplitude” – I am using Feynman’s term for it 🙂 – in that Hamiltonian above. So that’s an amplitude for the system to go from one state to another in the absence of an electric field. For our ammonia molecule, and our hydrogen molecule too, it was associated with the energy that’s needed to tunnel through a potential barrier and, as we explained in our post on virtual particles, that’s usually associated with a negative value for the energy or, what amounts to the same, with a purely imaginary momentum, so that’s why we write minus A in the matrix. However, don’t rack your brain over this as it is a bit of a convention, really: putting +A would just result in a phase difference for the amplitudes, but it would give us the same probabilities. If it helps you, you may also like to think of our nitrogen atom (or our electron when we were talking about the hydrogen system) as borrowing some energy from the system so as to be able to tunnel through and, hence, temporarily reducing the energy of the system by an amount that’s equal to A. In any case… We need to move on.

As for these probabilities, we could see – after solving the whole thing, of course (and that was very complicated, indeed) – that they’re going up and down just like in that graph above. The only difference was that we were talking induced transitions here, and so the frequency of the transitions depended on με0, i.e. on the strength of the field, and on the magnitude of the dipole moment itself of course, rather than on A. In fact, to be precise, we found that the ratio between the average periods was equal to:

Tinduced/Tspontaneous = [(π·ħ)/(2με0)]/[(π·ħ)/(2A)] = A/με0
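The ratio formula is easy to sanity-check: the π·ħ factors cancel and only A/με0 survives. Here is the arithmetic with made-up values for A and με0 (in units where ħ = 1):

```python
import math

# Illustrative values only (units where hbar = 1):
hbar = 1.0
A = 2.0          # flip-flop energy
mu_eps0 = 0.5    # dipole moment times field strength

# Average periods of the spontaneous and induced transitions:
T_spontaneous = (math.pi * hbar) / (2 * A)
T_induced = (math.pi * hbar) / (2 * mu_eps0)

# The pi*hbar factors cancel, leaving A/mu_eps0.
assert math.isclose(T_induced / T_spontaneous, A / mu_eps0)
```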

But… Well… I need to move on. I just wanted to present the general philosophy behind these things. For a simple electron, which, as you know, is either in an ‘up’ or a ‘down’ state – vis-à-vis a certain direction, of course – the Hamiltonian will be very simple. As usual, we’ll assume that direction is the z-direction. Of course, this ‘z-direction’ is just a short-hand for our reference frame: we decide to measure something in this or that direction, and we call that direction the z-direction.

Fine. Next. As our z-direction is currently our reference direction, we assume it’s the direction of some magnetic field, which we’ll write as B. So the components of B in the x- and y-direction are zero: all of the field is in the z-direction, so B = Bz. [Note that the magnetic field is not some quantum-mechanical quantity, and so we can have all of the magnitude in one direction. It’s just a classical thing.]

Fine. Next. The spin or the angular momentum of our electron is, of course, associated with some magnetic dipole moment, which we’ll write as μ. [And, yes, sometimes we use this symbol for an electric dipole moment and, at other times, for a magnetic dipole moment, like here. I can’t help that. You don’t want a zillion different symbols anyway.] Hence, just like we had two energy levels E0 ± με, we’ll now have two energy levels E0 ± μBz. We’ll just shift the energy scale so E0 = 0, as per our convention. [Feynman glosses over it, but this is a bit of a tricky point, really. Usually, one includes the rest mass, or rest energy, in the E in the argument of the wavefunction, but so here we’re equating m0c² with zero. Tough! However, you can think of this re-definition of the zero energy point as a phase shift in all wavefunctions, so it shouldn’t matter when taking the absolute square or looking at interference. Still… Think about it.]

Fine. Next. Well… We’ve got two energy levels, −μBz and +μBz, but no A to put in our Hamiltonian, so the following Hamiltonian may or may not make sense:

H11 = −μBz, H12 = 0, H21 = 0, and H22 = +μBz

Hmm… Why is there no flip-flop amplitude? Well… You tell me. Why would we have one? It’s not like the ammonia or hydrogen molecule here, so… Well… Where’s the potential barrier? Of course, you’ll now say that we can imagine it takes some energy to change the spin of an electron, like we were doing with those induced transitions. But… Yes and no. We’ve been selecting particles using our Stern-Gerlach apparatus, or that state selector for our maser, but were we actually flip-flopping things? The changing electric field in our resonant cavity changes the transition frequency but, when everything is said and done, the transition itself has to do with that A. You’ll object again: a pure stationary state? So the electron is either ‘up’ or ‘down’, and it stays like that forever. Really?

Well… I am afraid I have to cut you off, because otherwise we’ll never get to the end. Stop being so critical. 🙂 Well… No. You should be critical. However, you’re right in saying that, when everything is said and done, these are all hypotheses that may or may not make sense. However, Feynman is also right when he says that, ultimately, the proof of the pudding is in the eating: at the end of this long, winding story, we’ll get some solutions that can be tested in experiment: they should give predictions, or probabilities rather, that agree with experiment. As Feynman writes: “[The objective is to find] ‘equations of motion for the spin states’ of an electron in a magnetic field. We guess at them by making some physical argument, but the real test of any Hamiltonian is that it should give predictions in agreement with experiment. According to any tests that have been made, these equations are right. In fact, although we made our arguments only for constant fields, the Hamiltonian we have written is also right for magnetic fields which vary with time.”

So let’s get on with it: let’s assume the Hamiltonian above is the one we should use for a magnetic field in the z-direction, and that we have those pure stationary states with the energies they have, i.e. −μBz and +μBz. One minor technical point, perhaps: you may wonder why we write what we write and do not switch −μBz and +μBz in the Hamiltonian – so as to reflect the ‘upper’ and ‘lower’ energies in those other Hamiltonians. The answer is: it’s just convention. We choose state 1 to be the ‘up’ state, so its spin is ‘up’, but the magnetic moment is opposite to the spin, so the ‘up’ state has the minus sign. Full stop. Onwards!

We’re now going to assume our B field is not in the z-direction. Hence, its Bx and By components are not zero. What we want to see now is what the Hamiltonian looks like. [Yes. Sorry for regularly reminding you of what it is that we are trying to do.] Here you need to be creative. Whatever the direction of the field, we need to be consistent. If that Hamiltonian makes sense, i.e. if we’d have two pure stationary states with the energies they have if the field is in the z-direction, then it’s rather obvious that, if the field is in some other direction, we should still be able to find two stationary states with exactly the same energy levels. As Feynman puts it: “We could have chosen our z-axis in its direction, and we would have found two stationary states with the energies ±μBz. Just choosing our axes in a different direction doesn’t change the physics. Our description of the stationary states will be different, but their energies will still be ±μBz.” Right. And because the magnetic field is a classical quantity, the relevant magnitude is just the square root of the sum of the squares of its components, so we write:

E = ±μB = ±μ·√(Bx² + By² + Bz²)

So we have the energies now, but we want the Hamiltonian coefficients. Here we need to work backwards. The general solution for any system with constant Hamiltonian coefficients always involves two stationary states with energy levels which we denoted as EI and EII, indeed. Let me remind you of the formula for them:

EI = (H11 + H22)/2 + √[(H11 − H22)²/4 + H12H21]
EII = (H11 + H22)/2 − √[(H11 − H22)²/4 + H12H21]

[If you want to double-check and see how we get those, it’s probably best to check the original text, i.e. Feynman’s Lecture on the Ammonia Maser, Section 2.]
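Those EI and EII formulas are just the two eigenvalues of the 2×2 Hamiltonian matrix, so we can check them against a generic eigenvalue routine. The coefficients below are arbitrary made-up numbers (chosen so the matrix is Hermitian, i.e. H21 = H12*):

```python
import numpy as np

# Arbitrary illustrative Hamiltonian coefficients (Hermitian: H21 = H12*):
H11, H22 = 1.0, -0.4
H12 = 0.3 - 0.2j
H21 = np.conj(H12)

# The formulas for the stationary-state energies:
avg = (H11 + H22) / 2
root = np.sqrt(((H11 - H22) / 2) ** 2 + abs(H12) ** 2)   # H12*H21 = |H12|^2
E_I, E_II = avg + root, avg - root

# Compare with a direct eigenvalue computation (ascending order):
H = np.array([[H11, H12], [H21, H22]])
eigs = np.linalg.eigvalsh(H)
assert np.allclose(eigs, [E_II, E_I])
```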

So how do we connect the two sets of equations? How do we get the Hij coefficients out of these square roots and all of that? [Again, I am just reminding you of what it is that we are trying to do.] We’ve got two equations and four coefficients, so… Well… There are some rules we can apply. For example, we know that any Hij coefficient must equal Hji*, i.e. the complex conjugate of Hji. [For i = j, this just says the diagonal coefficients must be real.] But… Hey! We can already see that H11 must be equal to minus H22. Just compare the two sets. That comes out as a condition, clearly. Now that simplifies our square roots above significantly. Also noting that the absolute square of a complex number is equal to the product of the number with its complex conjugate, the two equations above imply the following:

H11² + |H12|² = μ²·(Bx² + By² + Bz²)

Let’s see what this means if we’d apply this to our ‘special’ direction once more, so let’s assume the field is in the z-direction once again. Perhaps we can get some more ‘conditions’ out of that. If the field is in the z-direction itself, the equation above reduces to:

μ²Bz² + |H12|² = μ²Bz²

That makes it rather obvious that, in this special case at least, |H12|² = 0. You’ll say: that’s nothing new, because we had those zeroes in that Hamiltonian already. Well… Yes and no! Here we need to introduce another constraint. I’ll let Feynman explain it: “We are going to make an assumption that there is a kind of superposition principle for the terms of the Hamiltonian. More specifically, we want to assume that if two magnetic fields are superposed, the terms in the Hamiltonian simply add – if we know the Hij for a pure Bz and we know the Hij for a pure Bx, then the Hij for both Bz and Bx together is simply the sum. This is certainly true if we consider only fields in the z-direction – if we double Bz, then all the Hij are doubled. So let’s assume that H is linear in the field B.”

Now, the assumption that H12 must be some linear combination of Bx, By and Bz, combined with the |H12|² = 0 condition when all of the magnitude of the field is in the z-direction, tells us that H12 has no term in Bz. It may have – in fact, it probably should have – terms in Bx and By, but not in Bz. That does take us a step further.

Next assumption. The next assumption is that, regardless of the direction of the field, H11 and H22 don’t change: they remain what they are, so we write: H11 = −μBz and H22 = +μBz. Now, you may think that’s no big deal, because we defined the 1 and 2 states in terms of our z-direction, but… Well… We did so assuming all of the magnitude was in the z-direction.

You’ll say: so what? Now we’ve got some field in the x- and y-directions, so that shouldn’t impact the amplitude to be in a state that’s associated with the z-direction. Well… I should say two things here. First, we’re not talking about the amplitude to be in state 1 or state 2. These amplitudes are those C1 and C2 functions that we can find once we’ve got those Hamiltonian coefficients. Second, you’d surely expect that some field in the x- and y-directions should have some impact on those C1 and C2 functions. Of course!

In any case, I’ll let you do some more thinking about this assumption. Again, we need to move on, so let’s just go along with it. At this point, Feynman’s had enough of the assumptions, and so he boldly proposes a solution, which incorporates the H11 = −μBz and H22 = +μBz assumption. Let me quote him:

Of course, this leaves us gasping for breath. A simple guess? One can plug it in, of course, and see it makes sense – rather quickly, really. But… Nothing linear is going to come out of that expression for |H12|², right? We’ll have to take a square root to find that H12 = ±μ·(Bx² + By²)^1/2. Well… No. We’re working in the complex space here, remember? So we can use complex solutions. Feynman notes the same and immediately proposes the right solution:

H12 = −μ·(Bx − i·By) and, hence, H21 = H12* = −μ·(Bx + i·By)

To make a long story short, we get what we wanted, i.e. those “equations of motion for the spin states” of an electron in a magnetic field. I’ll let Feynman summarize the results:

It’s truly a Great Result, especially because, as Feynman notes, (almost) any problem about two-state systems can be solved by making a mathematical analog to the system of the spinning electron. We’ll illustrate that as we move ahead. For now, however, I think we’ve had enough, don’t you? 🙂
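Before moving on, we can check the whole construction numerically: with H11 = −μBz, H22 = +μBz, H12 = −μ(Bx − iBy) and H21 = −μ(Bx + iBy), the two stationary states should come out with energies ±μ·√(Bx² + By² + Bz²), whatever the direction of the field. A minimal sketch with an arbitrary (made-up) field:

```python
import numpy as np

mu = 1.0                         # magnetic moment (illustrative units)
B = np.array([0.3, -0.5, 0.8])   # an arbitrary field direction
Bx, By, Bz = B

# The Hamiltonian coefficients derived above:
# H11 = -mu*Bz, H12 = -mu*(Bx - i*By), H21 = -mu*(Bx + i*By), H22 = +mu*Bz
H = -mu * np.array([[Bz, Bx - 1j * By],
                    [Bx + 1j * By, -Bz]])

# The stationary-state energies should be +/- mu*|B|, regardless of
# the field's direction (eigvalsh returns them in ascending order).
energies = np.linalg.eigvalsh(H)
assert np.allclose(energies, [-mu * np.linalg.norm(B), mu * np.linalg.norm(B)])
```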

We’ve made a big leap here, and perhaps we should re-visit some of the assumptions and conventions – later, that is. As for now, let’s try to work with it. As mentioned above, Feynman shied away from the grand mathematical approach to it. Indeed, the whole argument might have been somewhat fuzzy, but at least we got a good feel for the solution. In my next post, I’ll abstract away from it, as Feynman does in his next Lecture, where he introduces the so-called Pauli spin matrices, which are like Lego building blocks for all of the matrix algebra which – I must assume you sort of sense that’s coming, no? 🙂 – we’ll need to master so as to understand what’s going on.

So… That’s it for today. I hope you understood “what it is that we’re trying to do”, and that you’ll have some fun working on it on your own now. 🙂

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:
