# The reality of the wavefunction

If you haven’t read any of my previous posts on the geometry of the wavefunction (this link goes to the most recent one of them), then don’t attempt to read this one. It brings too much stuff together to be comprehensible. In fact, I am not even sure if I am going to understand what I write myself. 🙂 [OK. Poor joke. Acknowledged.]

Just to recap the essentials: I part ways with mainstream physicists in regard to the interpretation of the wavefunction. For mainstream physicists, the wavefunction is just some mathematical construct. Nothing real. Of course, I acknowledge mainstream physicists have very good reasons for that, but… Well… I believe that, if there is interference, or diffraction, then something must be interfering, or something must be diffracting. I won’t dwell on this because… Well… I have done that too many times already. My hypothesis is that the wavefunction is, in effect, a rotating field vector, so it’s just like the electric field vector of a (circularly polarized) electromagnetic wave (illustrated below).

Of course, it must be different, and it is. First, the (physical) dimension of the field vector of the matter-wave must be different. So what is it? Well… I am tempted to associate the real and imaginary component of the wavefunction with a force per unit mass (as opposed to the force per unit charge dimension of the electric field vector). Of course, the newton/kg dimension reduces to the dimension of acceleration (m/s²), so that’s the dimension of a gravitational field.

Second, I am also tempted to think that this gravitational disturbance causes an electron (or any matter-particle) to move about some center, and I believe it does so at the speed of light. In contrast, electromagnetic waves do not involve any mass: they’re just an oscillating field. Nothing more. Nothing less. Why would I believe there must still be some pointlike particle involved? Well… As Feynman puts it: “When you do find the electron some place, the entire charge is there.” (Feynman’s Lectures, III-21-4) So… Well… That’s why.

The third difference is one that I thought of only recently: the plane of the oscillation cannot be perpendicular to the direction of motion of our electron, because then we can’t explain the direction of its magnetic moment, which is either up or down when traveling through a Stern-Gerlach apparatus. I am more explicit on that in the mentioned post, so you may want to check there. 🙂

I wish I mastered the software to make animations such as the one above (for which I have to credit Wikipedia), but I don’t. You’ll just have to imagine it. That’s great mental exercise, so… Well… Just try it. 🙂

Let’s now think about rotating reference frames and transformations. If the z-direction is the direction along which we measure the angular momentum (or the magnetic moment), then the up-direction will be the positive z-direction. We’ll also assume the y-direction is the direction of travel of our elementary particle, and let’s just consider an electron here so we’re more real. 🙂 So we’re in the reference frame that Feynman used to derive the transformation matrices for spin-1/2 particles (or for two-state systems in general). His ‘improved’ Stern-Gerlach apparatus, which I’ll refer to as a beam splitter, illustrates this geometry.

So I think the magnetic moment, or the angular momentum, really, comes from an oscillatory motion in the x- and y-directions. One is the real component (the cosine function) and the other is the imaginary component (the sine function), as illustrated below.

So the crucial difference with the animations above (which illustrate left- and right-handed polarization respectively) is that we, somehow, need to imagine that the circular motion is not in the xz-plane, but in the yz-plane. Now what happens if we change the reference frame?

Well… That depends on what you mean by changing the reference frame. Suppose we’re looking in the positive y-direction, so that’s the direction in which our particle is moving. Then we might imagine what it would look like if we made a 180° turn and looked at the situation from the other side, so to speak. Now, I did a post on that earlier this year, which you may want to re-read. When we’re looking at the same thing from the other side (from the back side, so to speak), we will want to use our familiar reference frame. So we will want to keep the z-axis as it is (pointing upwards), and we will also want to define the x- and y-axes using the familiar right-hand rule for defining a coordinate frame. So our new x-axis and our new y-axis will be the same as the old x- and y-axes but with the sign reversed. In short, we’ll have the following mini-transformation: (1) z’ = z, (2) x’ = −x, and (3) y’ = −y.

So… Well… If we’re effectively looking at something real that was moving along the y-axis, then it will now still be moving along the y’-axis, but in the negative direction. Hence, our elementary wavefunction e^(iθ) = cosθ + i·sinθ will transform into cos(−θ) + i·sin(−θ) = cosθ − i·sinθ. It’s the same wavefunction. We just… Well… We just changed our reference frame. We didn’t change reality.
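If you want to check that sign flip numerically, here is a minimal sketch in plain Python (the angle is an arbitrary pick, nothing physical about it):

```python
import cmath
import math

# Reversing the sense of rotation (θ → −θ) turns e^(iθ) = cosθ + i·sinθ
# into its complex conjugate cosθ − i·sinθ.
theta = 0.7  # arbitrary phase, in radians

psi = cmath.exp(1j * theta)            # wavefunction value in the old frame
psi_flipped = cmath.exp(-1j * theta)   # same thing seen from the 'other side'

# The flipped value is just cosθ − i·sinθ, i.e. the complex conjugate:
assert abs(psi_flipped - (math.cos(theta) - 1j * math.sin(theta))) < 1e-12
assert abs(psi_flipped - psi.conjugate()) < 1e-12

# Probabilities are untouched: |ψ|² is the same in both frames.
assert abs(abs(psi) ** 2 - abs(psi_flipped) ** 2) < 1e-12
```

The last assertion is the point: conjugating the wavefunction changes nothing about the probabilities, which is why we can say we didn’t change reality.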

Now you’ll cry wolf, of course, because we just went through all that transformational stuff in our last post. To be specific, we presented the following transformation matrix for a rotation about the z-axis:

Now, if φ is equal to 180° (so that’s π in radians), then these e^(iφ/2) and e^(−iφ/2) factors are equal to e^(iπ/2) = +i and e^(−iπ/2) = −i respectively. Hence, our e^(iθ) = cosθ + i·sinθ becomes…

Hey! Wait a minute! We’re talking about two very different things here, right? The e^(iθ) = cosθ + i·sinθ function is an elementary wavefunction which, we presume, describes some real-life particle (we talked about an electron with its spin in the up-direction), while these transformation matrices are to be applied to amplitudes describing… Well… Either an up- or a down-state, right?

Right. But… Well… Is it so different, really? Suppose our e^(iθ) = cosθ + i·sinθ wavefunction describes an up-electron. Then we still have to apply one of those phase factors, say the e^(−iπ/2) = −i factor, right? So we get a new wavefunction that will be equal to e^(−iπ/2)·e^(iθ) = −i·e^(iθ) = −i·cosθ − i²·sinθ = sinθ − i·cosθ, right? So how can we reconcile that with the cosθ − i·sinθ function we thought we’d find?

We can’t. So… Well… Either my theory is wrong or… Well… Feynman can’t be wrong, can he? I mean… It’s not only Feynman here. We’re talking all mainstream physicists here, right?

Right. But think of it. Our electron in that thought experiment does, effectively, make a turn of 180°, so it is going in the other direction now! That’s more than just… Well… Going around the apparatus and looking at stuff from the other side.

Hmm… Interesting. Let’s think about the difference between the sinθ − i·cosθ and cosθ − i·sinθ functions. First, note that they will give us the same probabilities: the square of the absolute value of both complex numbers is the same. [It’s equal to 1 because we didn’t bother to put a coefficient in front.] Secondly, we should note that the sine and cosine functions are essentially the same. They just differ by a phase factor: cosθ = sin(θ + π/2) and −sinθ = cos(θ + π/2). Let’s see what we can do with that. We can write the following, for example:

sinθ − i·cosθ = −cos(θ + π/2) − i·sin(θ + π/2) = −[cos(θ + π/2) + i·sin(θ + π/2)] = −e^(i·(θ + π/2))
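This identity is easy to verify numerically; here is a small sketch, with a handful of arbitrary test angles:

```python
import cmath
import math

# Check the identity sinθ − i·cosθ = −e^(i·(θ + π/2)) at arbitrary angles.
for theta in (0.0, 0.3, 1.0, 2.5, -1.2):
    lhs = math.sin(theta) - 1j * math.cos(theta)
    rhs = -cmath.exp(1j * (theta + math.pi / 2))
    assert abs(lhs - rhs) < 1e-12
    # Both sides sit on the unit circle, so the probabilities match e^(iθ)'s:
    assert abs(abs(lhs) - 1.0) < 1e-12
```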

Well… I guess that’s something at least! The e^(iθ) and −e^(i·(θ + π/2)) functions differ by a phase shift and a minus sign so… Well… That’s what it takes to reverse the direction of an electron. 🙂 Let us mull over that in the coming days. As I mentioned, these more philosophical topics are not easily exhausted. 🙂

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

# Working with base states and Hamiltonians

I wrote a pretty abstract post on working with amplitudes, followed by more of the same, and then illustrated how it worked with a practical example (the ammonia molecule as a two-state system). Now it’s time for even more advanced stuff. Here we’ll show how to switch to another set of base states, and what that implies in terms of the Hamiltonian matrix and all of those equations, like those differential equations and, of course, the wavefunctions (or amplitudes) themselves. In short, don’t try to read this if you haven’t done your homework. 🙂

Let me continue the practical example, i.e. the example of the NH3 molecule, as shown below. We abstracted away from all of its motion, except for its angular momentum around its axis of symmetry (or its spin, you might want to say, but that’s rather confusing, because we shouldn’t be using that term for the classical situation we’re presenting here). That angular momentum doesn’t change from state | 1 ⟩ to state | 2 ⟩. What’s happening here is that we allow the nitrogen atom to flip through to the other side, so it tunnels through the plane of the hydrogen atoms, thereby going through an energy barrier.

It’s important to note that we do not specify what that energy barrier consists of. In fact, the illustration above may be misleading, because it presents all sorts of things we don’t need right now, like the electric dipole moment, or the center of mass of the molecule, which actually doesn’t change, unlike what’s suggested above. We just put them there to remind you that (a) quantum physics is based on physics, so there’s lots of stuff involved, and (b) we’ll need that electric dipole moment later. But, as we’re introducing it, note that we’re using the μ symbol for it, which is usually reserved for the magnetic dipole moment, which is what you’d usually think of when thinking about the angular momentum or the spin, both in classical as well as in quantum mechanics. So the direction of rotation of our molecule, as indicated by the arrow around the axis at the bottom, and the μ in the illustration itself, have nothing to do with each other. So now you know. Also, as we’re talking symbols, you should note the use of ε to represent an electric field. We’d usually write the electric dipole moment and the electric field vector as p and E respectively, but we use those symbols for linear momentum and energy here, and so we borrowed μ and ε from our study of magnets. 🙂

The point to note is that, when we’re talking about the ‘up’ or ‘down’ state of our ammonia molecule, you shouldn’t think of it as ‘spin up’ or ‘spin down’. It’s not like that: it’s just the nitrogen atom being beneath or above the plane of the hydrogen atoms, and we define beneath or above assuming the direction of spin actually stays the same!

OK. That should be clear enough. In quantum mechanics, the situation is analyzed by associating two energy levels with the ammonia molecule, E0 + A and E0 − A, so they are separated by an amount equal to 2A. This pair of energy levels has been confirmed experimentally: they are separated by an energy amount equal to 1×10⁻⁴ eV, so that’s less than a ten-thousandth of the energy of a photon in the visible-light spectrum. Therefore, a molecule that makes a transition will emit a photon in the microwave range. The principle of a maser is based on exciting the NH3 molecules, and then inducing transitions. One can do that by applying an external electric field. The mechanism works pretty much like what we described when discussing the tunneling phenomenon: an external force field will change the energy factor in the wavefunction, by adding potential energy (let’s say an amount equal to U) to the total energy, which usually consists of the internal (Eint) and kinetic (p²/(2m) = m·v²/2) energy only. So now we write a·e^(−i[(Eint + m·v²/2 + U)·t − p∙x]/ħ) instead of a·e^(−i[(Eint + m·v²/2)·t − p∙x]/ħ).
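As a quick sanity check on that microwave claim, here is a sketch converting the 1×10⁻⁴ eV separation into a photon frequency (the constants are the standard CODATA values; the 10⁻⁴ eV figure is the one quoted above):

```python
# Back-of-the-envelope check that a 2A ≈ 1×10⁻⁴ eV transition sits in the
# microwave range.
h_planck = 6.62607015e-34   # Planck constant, J·s
eV = 1.602176634e-19        # joules per electron-volt

E_photon = 1e-4 * eV        # energy released in the transition, in joules
f = E_photon / h_planck     # photon frequency, f = E/h

# ≈ 24 GHz, i.e. microwaves indeed (the famous ammonia line sits near 24 GHz).
assert 1e9 < f < 1e11
print(f / 1e9)  # frequency in GHz
```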

Of course, a·e^(−i·(E·t − p∙x)/ħ) is an idealized wavefunction only, or a Platonic wavefunction, as I jokingly referred to it in my previous post. A real wavefunction has to deal with uncertainties: we don’t know E and p. At best, we have a discrete set of possible values, like E0 + A and E0 − A in this case. But it might as well be some range, which we denote as ΔE and Δp, and then we need to make some assumption in regard to the probability density function that we’re going to associate with it. But I am getting ahead of myself here. Back to NH3, i.e. our simple two-state system. Let’s first do some mathematical gymnastics.

#### Choosing another representation

We have two base states in this system: ‘up’ or ‘down’, which we denoted as base state | 1 ⟩ and base state | 2 ⟩ respectively. You’ll also remember we wrote the amplitude to find the molecule in either one of these two states as:

• C1 = ⟨ 1 | ψ ⟩ = (1/2)·e^(−(i/ħ)·(E0 − A)·t) + (1/2)·e^(−(i/ħ)·(E0 + A)·t)
• C2 = ⟨ 2 | ψ ⟩ = (1/2)·e^(−(i/ħ)·(E0 − A)·t) − (1/2)·e^(−(i/ħ)·(E0 + A)·t)

That gave us the following probabilities:

If our molecule can be in two states only, and it starts off in one, then the probability that it will remain in that state will gradually decline, while the probability that it flips into the other state will gradually increase. So that’s what’s shown above, and it makes perfect sense.
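Here is a small numeric sketch of those flip-flopping probabilities, with ħ set to 1 and made-up values for E0 and A (the functional forms for C1 and C2 are the ones above; only the numbers are illustrative):

```python
import cmath
import math

# Units chosen so ħ = 1; E0 and A are illustrative, not the real ammonia values.
hbar, E0, A = 1.0, 10.0, 1.0

def C1(t):
    return 0.5 * cmath.exp(-1j * (E0 - A) * t / hbar) + 0.5 * cmath.exp(-1j * (E0 + A) * t / hbar)

def C2(t):
    return 0.5 * cmath.exp(-1j * (E0 - A) * t / hbar) - 0.5 * cmath.exp(-1j * (E0 + A) * t / hbar)

for t in (0.0, 0.4, 1.1, 2.0):
    P1, P2 = abs(C1(t)) ** 2, abs(C2(t)) ** 2
    assert abs(P1 - math.cos(A * t / hbar) ** 2) < 1e-12  # P1 = cos²(At/ħ)
    assert abs(P2 - math.sin(A * t / hbar) ** 2) < 1e-12  # P2 = sin²(At/ħ)
    assert abs(P1 + P2 - 1.0) < 1e-12                     # probabilities add up to one
```

Note how E0 drops out of the probabilities entirely: only A, i.e. the energy splitting, sets the pace of the flip-flopping.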

Now, you may think there is only one possible set of base states here, as it’s not like measuring spin along this or that direction. These two base states are much simpler: it’s a matter of the nitrogen being beneath or above the plane of the hydrogens, and we’re only interested in the angular momentum of the molecule around its axis of symmetry to help us define what’s ‘up’ and what’s ‘down’. That’s all. However, from a quantum-math point of view, we can actually choose some other ‘representation’. Now, those base state vectors | i ⟩ are a bit tough to understand, so let’s, in our first go at it, use the coefficients Ci, which are ‘proper’ amplitudes. We’ll define two new coefficients, CI and CII, which (you’ve guessed it) we’ll associate with an alternative set of base states | I ⟩ and | II ⟩. We’ll define them as follows:

• CI = ⟨ I | ψ ⟩ = (1/√2)·(C1 − C2)
• CII = ⟨ II | ψ ⟩ = (1/√2)·(C1 + C2)

[The (1/√2) factor is there because of the normalization condition, obviously. We could take it out and then do the whole analysis and plug it in later, as Feynman does, but I prefer to do it this way, as it reminds us that our wavefunctions are to be related to probabilities at some point in time. :-)]

Now, you can easily check that, when substituting our C1 and C2 for those wavefunctions above, we get:

• CI = ⟨ I | ψ ⟩ = (1/√2)·e^(−(i/ħ)·(E0 + A)·t)
• CII = ⟨ II | ψ ⟩ = (1/√2)·e^(−(i/ħ)·(E0 − A)·t)

Note that the way the plus and minus signs switch here makes things not so easy to remember, but that’s how it is. 🙂 So we’ve got our stationary state solutions here, which are associated with probabilities that do not vary in time. [In case you wonder: that’s the definition of a ‘stationary state’: we’ve got something with a definite energy and, therefore, the probability that’s associated with it is some constant.] Of course, now you’ll cry wolf and say: these wavefunctions don’t actually mean anything, do they? They don’t describe how ammonia actually behaves, do they? Well… Yes and no. The base states I and II actually do allow us to describe whatever we need to describe. To be precise, describing the state ψ in terms of the base states | 1 ⟩ and | 2 ⟩, i.e. writing | ψ ⟩ as:
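You can check the stationary-state claim numerically too; a sketch, again with ħ = 1 and illustrative E0 and A:

```python
import cmath

# CI = (1/√2)·(C1 − C2) should collapse to a single-energy exponential,
# so its probability is flat in time. Illustrative numbers, ħ = 1.
hbar, E0, A = 1.0, 10.0, 1.0
s = 2 ** -0.5  # 1/√2

def C1(t):
    return 0.5 * cmath.exp(-1j * (E0 - A) * t / hbar) + 0.5 * cmath.exp(-1j * (E0 + A) * t / hbar)

def C2(t):
    return 0.5 * cmath.exp(-1j * (E0 - A) * t / hbar) - 0.5 * cmath.exp(-1j * (E0 + A) * t / hbar)

for t in (0.0, 0.7, 3.2):
    CI = s * (C1(t) - C2(t))
    CII = s * (C1(t) + C2(t))
    assert abs(CI - s * cmath.exp(-1j * (E0 + A) * t / hbar)) < 1e-12   # definite energy E0 + A
    assert abs(CII - s * cmath.exp(-1j * (E0 - A) * t / hbar)) < 1e-12  # definite energy E0 − A
    assert abs(abs(CI) ** 2 - 0.5) < 1e-12   # probability does not vary in time
    assert abs(abs(CII) ** 2 - 0.5) < 1e-12
```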

| Ď âŞ = | 1 âŞ C1Â +Â | 2 âŞ C2,

is mathematically equivalent to writing:

| Ď âŞ = | I âŞ CIÂ +Â | II âŞ CII.

We can easily show that, even if it requires some gymnastics indeed. But then you should look at it as just another exercise in quantum math and so, yes, please do go through the logic. First note that the CI = ⟨ I | ψ ⟩ = (1/√2)·(C1 − C2) and CII = ⟨ II | ψ ⟩ = (1/√2)·(C1 + C2) expressions are equivalent to:

âŠ I | Ď âŞ = (1/â2)Âˇ[âŠ 1 | Ď âŞÂ â âŠ 2 | Ď âŞ] andÂ âŠ II | Ď âŞ = (1/â2)Âˇ[âŠ 1 | Ď âŞ + âŠ 2 | Ď âŞ]

Now, using our quantum-math rules, we can abstract the | ψ ⟩ away, and so we get:

âŠ I | = (1/â2)Âˇ[âŠ 1 |Â â âŠ 2 |] and âŠ II | = (1/â2)Âˇ[âŠ 1 | + âŠ 2 |]

We could also have applied the complex conjugate rule to the expression for ⟨ I | ψ ⟩ above (the complex conjugate of a sum (or a product) is the sum (or the product) of the complex conjugates), and then abstracted ⟨ ψ | away, so as to write:

| I âŞÂ = (1/â2)Âˇ[| 1 âŞ â | 2 âŞ] and |Â II âŞ = (1/â2)Âˇ[| 1 âŞ + |Â 2 âŞ]

OK. So what? We’ve only shown that our new base states can be written as combinations of the old ones, just like the CI and CII coefficients. What proves they are base states? Well… The first rule of quantum math actually defines base states as states i respecting the following condition:

âŠ i | jâŞ = âŠ j | iâŞ =Â Î´ij, with Î´ijÂ =Â Î´jiÂ is equal to 1 if i = j, and zero if i â Â j

We can prove that as follows. First, use the | I ⟩ = (1/√2)·[| 1 ⟩ − | 2 ⟩] and | II ⟩ = (1/√2)·[| 1 ⟩ + | 2 ⟩] result above to check the following:

• âŠ I | I âŞÂ = (1/â2)Âˇ[âŠ I | 1 âŞ â âŠ I | 2 âŞ]
• âŠ II | II âŞÂ = (1/â2)Âˇ[âŠ II | 1 âŞ + âŠ II | 2 âŞ]
• âŠ II | I âŞÂ = (1/â2)Âˇ[âŠ II | 1 âŞ â âŠ II | 2 âŞ]
• âŠ I | II âŞÂ = (1/â2)Âˇ[âŠ I | 1 âŞ + âŠ I | 2 âŞ]

Now we need to find those ⟨ I | i ⟩ and ⟨ II | i ⟩ amplitudes. To do that, we can use the ⟨ I | ψ ⟩ = (1/√2)·[⟨ 1 | ψ ⟩ − ⟨ 2 | ψ ⟩] and ⟨ II | ψ ⟩ = (1/√2)·[⟨ 1 | ψ ⟩ + ⟨ 2 | ψ ⟩] equations and substitute:

• âŠ I | 1 âŞ = (1/â2)Âˇ[âŠ 1 | 1 âŞÂ â âŠ 2 | 1 âŞ] = (1/â2)
• âŠ I | 2 âŞ = (1/â2)Âˇ[âŠ 1 | 2 âŞÂ â âŠ 2 | 2 âŞ] = â(1/â2)
• âŠ II | 1 âŞ = (1/â2)Âˇ[âŠ 1 | 1 âŞ + âŠ 2 | 1 âŞ] =Â  (1/â2)
• âŠ II | 2 âŞ = (1/â2)Âˇ[âŠ 1 | 2 âŞ + âŠ 2 | 2 âŞ] =Â  (1/â2)

So we get:

• âŠ I | I âŞÂ = (1/â2)Âˇ[âŠ I | 1 âŞ â âŠ I | 2 âŞ] =Â (1/â2)Âˇ[(1/â2) +Â (1/â2)] =Â (2/(â2Âˇâ2) = 1
• âŠ II | II âŞÂ = (1/â2)Âˇ[âŠ II | 1 âŞ + âŠ II | 2 âŞ] = (1/â2)Âˇ[(1/â2) + (1/â2)] = 1
• âŠ II | I âŞÂ = (1/â2)Âˇ[âŠ II | 1 âŞ â âŠ II | 2 âŞ] =Â (1/â2)Âˇ[(1/â2) â (1/â2)] =Â 0
• âŠ I | II âŞÂ = (1/â2)Âˇ[âŠ I | 1 âŞ + âŠ I | 2 âŞ] = (1/â2)Âˇ[(1/â2) â (1/â2)] = 0

So… Well… Yes. That’s equivalent to:

âŠ I | I âŞ = âŠ II | II âŞ = 1 and âŠ I | II âŞ = âŠ II | I âŞ = 0

Therefore, we can confidently say that our | I ⟩ = (1/√2)·[| 1 ⟩ − | 2 ⟩] and | II ⟩ = (1/√2)·[| 1 ⟩ + | 2 ⟩] state vectors are, effectively, base vectors in their own right. Now, we’re going to have to grow very fond of matrices, so let me write our ‘definition’ of the new base vectors as a matrix formula:

You’ve seen this before. The two-by-two matrix is the transformation matrix for a rotation of a state filtering apparatus about the y-axis, over an angle equal to (minus) 90 degrees, when only two states are involved:
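The CI and CII definitions above pin that matrix down completely, so here is a quick numeric check (a sketch in plain Python; the sample amplitudes are made up):

```python
# The change-of-basis matrix implied by CI = (C1 − C2)/√2, CII = (C1 + C2)/√2,
# written out with plain lists (no numpy needed for a 2×2 check).
s = 2 ** -0.5
U = [[s, -s],   # row for CI
     [s,  s]]   # row for CII

# Unitarity: U·U† = identity (U is real here, so U† is just the transpose).
for i in range(2):
    for j in range(2):
        entry = sum(U[i][k] * U[j][k] for k in range(2))
        expected = 1.0 if i == j else 0.0
        assert abs(entry - expected) < 1e-12

# Applied to sample amplitudes (C1, C2), it returns (CI, CII):
C1, C2 = 0.6 + 0.2j, 0.3 - 0.7j
CI = U[0][0] * C1 + U[0][1] * C2
CII = U[1][0] * C1 + U[1][1] * C2
assert abs(CI - s * (C1 - C2)) < 1e-12
assert abs(CII - s * (C1 + C2)) < 1e-12
```

Unitarity is the essential property here: it is what guarantees that the total probability |CI|² + |CII|² equals |C1|² + |C2|², whichever representation we work in.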

You’ll wonder why we should go through all that trouble. Part of it, of course, is to just learn these tricks. The other reason, however, is that it does simplify calculations. Here I need to remind you of the Hamiltonian matrix and the set of differential equations that comes with it. For a system with two base states, we’d have the following set of equations:

Now, adding and subtracting those two equations, and then differentiating the expressions you get (with respect to t), should give you the following two equations:

So what about it? Well… If we transform to the new set of base states, and use the CI and CII coefficients instead of those C1 and C2 coefficients, then it turns out that our set of differential equations simplifies because, as you can see, two out of the four Hamiltonian coefficients are zero, so we can write:

Now you might think that’s not worth the trouble but, of course, now you know how it goes, and so next time it will be easier. 🙂

On a more serious note, I hope you can appreciate that, with more states than just two, it will become important to diagonalize the Hamiltonian matrix so as to simplify the problem of solving the related set of differential equations. Once we’ve got the solutions, we can always go back and calculate the wavefunctions we want, i.e. the C1 and C2 functions that we happen to like more in this particular case. Just to remind you of how this works, remember that we can describe any state ψ both in terms of the base states | 1 ⟩ and | 2 ⟩ as well as in terms of the base states | I ⟩ and | II ⟩, so we can either write:

| Ď âŞ = | 1 âŞ C1Â +Â | 2 âŞ C2Â or, alternatively,Â | Ď âŞ = | I âŞ CIÂ +Â | II âŞ CII.

Now, if we choose, or define, CI and CII the way we do, so that’s as CI = (1/√2)·(C1 − C2) and CII = (1/√2)·(C1 + C2) respectively, then the Hamiltonian matrices that come with them are the following ones:

To understand those matrices, let me remind you here of the equation for the Hamiltonian coefficients in those matrices:

Uij(t + Δt, t) = δij + Kij(t)·Δt = δij − (i/ħ)·Hij(t)·Δt

In my humble opinion, this makes the difference clear. The | I ⟩ and | II ⟩ base states are as clearly separated, mathematically, as the | 1 ⟩ and | 2 ⟩ base states were separated conceptually. There is no amplitude to go from state I to state II, but then both states are a mix of states 1 and 2, so the physical reality they’re describing is exactly the same: we’re just pushing the temporal variation of the probabilities involved from the coefficients we’re using in our differential equations to the base states we use to define those coefficients, or vice versa.

Huh? Yes… I know it’s all quite deep, and I haven’t quite come to terms with it myself, so that’s why I’ll let you think about it. 🙂 To help you think this through, consider this: the C1 and C2 wavefunctions made sense but, at the same time, they were not very ‘physical’ (read: classical), because they incorporated uncertainty, as they mix two different energy levels. However, the associated base states, which I’ll call ‘up’ and ‘down’ here, made perfect sense in a classical, ‘physical’ way. Indeed, in classical physics, the nitrogen atom is either here or there, right? Not somewhere in-between. 🙂 Now, the CI and CII wavefunctions make sense in the classical sense because they are stationary and, hence, associated with a very definite energy level. In fact, as definite, or as classical, as when we say: the nitrogen atom is either here or there. Not somewhere in-between. But they don’t make sense in some other way: we know that the nitrogen atom will, sooner or later, effectively tunnel through. So they do not describe anything real. So how do we capture reality now? Our CI and CII wavefunctions don’t do that explicitly, but implicitly, as the base states now incorporate all of the uncertainty. Indeed, the CI and CII wavefunctions are described in terms of the base states I and II, which themselves are a mixture of our ‘classical’ up and down states. So, yes, we are kicking the ball around here, from a math point of view. Does that make sense? If not, sorry. I can’t do much more. You’ll just have to think through this yourself. 🙂

Let me just add one little note, totally unrelated to what I just wrote, to conclude this little excursion. I must assume that, in regard to diagonalization, you’ve heard about eigenvalues and eigenvectors. In fact, I must assume you heard about these when you learned about matrices in high school. So… Well… In case you wonder, that’s where we need this stuff. 🙂

OK. On to the next!

#### The general solution for aÂ two-state system

Now, you’ll wonder why, after all of the talk about the need to simplify the Hamiltonian, I will now present a general solution for any two-state system, i.e. any pair of Hamiltonian equations for two-state systems. However, you’ll soon appreciate why, and you’ll also connect the dots with what I wrote above.

Let me first give you the general solution. In fact, I’ll copy it from Feynman (just click on it to enlarge it, or read it in Feynman’s Lecture on it yourself):

The problem is, of course: how do we interpret that solution? Let me make it big:

This says that the general solution to any two-state system amounts to calculating two separate energy levels using the Hamiltonian coefficients as they appear in the equations above. So there is an ‘upper’ energy level, which is denoted as EI, and a ‘lower’ energy level, which is denoted as EII.

What? So it doesn’t say anything about the Hamiltonian coefficients themselves? No. It doesn’t. What did you expect? Those coefficients define the system as such. So the solution is as general as the ‘two-state system’ we wanted to solve: conceptually, it’s characterized by two different energy levels, but that’s about all we can say about it.

[…] Well… No. The solutions above are specific functional forms and, to find them, we had to make certain assumptions and impose certain conditions so as to ensure there’s any non-zero solution at all! In fact, that’s all the fine print above, so I won’t dwell on that, and you had better stop complaining! 🙂 Having said that, the solutions above are very general indeed, and so now it’s up to us to look at specific two-state systems, like our ammonia molecule, and make educated guesses so as to come up with plausible values or functional forms for those Hamiltonian coefficients. That’s what we did when we equated H11 and H22 with some average energy E0, and H12 and H21 with some energy A. [Minus A, in fact, but we might have chosen some positive value +A. Same solution. In fact, I wonder why Feynman didn’t go for the +A value. It doesn’t matter, really, because we’re talking energy differences, but… Well… Anyway… That’s how it is. I guess he just wanted to avoid having to switch the indices 1 and 2, and the coefficients a and b and what have you. But it’s the same. Honestly. :-)]
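The general solution boils down to a simple formula for the two energy levels in terms of the four Hamiltonian coefficients; here is a sketch of that formula, with the ammonia guess plugged in as a check (the numbers are illustrative):

```python
import cmath

# The two energy levels of a general two-state system, as a function of the
# four Hamiltonian coefficients (E_I gets the plus sign, E_II the minus sign):
def energy_levels(H11, H12, H21, H22):
    avg = (H11 + H22) / 2
    root = cmath.sqrt(((H11 - H22) / 2) ** 2 + H12 * H21)
    return avg + root, avg - root  # (E_I, E_II)

# Plugging in the ammonia guess H11 = H22 = E0, H12 = H21 = −A
# recovers the familiar pair E0 + A and E0 − A:
E0, A = 10.0, 1.0
E_I, E_II = energy_levels(E0, -A, -A, E0)
assert abs(E_I - (E0 + A)) < 1e-12
assert abs(E_II - (E0 - A)) < 1e-12
```

Note that choosing +A instead of −A leaves the product H12·H21 under the square root unchanged, which is the arithmetic behind the “same solution” remark above.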

So… Well… We could do the same here and analyze the solutions we’ve found in our previous posts but… Well… I don’t think that’s very interesting. In addition, I’ll make some references to that in my next post anyway, where we’re going to be analyzing the ammonia molecule in terms of its I and II states, so as to prepare a full-blown analysis of how a maser works.

Just to whet your appetite, let me tell you that the mysterious I and II states do have a wonderfully practical physical interpretation as well. Just scroll all the way back up, and look at the opposite electric dipole moments that are associated with states 1 and 2. Now, the two pictures have the angular momentum in the same direction, but we might expect that, when looking at a beam of random NH3 molecules (think of gas being let out of a little jet 🙂), the angular momentum will be distributed randomly. So… Well… The thing is: the molecules in state I, or in state II, will all have their electric dipole moment lined up in the very same physical direction. So, in that sense, they’re really ‘up’ or ‘down’, and we’ll be able to separate them in an inhomogeneous electric field, just like we were able to separate ‘up’ and ‘down’ electrons, protons or whatever spin-1/2 particles in an inhomogeneous magnetic field.

But that’s for the next post. I just wanted to tell you that our | I ⟩ and | II ⟩ base states do make sense. They’re more than just ‘mathematical’ states. They make sense as soon as we’re moving away from an analysis in terms of one NH3 molecule only because… Well… Are you surprised, really? You shouldn’t be. 🙂 Let’s go for it straight away.

#### The ammonia molecule in an electric field

Our educated guess of the Hamiltonian matrix for the ammonia molecule was the following:

This guess was ‘educated’ because we knew what we wanted to get out of it, and that’s those time-dependent probabilities to be in state 1 or state 2:

Now, we also know that states 1 and 2 are associated with opposite electric dipole moments, as illustrated below.

Hence, it’s only natural, when applying an external electric field ε to a whole bunch of ammonia molecules (think of some beam), that our ‘educated’ guess would change to:

Why the minus sign for με in the H22 term? You can answer that question yourself: the associated energy is the dot product μ∙ε = μ·ε·cosθ, and θ is ±π here, as we’re talking opposite directions. So… There we are. 🙂 The consequences show when using those values in the general solution for our system of differential equations. Indeed, the

equations become:

The graph of this looks as follows:

The upshot is: we can separate the NH3 molecules in an inhomogeneous electric field based on their state, and then I mean state I or II, not state 1 or 2. How? Let me copy Feynman on that: it’s like a Stern-Gerlach apparatus, really. 🙂
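For completeness, here is a sketch of how those I- and II-state energies vary with the field strength, using the modified Hamiltonian guess above. Plugging H11 = E0 + με, H22 = E0 − με, H12 = H21 = −A into the general two-state formula gives E = E0 ± √(A² + μ²ε²); the numbers below are illustrative:

```python
import math

# Energy levels of the ammonia molecule in an external electric field ε.
# Illustrative values for E0, A and μ; the ±√(A² + μ²ε²) form is the point.
E0, A, mu = 10.0, 1.0, 0.5

def levels(eps):
    root = math.sqrt(A ** 2 + (mu * eps) ** 2)
    return E0 + root, E0 - root  # (E_I, E_II)

# With no field we recover E0 ± A; the splitting grows with the field,
# which is what lets an inhomogeneous field pull the I- and II-beams apart.
assert levels(0.0) == (E0 + A, E0 - A)
EI_weak, EII_weak = levels(1.0)
EI_strong, EII_strong = levels(5.0)
assert EI_strong - EII_strong > EI_weak - EII_weak > 2 * A
```

The energy gradient with respect to ε has opposite signs for the two states, so an inhomogeneous field pushes I-molecules and II-molecules in opposite directions: that is the state selector at the front end of the maser.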

So that’s it. We get the following:

That will feed into the maser, which looks as follows:

But… Well… Analyzing how a maser works involves another realm of physics: cavities and resonances. I don’t want to get into that here. I only wanted to show you why and how different representations of the same thing are useful, and how that translates into a different Hamiltonian matrix. I think I’ve done that, so let’s call it a night. 🙂 I hope you enjoyed this one. If not… Well… I did. 🙂
