Quantum-mechanical magnitudes

As I was writing about those rotations in my previous post (on electron orbitals), I suddenly felt I should do some more thinking on (1) symmetries and (2) the concept of quantum-mechanical magnitudes of vectors. I’ll write about the first topic (symmetries) in some other post. Let’s first tackle the latter concept. Oh… And for those I frightened with my last post… Well… This should really be an easy read. More of a short philosophical reflection about quantum mechanics. Not a technical thing. Something intuitive. At least I hope it will come out that way. 🙂

First, you should note that the fundamental idea that quantities like energy, or momentum, may be quantized is a very natural one. In fact, it’s what the early Greek philosophers thought about Nature. Of course, while the idea of quantization comes naturally to us (I think it’s easier to understand than, say, the idea of infinity), it is, perhaps, not so easy to deal with mathematically. Indeed, most mathematical ideas – like functions and derivatives – are based on what I’ll loosely refer to as continuum theory. So… Yes, quantization does yield some surprising results, like that formula for the magnitude of some vector J:

J = +√(J·J) = +√(Jx² + Jy² + Jz²) (the classical formula)

J = +√(J·J) = +√[j·(j+1)]·ħ (the quantum-mechanical formula)

The J·J in the classical formula above is, of course, the equally classical vector dot product, and the formula itself is nothing but Pythagoras’ Theorem in three dimensions. Easy. I just put a + sign in front of the square roots so as to remind you we actually always have two square roots and that we should take the positive one. 🙂

I will now show you how we get that quantum-mechanical formula. The logic behind it is fairly straightforward but, at the same time… Well… You’ll see. 🙂 We know that a quantum-mechanical variable – like the spin of an electron, or the angular momentum of an atom – is not continuous but discrete: it will have some value j, j−1, j−2, …, −(j−2), −(j−1), −j. Our j here is the maximum value of the magnitude of the component of our vector (J) in the direction of measurement, which – as you know – is usually written as Jz. Why? Because we will usually choose our coordinate system such that our z-axis is aligned accordingly. 🙂 Those values j, j−1, j−2, …, −(j−2), −(j−1), −j are separated by one unit. That unit would be Planck’s quantum of action ħ ≈ 1.0545718×10⁻³⁴ N·m·s – by the way, isn’t it amazing we can actually measure such tiny stuff in some experiment? 🙂 – if J would happen to be the angular momentum, but the approach here is more general – action can express itself in various ways 🙂 – so the unit doesn’t matter: it’s just the unit, so that’s just one. 🙂 It’s easy to see that this separation implies that j must be some integer or half-integer. [Of course, now you might think the values of a series like 2.4, 1.4, 0.4, -0.6, -1.6 are also separated by one unit, but… Well… That would violate the most basic symmetry requirement – the set of possible values should be symmetric around zero – so… Well… No. Our j has to be an integer or a half-integer. Please also note that the number of possible values for Jz is equal to 2j+1, as we’ll use that in a moment.]
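
If you want to play with this yourself, here’s a minimal sketch – my own toy code, nothing official – that just enumerates those 2j+1 values for a given j:

```python
from fractions import Fraction

def jz_values(j):
    """Return the 2j+1 possible values of Jz (in units of ħ): j, j-1, ..., -j."""
    j = Fraction(j)
    return [j - k for k in range(int(2 * j) + 1)]

print([str(v) for v in jz_values(Fraction(3, 2))])  # ['3/2', '1/2', '-1/2', '-3/2'], i.e. 2j+1 = 4 values
print([str(v) for v in jz_values(2)])               # ['2', '1', '0', '-1', '-2'], i.e. 2j+1 = 5 values
```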

OK. You’re familiar with this by now and so I should not repeat the obvious. To make things somewhat more real, let’s assume j = 3/2, so Jz = 3/2, 1/2, −1/2 or −3/2. Now, we don’t know anything about the system and, therefore, these four values are all equally likely. Now, you may not agree with this assumption but… Well… You’ll have to agree that, at this point, you can’t come up with anything else that would make sense, right? It’s just like a classical situation: J might point in any direction, so we have to give all angles an equal probability. [In fact, I’ll show you – in a minute or so – that you actually have a point here: we should think some more about this assumption – but so that’s for later. I am asking you to just go along with this story as for now.]

So the expected value of Jz is equal to E[Jz] = (1/4)·(3/2) + (1/4)·(1/2) + (1/4)·(−1/2) + (1/4)·(−3/2) = 0. Nothing new here. We just multiply the probabilities with all of the possible values to get an expected value. So we get zero here because our values are distributed symmetrically around the zero point. No surprise. Now, to calculate a magnitude, we don’t need Jz but Jz². In case you wonder, that’s what this squaring business is all about: we’re abstracting away from the direction and so we’re going to square both positive as well as negative values to then add it all up and take a square root. Now, the expected value of Jz² is equal to E[Jz²] = (1/4)·(3/2)² + (1/4)·(1/2)² + (1/4)·(−1/2)² + (1/4)·(−3/2)² = 5/4 = 1.25. Some positive value.
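
Here’s a quick check of those two numbers – just my own sketch, assuming the four values are, indeed, equally likely:

```python
from fractions import Fraction

values = [Fraction(3, 2), Fraction(1, 2), Fraction(-1, 2), Fraction(-3, 2)]
p = Fraction(1, len(values))  # the four values are assumed to be equally likely

E_Jz  = sum(p * v for v in values)     # expected value of Jz
E_Jz2 = sum(p * v**2 for v in values)  # expected value of Jz²

print(E_Jz, E_Jz2)  # 0 and 5/4
```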

You may note that it’s a bit larger than the average of the absolute value of our variable, which is equal to (|3/2|+|1/2|+|−1/2|+|−3/2|)/4 = 1, but that’s just because the squaring favors larger values. 🙂 Also note that, of course, we’d also get some positive value if Jz would be a continuous variable over the [−3/2, +3/2] interval, but I’ll let you think about what positive value we’d get for E[Jz²], assuming Jz is uniformly distributed over the [−3/2, +3/2] interval, because that calculation is actually not so straightforward as it may seem at first. In any case, these considerations are not very relevant to our story here, so let’s move on.

Of course, our z-direction was random, and so we get the same thing for whatever direction. More in particular, we’ll also get it for the x- and y-directions: E[Jx²] = E[Jy²] = E[Jz²] = 5/4. Now, at this point it’s probably good to give you a more generalized formula for these quantities. I think you’ll easily agree to the following one:

E[Jz²] = [j² + (j−1)² + … + (−j+1)² + (−j)²]/(2j+1)

So now we can apply our classical J·J = Jx² + Jy² + Jz² formula to these quantities by calculating the expected value of J·J, which is equal to:

E[J·J] = E[Jx²] + E[Jy²] + E[Jz²] = 3·E[Jx²] = 3·E[Jy²] = 3·E[Jz²]

You should note we’re making use of the E[X + Y] = E[X] + E[Y] property here: the expected value of the sum of two variables is equal to the sum of the expected values of the variables, and you should also note this is true even if the individual variables would happen to be correlated – which might or might not be the case. [What do you think is the case here?]

For j = 3/2, it’s easy to see we get E[J·J] = 3·E[Jx²] = 3·5/4 = 15/4 = (3/2)·(3/2+1) = j·(j+1). We should now generalize this formula for other values of j, which is not so easy… Hmm… It obviously involves some formula for a series, and I am not good at that… So… Well… I just checked if it was true for j = 1/2 and j = 1 (please check that at least for yourself too!) and then I just believe the authorities on this for all other values of j. 🙂
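
For those who, like me, don’t trust their algebra: here’s a quick numerical check – not a proof, of course – that 3·E[Jz²] equals j·(j+1) for a whole bunch of j values. [If you do want the series formula: the sum of m² over m = −j, …, +j is equal to j·(j+1)·(2j+1)/3, which gives E[Jz²] = j·(j+1)/3.]

```python
from fractions import Fraction

def E_JJ(j):
    """E[Jx² + Jy² + Jz²] = 3·E[Jz²], with the 2j+1 values of Jz assumed equally likely."""
    j = Fraction(j)
    values = [j - k for k in range(int(2 * j) + 1)]
    return 3 * sum(v**2 for v in values) / len(values)

for j in [Fraction(1, 2), 1, Fraction(3, 2), 2, Fraction(5, 2), 25]:
    print(j, E_JJ(j), Fraction(j) * (Fraction(j) + 1))  # the last two columns match
```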

Now, in a classical situation, we know that the J·J product will be the same for whatever direction J would happen to have, and so its expected value will be equal to its constant value J·J. So we can write: E[J·J] = J·J. So… Well… That’s why we write what we wrote above:

J = +√(J·J) = +√[j·(j+1)] (in units of ħ, that is)

Makes sense, no? E[J·J] = E[Jx² + Jy² + Jz²] = E[Jx²] + E[Jy²] + E[Jz²] = j·(j+1) = J·J = J², so J = +√[j(j+1)], right?

Hold your horses, man! Think! What are we doing here, really? We didn’t calculate all that much above. We only found that E[Jx²] + E[Jy²] + E[Jz²] = E[Jx² + Jy² + Jz²] = j·(j+1). So what? Well… That’s not a proof that the J vector actually exists.

Huh? 

Yes. That J vector might just be some theoretical concept. When everything is said and done, all we’ve been doing – or at least, we imagined we did – is those repeated measurements of Jx, Jy and Jz here – or whatever subscript you’d want to use, like Jθ,φ, for example (the example is not random, of course) – and so, of course, it’s only natural that we assume these things are the magnitude of the component (in the direction of measurement) of some real vector that is out there, but then… Well… Who knows? Think of what we wrote about the angular momentum in our previous post on electron orbitals. We imagine – or like to think – that there’s some angular momentum vector J out there, which we think of as being “cocked” at some angle, so its projection onto the z-axis gives us those discrete values for m which, for j = 2, for example, are equal to 0, 1 or 2 (and -1 and -2, of course) – like in the illustration below. 🙂

[Illustration: the angular momentum vector J “cocked” at the angles allowed for j = 2]

But… Well… Note those weird angles: we get something close to 24.1° and then another value close to 54.7°. No symmetry here. 😦 The table below gives some more values for larger j. They’re easy to calculate – it’s, once again, just Pythagoras’ Theorem – but… Well… No symmetries here. Just weird values. [I am not saying the formula for these angles is not straightforward. That formula is easy enough: θ = sin⁻¹(m/√[j(j+1)]). It’s just… Well… No symmetry. You’ll see why that matters in a moment.]

[Table: the angles θ = sin⁻¹(m/√[j(j+1)]) for integer values of j]

I skipped the half-integer values for j in the table above, so you might think they would make it easier to come up with some kind of sensible explanation for the angles. Well… No. They don’t. For example, for j = 1/2 and m = ±1/2, the angles are ±35.2644° – more or less, that is. 🙂 As you can see, these angles do not nicely cut up our circle in equal pieces, which triggers the obvious question: are these angles really equally likely? Equal angles do not correspond to equal distances on the z-axis (in case you don’t appreciate the point, look at the illustration below).

[Illustration: equal angles do not correspond to equal distances along the z-axis]
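
If you’d want to reproduce those angles yourself, here’s a little sketch (mine, so don’t blame anyone else for it 🙂) that calculates θ = sin⁻¹(m/√[j(j+1)]) for a few values of j and m:

```python
import math

def angle_deg(j, m):
    """The angle above the xy-plane: θ = arcsin(m/√[j(j+1)]), in degrees."""
    return math.degrees(math.asin(m / math.sqrt(j * (j + 1))))

for j in [0.5, 1, 1.5, 2]:
    m_values = [j - k for k in range(int(2 * j) + 1)]
    print("j =", j, ["%.1f°" % angle_deg(j, m) for m in m_values])
# j = 2 gives 54.7°, 24.1°, 0°, -24.1° and -54.7°: the 'weird' values quoted above
```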

So… Well… Let me summarize the issue at hand as follows: the idea of the angle of the vector being randomly distributed is not compatible with the idea of those Jz values being equally spaced and equally likely. The latter idea – equally spaced and equally likely Jz values – relates to different possible states of the system being equally likely, so… Well… It’s just a different idea. 😦

Now there is another thing which we should mention here. The maximum value of the z-component of our J vector is always smaller than that quantum-mechanical magnitude, and quite significantly so for small j, as shown in the table below. It is only for larger values of j that the ratio of the two starts to converge to 1. For example, for j = 25, it is about 1.02, so that’s only 2% off.

[Table: the ratio of the quantum-mechanical magnitude √[j(j+1)] to the maximum value j of Jz, for increasing values of j]

That’s why physicists tell us that, in quantum mechanics, the angular momentum is never “completely along the z-direction.” It is obvious that this actually challenges the idea of a very precise direction in quantum mechanics, but then that shouldn’t surprise us, should it? After all, isn’t this what the Uncertainty Principle is all about?
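
Again, that’s easy to check numerically – a few lines of my own code, nothing more:

```python
import math

for j in [0.5, 1, 2, 5, 10, 25]:
    ratio = math.sqrt(j * (j + 1)) / j   # quantum-mechanical magnitude vs. the maximum Jz (= j)
    print(j, round(ratio, 4))
# j = 0.5 gives about 1.73, while j = 25 gives about 1.02, i.e. only some 2% off
```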

Different states, rather than different directions… And then Uncertainty because… Well… Because of discrete variables that won’t split in the middle. Hmm… 😦

Perhaps. Perhaps I should just accept all of this and go along with it… But… Well… I am really not satisfied here, despite Feynman’s assurance that that’s OK: “Understanding of these matters comes very slowly, if at all. Of course, one does get better able to know what is going to happen in a quantum-mechanical situation—if that is what understanding means—but one never gets a comfortable feeling that these quantum-mechanical rules are ‘natural’.”

I do want to get that comfortable feeling – on some sunny day, at least. 🙂 And so I’ll keep playing with this, until… Well… Until I give up. 🙂 In the meanwhile, if you’d feel you’ve got some better or some more intuitive explanation for all of this, please do let me know. I’d be very grateful to you. 🙂

Post scriptum: Of course, we would all want to believe that J somehow exists because… Well… We want to explain those states somehow, right? I, for one, am not happy with being told to just accept things and shut up. So let me add some remarks here. First, you may think that the narrative above should distinguish between polar and axial vectors. You’ll remember polar vectors are the real vectors, like a radius vector r, or a force F, or velocity or (linear) momentum. Axial vectors (also known as pseudo-vectors) are vectors like the angular momentum vector: we sort of construct them from… Well… From real vectors. The angular momentum L, for example, is the vector cross product of the radius vector r and the linear momentum vector p: we write L = r×p. In that sense, they’re a figment of our imagination. But then… What’s real and unreal? The magnitude of L, for example, does correspond to something real, doesn’t it? And its direction does give us the direction of circulation, right? You’re right. Hence, I think polar and axial vectors are both real – in whatever sense you’d want to define real. Their reality is just different, and that’s reflected in their mathematical behavior: if you change the direction of the axes of your reference frame, polar vectors will change sign too, as opposed to axial vectors: they don’t swap sign. They do something else, which I’ll explain in my next post, where I’ll be talking about symmetries.
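
In the meantime, here’s a tiny numerical illustration of that difference – my own toy example, with made-up numbers: if you invert the axes, r and p swap sign, but L = r×p does not.

```python
import numpy as np

r = np.array([1.0, 2.0, 3.0])    # a polar vector (a radius vector, say)
p = np.array([0.5, -1.0, 2.0])   # another polar vector (linear momentum)
L = np.cross(r, p)               # an axial vector (angular momentum)

# Invert the axes: every polar vector swaps sign, but their cross product does not.
print(L)                 # [ 7.  -0.5 -2. ]
print(np.cross(-r, -p))  # the same thing: the axial vector does not swap sign
```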

But let us, for the sake of argument, assume whatever I wrote about those angles applies to axial vectors only. Let’s be even more specific, and say it applies to the angular momentum vector only. If that’s the case, we may want to think of a classical equivalent for the mentioned lack of a precise direction: free nutation. It’s a complicated thing – even more complicated than the phenomenon of precession, which we should be familiar with by now. Look at the illustration below (which I took from an article by a physics professor from Saint Petersburg), which shows both precession as well as nutation. Think of the movement of a spinning top when you release it: its axis will, at first, nutate around the axis of precession, before it settles in a more steady precession.

[Illustration: precession and nutation of a spinning top]

The nutation is caused by the gravitational force field, and the nutation movement usually dies out quickly because of damping forces (read: friction). Now, we don’t think of gravitational fields when analyzing angular momentum in quantum mechanics, and we shouldn’t. But there is something else we may want to think of. There is also a phenomenon which is referred to as free nutation, i.e. a nutation that is not caused by an external force field. The Earth, for example, nutates slowly because of a gravitational pull from the Sun and the other planets – so that’s not a free nutation – but, in addition to this, there’s an even smaller wobble – which is an example of free nutation – because the Earth is not exactly spherical. In fact, the Great Mathematician, Leonhard Euler, had already predicted this, back in 1765, but it took another 125 years or so before an astronomer, Seth Chandler, could finally experimentally confirm and measure it. So they named this wobble the Chandler wobble (Euler already has too many things named after him). 🙂

Now I don’t have much backup here – none, actually 🙂 – but why wouldn’t we imagine our electron would also sort of nutate freely because of… Well… Some symmetric asymmetry – something like the slightly elliptical shape of our Earth. 🙂 We may then effectively imagine the angular momentum vector as continually changing direction between a minimum and a maximum angle – something like what’s shown below, perhaps, between 0 and 40 degrees. Think of it as a rotation within a rotation, or an oscillation within an oscillation – or a standing wave within a standing wave. 🙂

[Illustration: the angular momentum vector wobbling between a minimum and a maximum angle]

I am not sure if this approach would solve the problem of our angles and distances – the issue of whether we should think in equally likely angles or equally likely distances along the z-axis, really – but… Well… I’ll let you play with this. Please do send me some feedback if you think you’ve found something. 🙂

Whatever your solution is, it is likely to involve the equipartition theorem and harmonics, right? Perhaps we can, indeed, imagine standing waves within standing waves, and then standing waves within standing waves. How far can we go? 🙂

Post scriptum 2: When re-reading this post, I was thinking I should probably do something with the following idea. If we’ve got a sphere, and we’re thinking of some vector pointing to some point on the surface of that sphere, then we’re doing something which is referred to as point picking on the surface of a sphere, and the probability distributions – as a function of the polar and azimuthal angles θ and φ – are quite particular. See the article on the Wolfram site on this, for example. I am not sure if it’s going to lead to some easy explanation of the ‘angle problem’ we’ve laid out here but… Well… It’s surely an element in the explanation. The key idea here is shown in the illustration below: if the direction of our momentum in three-dimensional space is really random, there may still be more of a chance of an orientation towards the equator, rather than towards the pole. So… Well… We need to study the math of this. 🙂 But that’s for later.

[Illustration: sphere point picking – the density of random directions as a function of the polar angle]
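
To whet your appetite, here’s a little Monte Carlo sketch of that idea – my own quick-and-dirty check, so take it for what it is: for a uniformly random direction in three dimensions, the z-component turns out to be uniformly distributed, but the polar angle is not – directions near the equator are more likely than directions near the pole.

```python
import numpy as np

rng = np.random.default_rng(42)
v = rng.normal(size=(100_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # 100,000 uniformly random directions on the sphere

z = v[:, 2]                        # the z-component is uniform over [-1, 1]
theta = np.degrees(np.arccos(z))   # the polar angle, measured from the pole, is not

print(np.histogram(z, bins=4, range=(-1, 1))[0])       # four roughly equal counts
print(np.histogram(theta, bins=4, range=(0, 180))[0])  # piles up around 90°, i.e. the equator
```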

The quantization of magnetic moments

Pre-script (dated 26 June 2020): This post got mutilated by the removal of some material by the dark force. You should be able to follow the main story line, however. If anything, the lack of illustrations might actually help you to think things through for yourself.

Original post:

You may not have many questions after a first read of Feynman’s Lecture on the Stern-Gerlach experiment and his more general musings on the quantization of the magnetic moment of an elementary particle. [At least I didn’t have all that many after my first reading, which I summarized in a previous post.]

However, a second, third or fourth reading should trigger some, I’d think. My key question is the following: what happens to that magnetic moment of a particle – and its spin [1] – as it travels through a homogeneous or inhomogeneous magnetic field? We know – or, to be precise, we assume – its spin is either “up” (Jz = +ħ/2) or “down” (Jz = −ħ/2) when it enters the Stern-Gerlach apparatus, but then – when it’s moving in the field itself – we would expect that the magnetic field would, somehow, line up the magnetic moment, right?

Feynman says that it doesn’t: from all of the schematic drawings – and the subsequent discussion of Stern-Gerlach filters – it is obvious that the magnetic field – which we denote as B, and which we assume to be inhomogeneous [2] – should not result in a change of the magnetic moment. Feynman states it as follows: “The magnetic field produces a torque. Such a torque you would think is trying to line up the (atomic) magnet with the field, but it only causes its precession.”

[…] OK. That’s too much information already, I guess. Let’s start with the basics. The key to a good understanding of this discussion is the force formula:

Fz = μz·(∂B/∂z)

We should first explain this formula before discussing the obvious question: over what time – or over what distance – should we expect this force to pull the particle up or down in the magnetic field? Indeed, if the force ends up aligning the moment, then the force will disappear!

So let’s first explain the formula. We start by explaining the energy U. U is the potential energy of our particle, which it gets from its magnetic moment μ and its orientation in the magnetic field B. To be precise, we can write the following:

Umag = −μ·B = −μ·B·cosθ

Of course, μ and B are the magnitudes of the vectors μ and B respectively, and θ is the angle between μ and B: if the angle θ is zero, then Umag will be negative. Hence, the total energy of our particle (U) will actually be less than what it would be without the magnetic field: θ = 0 is the case where the magnetic moment of our particle is fully lined up with the magnetic field. When the angle is a right angle (θ = ±π/2), then the energy doesn’t change (Umag = 0). Finally, when θ is equal to π or −π, then its energy will be more than what it would be outside of the magnetic field. [Note that the angle θ effectively varies between −π and π – not between 0 and 2π!]

Of course, we may already note that, in quantum mechanics, Umag will only take on a very limited set of values. To be precise, for a particle with spin number j = 1/2, the possible values of Umag will be limited to two values only. We will come back to that in a moment. First that force formula.

Energy is force over a distance. To be precise, when a particle is moved from point a to point b, then its change in energy can be written as the following line integral:

U(b) − U(a) = −∫ F∙ds (the integral being taken along the path from a to b)

Note that the minus sign is there because of the convention that we’re doing work against the force when increasing the (potential) energy of whatever it is we’re moving. Also note that the F∙ds product is a vector (dot) product: it is, obviously, equal to Ft times ds, with Ft the magnitude of the tangential component of the force. The equation above gives us that force formula:

Fz = −∂Umag/∂z = μz·(∂B/∂z)

Feynman calls it the principle of virtual work, which sounds a bit mysterious – but you get it simply by taking the derivative of both sides of the energy formula.

Let me now get back to the real mystery of quantum mechanics, which tells us that the magnetic moment – as measured along our z-axis – will only take one of two possible values. To be precise, we have the following formula for μz:

μz = −g·[qe/(2m)]·Jz

This is a formula you just have to accept for the moment. It needs a bit of interpretation, and you need to watch out for the sign. The g-factor is the so-called Landé g-factor: it is equal to 1 for a so-called pure orbital moment, 2 for a so-called pure spin moment, and some number in-between in reality, which is always some mixture of the two: both the electron’s orbit around the nucleus as well as the electron’s rotation about its own axis contribute to the total angular momentum and, hence, to the total magnetic moment of our electron. As for the other factors, m and qe are, of course, the mass and the charge of our electron, and Jz is either +ħ/2 or −ħ/2. Hence, if we know g, we can easily calculate the two possible values for μz.
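
To make this somewhat more tangible, here’s a quick back-of-the-envelope calculation – mine, and assuming a pure spin moment (g = 2) – of those two possible values of μz. The answer should be plus or minus the Bohr magneton, i.e. about 9.27×10⁻²⁴ J/T:

```python
hbar = 1.0545718e-34    # J·s
q_e  = 1.60217663e-19   # C
m_e  = 9.1093837e-31    # kg
g    = 2                # pure spin moment (the measured value is actually slightly above 2)

for Jz in (+hbar / 2, -hbar / 2):
    mu_z = -g * (q_e / (2 * m_e)) * Jz
    print(mu_z)   # minus or plus 9.27e-24 J/T, i.e. the Bohr magneton
```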

Now, that also means we could – theoretically – calculate the two possible values of that angle θ. For some reason, no handbook in physics ever does that. The reason is probably a good one: electron orbits, and the concept of spin itself, are not like the orbit and the spin of some planet in a planetary system. In fact, we know that we should not think of electrons like that at all: quantum physicists tell us we may only think of it as some kind of weird cloud around a center. That cloud has a density which is to be calculated by taking the absolute square of the quantum-mechanical amplitude of our electron.
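
Just for fun – and with the obvious caveat that it takes the picture of a well-defined J vector more seriously than we probably should – here’s that little calculation no handbook bothers to do:

```python
import math

j = 1 / 2
J = math.sqrt(j * (j + 1))    # the magnitude, in units of ħ: √(3/4) ≈ 0.866
for Jz in (+1 / 2, -1 / 2):   # the two possible z-components, in units of ħ
    # Note: this θ is measured from the z-axis, not from the xy-plane as in the previous post.
    print(math.degrees(math.acos(Jz / J)))   # about 54.7° and 125.3°
```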

In fact, when thinking about the two possible values for θ, we may want to remind ourselves of another peculiar consequence of the fact that the angular momentum – and, hence, the magnetic moment – is not continuous but quantized: the magnitude of the angular momentum is not j·ħ (the maximum value of Jz) but J = √(J·J) = √[j·(j+1)·ħ²] = √[j·(j+1)]·ħ. For our electron, j = 1/2 and, hence, the magnitude of J is equal to J = √[(1/2)∙(3/2)]∙ħ = √(3/4)∙ħ ≈ 0.866∙ħ. Hence, the magnitude of the angular momentum is larger than the maximum value of Jz – and not just a little bit, because the maximum value of Jz is only ħ/2! That leads to that weird conclusion: in quantum mechanics, we find that the angular momentum is never completely along any one direction [3]! In fact, this conclusion basically undercuts the very idea of the angular momentum – and, hence, the magnetic moment – having any precise direction at all! [This may sound spectacular, but there is actually a classical equivalent to the idea of the angular momentum having no precisely defined direction: gyroscopes may not only precess, but nutate as well. Nutation refers to a kind of wobbling around the direction of the angular momentum. For more details, see the post I wrote after my first reading of Feynman’s Lecture on the quantization of magnetic moments. :-)]

Let’s move on. So if, in quantum mechanics, we cannot associate the magnetic moment – or the angular momentum – with some specific direction, then how should we imagine it? Well… I won’t dwell on that here, but you may want to have a look at another post of mine, where I develop a metaphor for the wavefunction which may help you to sort of understand what it might be. The metaphor may help you to think of some oscillation in two directions – rather than in one only – with the two directions separated by a right angle. Hence, the whole thing obviously points in some direction but it’s not very precise. In any case, I need to move on here.

We said that the magnetic moment will take one of two values only, in any direction along which we’d want to measure it. We also said that the (maximum) value along that direction – any direction, really – will be smaller than the magnitude of the moment. [To be precise, we said that for the angular momentum, but the formulas above make it clear the conclusions also hold for the magnetic moment.] So that means that the magnetic moment is, in fact, never fully aligned with the magnetic field. Now, if it is not aligned – and, importantly, if it also does not line up – then it should precess. Now, precession is a difficult enough concept in classical mechanics, so you may think it’s going to be totally abstruse in quantum mechanics. Well… That is true – to some extent. At the same time, it is surely not unintelligible. I will not repeat Feynman’s argument here, but he uses the classical formulas once more to calculate an angular velocity and a precession frequency – although he doesn’t explain what they might actually physically represent. Let me just jot down the formula for the precession frequency:

ωp = g·[qe/(2m)]·B

We get the same factors: g, qe and m. In addition, you should also note that the precession frequency is directly proportional to the strength of the magnetic field, which makes sense. Now, you may wonder: what is the relevance of this? Can we actually measure any of this?
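
Just to get a sense of the orders of magnitude involved – my own quick calculation, using g ≈ 2 for the electron – the precession frequency in a one-tesla field comes out at the familiar electron-spin-resonance number of about 28 GHz:

```python
import math

q_e = 1.60217663e-19   # C
m_e = 9.1093837e-31    # kg
g   = 2                # (more or less) the electron's g-factor
B   = 1.0              # tesla

omega_p = g * (q_e / (2 * m_e)) * B
print(omega_p)                  # about 1.76e11 rad/s
print(omega_p / (2 * math.pi))  # about 2.8e10 Hz, i.e. some 28 GHz per tesla
```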

We can. In fact, you may wonder about the ‘if’ I inserted above – if we know g, that is: can we actually measure the Landé g-factor? We can. It’s done in a resonance experiment, which is referred to as the Rabi molecular-beam method – but then it might also be just an atomic beam, of course!

The experiment is interesting, because it shows the precession is – somehow – real. It also illustrates some other principles we have been describing above.

The set-up looks pretty complicated. We have a series of three magnets. The first magnet is just a Stern-Gerlach apparatus: a magnet with a very sharp edge on one of the pole tips so as to produce an inhomogeneous magnetic field. Indeed, a homogeneous magnetic field implies that ∂B/∂z = 0 and, hence, the force along the z-direction would be zero and our atomic magnets would not be displaced.

The second magnet is more complicated. Its magnetic field is uniform, so there are no vertical forces on the atoms and they go straight through. However, the magnet includes an extra set of coils that can produce an alternating horizontal field as well. I’ll come back to that in a moment. Finally, the third magnet is just like the first one, but with the field inverted. Have a look at it:

[Illustration: the Rabi molecular-beam apparatus – three magnets in a row, with slits S1 and S2 and the trajectories a and b]

It may not look very obvious but, after some thinking, you’ll agree that the atoms can only arrive at the detector if they follow the trajectories a and/or b. In fact, these trajectories are the only possible ones because of the slits S1 and S2.

Now what’s the idea of that horizontal field B’ in magnet 2? In a classical situation, we could change the angular momentum – and the magnetic moment – by applying some torque about the z-axis. The idea is shown in Figure (a) and (b) below.

[Illustration: (a) a rotating horizontal field B′; (b) an oscillating horizontal field]

Figure (a) shows – or tries to show – some rotating field B’ – one that is always at right angles to both the angular momentum as well as to the (uniform) B field. That would be effective. However, Figure (b) shows another arrangement that is almost equally effective: an oscillating field that sort of pulls and pushes at some frequency ω. Classically, such fields would effectively change the angle of our gyroscope with respect to the z-axis. Is it also the case quantum-mechanically?

It turns out it sort of works the same in quantum mechanics. There is a big difference though. Classically, μz would change gradually, but in quantum mechanics it cannot: in quantum mechanics, it must jump suddenly from one value to the other, i.e. from the value corresponding to Jz = +ħ/2 to the one corresponding to Jz = −ħ/2, or the other way around. In other words, it must flip up or down. Now, if an atom flips, then it will, of course, no longer follow the (a) or (b) trajectories: it will follow some other path, like a’ or b’, which will make it crash into the magnet. Now, it turns out that almost all atoms will flip if we get that frequency ω right. The graph below shows this ‘resonance’ phenomenon: there is a sharp drop in the ‘current’ of atoms if ω is close to, or equal to, ωp.

[Graph: the current of atoms arriving at the detector as a function of ω, with a sharp drop near ω = ωp]

What’s ωp? It’s that precession frequency for which we gave you that formula above. To make a long story short, from the experiment, we can calculate the Landé g-factor for that particular beam of atoms – say, silver atoms [4]. So… Well… Now we know it all, don’t we?

Maybe. As mentioned when I started this post, when going through all of this material, I always wonder why there is no magnetization effect: why would an atom remain in the same state when it crosses a magnetic field? When it’s already aligned with the magnetic field – to the maximum extent possible, that is – then it shouldn’t flip, but what if its magnetic moment is opposite? It should lower its energy by flipping, right? And it should flip just like that. Why would it need an oscillating B’ field?

In fact, Feynman does describe how the magnetization phenomenon can be analyzed – classically and quantum-mechanically, but he does that for bulk materials: solids, or liquids, or gases – anything that involves lots of atoms that are kicked around because of the thermal motions. So that involves statistical mechanics – which I am sure you’ve skipped so far. 🙂 It is a beautiful argument – which ends with an equally beautiful formula, which tells us the magnetization (M) of a material – which is defined as the net magnetic moment per unit volume – has the same direction as the magnetic field (B) and a magnitude M that is proportional to the magnitude of B:

M = N·μ²·B/(3kT)

The μ in this formula is the magnitude of the magnetic moment of the individual atoms, N is the number of atoms per unit volume, k is Boltzmann’s constant, and T is the temperature. So… Well… It’s just like the formula for the electric polarization P, which we described in some other post. In fact, the formulas for P and M are same-same but different, as they would say in Thailand. 🙂 But this wonderful story doesn’t answer our question. The magnetic moment of an individual particle should not stay what it is: if it doesn’t change because of all the kicking around as a result of thermal motions, then… Well… These little atomic magnets should line up. That means atoms with their spin “up” should go into the “spin-down” state.

I don’t have an answer to my own question as for now. I suspect it’s got to do with the strength of the magnetic field: a Stern-Gerlach apparatus involves a weak magnetic field. If it’s too strong, the atomic magnets must flip. Hence, a more advanced analysis should probably include that flipping effect. When quickly googling – just now – I found an MIT lab exercise on it, which also provides a historical account of the Stern-Gerlach experiment itself. I skimmed through it – and will read all of it in the coming days – but let me just quote this from the historical background section:

“Stern predicted that the effect would be just barely observable. They had difficulty in raising support in the midst of the post war financial turmoil in Germany. The apparatus, which required extremely precise alignment and a high vacuum, kept breaking down. Finally, after a year of struggle, they obtained an exposure of sufficient length to give promise of an observable silver deposit. At first, when they examined the glass plate they saw nothing. Then, gradually, the deposit became visible, showing a beam separation of 0.2 millimeters! Apparently, Stern could only afford cheap cigars with a high sulfur content. As he breathed on the glass plate, sulfur fumes converted the invisible silver deposit into visible black silver sulfide, and the splitting of the beam was discovered.”

Isn’t this funny? And great at the same time? 🙂 But… Well… The point is: the paper for that MIT lab exercise makes me realize Feynman does cut corners when explaining stuff – and some corners are more significant than others. I note, for example, that they talk about interference peaks rather than “two distinct spots on the glass plate.” Hence, the analysis is somewhat more sophisticated than Feynman pretends it to be. So, when everything is said and done, Feynman’s Lectures may indeed be reading for undergraduate students only. Is it time to move on?

[1] The magnetic moment – as measured in a particular coordinate system – is equal to μ = −g·[q/(2m)]·J. The factor J in this expression is the angular momentum, and the coordinate system is chosen such that its z-axis is along the direction of the magnetic field B. The component of J along the z-axis is written as Jz. This z-component of the angular momentum is what is, rather loosely, being referred to as the spin of the particle in this context. In most other contexts, spin refers to the spin number j which appears in the formula for the value of Jz, which is Jz = j∙ħ, (j−1)∙ħ, (j−2)∙ħ,…, (−j+2)∙ħ, (−j+1)∙ħ, −j∙ħ. Note the separation between the possible values of Jz is equal to ħ. Hence, j itself must be an integer (e.g. 1 or 2) or a half-integer (e.g. 1/2). We usually look at electrons, whose spin number j is 1/2.

[2] One of the pole tips of the magnet that is used in the Stern-Gerlach experiment has a sharp edge. Therefore, the magnetic field strength varies with z. We write: ∂B/∂z ≠ 0.

[3] The z-direction can be any direction, really.

[4] The original experiment was effectively done with a beam of silver atoms. The lab exercise which MIT uses to show the effect to physics students involves potassium atoms.
