Atomic magnets: precession and diamagnetism

This post and the next will build further on the concepts introduced in my previous post on particle spin. This post in particular will focus on some of the math we’ll need to understand what quantum mechanics is all about. The first topic is the quantum-mechanical equivalent of the phenomenon of precession. The other topics are… Well… You’ll see… šŸ™‚

The Larmor frequency

The motion of a spinning object in a force field is quite complicated. In our post on gyroscopes, we introduced the concepts of precession and nutation. The concept of precession is illustrated below for the Earth as well as for a spinning top. In both cases, the external force is just gravity.

[Illustration: precession of the Earth and of a spinning top]

Nutation is an additional movement: on top of the precessional movement, a spinning object may wobble, as illustrated below.

[Illustration: precession and nutation]

There seems to be no analog for nutation in quantum mechanics. In fact, the terms nutation and precession seem to be used interchangeably in quantum physics, although they are very different in classical physics. But let’s not complicate things and, hence, talk about the phenomenon of precession only.

We will not re-explain the phenomenon of precession here but just remind you that it can be described in terms of (a) the angle between the symmetry axis and the direction of the external field, which we’ll denote by Īø, and (b) the angular velocity of the precession, which we’ll denote by ωp = dφ/dt, as shown below. The J in the illustration below is the angular momentum of the object. Hence, if we’d imagine it to be an electron, then J would be the spin angular momentum only, not its orbital angular momentum—although the analysis would obviously be valid for the orbital and/or total angular momentum as well.

[Illustration: the precessing angular momentum J, the angle Īø, and the angular velocity ωp]

OK. Let’s look at what’s going on. The angular displacement – which is also, rather confusingly, referred to as the angle of precession – in the time interval Ī”t is, obviously, equal to Δφ = ωp·Δt. Now, looking at the geometry of the situation, and using the small-angle approximation for the sine, one can also see that Ī”J ā‰ˆ (J·sinĪø)·(ωp·Δt). In fact, going to the limit (i.e. for infinitesimally small Δφ and Ī”J), we can write:

dJ/dt = ωp·J·sinĪø

But the angular momentum cannot change if there’s no torque. In fact, the time rate of change of the angular momentum is equal to the torque. [You should look this up but, if you don’t want to do that, note that this is just the equivalent, for rotational motion, of the F = dp/dt law for linear motion.] Now, in my post on magnetic dipoles, I showed that the torque Ļ„ on a loop of current with magnetic moment μ in an external magnetic field B is equal to Ļ„ = μ×B. So the magnitude of the torque is equal to |Ļ„| = |μ|·|B|·sinĪø = μ·B·sinĪø. Therefore, ωp·J·sinĪø = μ·B·sinĪø and, hence,

ωp = μ·B/J

However, from the general μ/J = –g·(qe/2m) equation we derived in our previous post, we know that the magnitude of the μ/J ratio – for an atomic magnet, that is – must be equal to μ/J = g·qe/2m. So we get the formula we wanted to get here:

ωp = g·(qe/2m)·B

This equation says that the angular velocity of the precession is proportional to the magnitude of the external magnetic field, and that the constant of proportionality is equal to g·(qe/2m). It’s good to do the math and actually calculate the precession frequency fp = ωp/2Ļ€. It’s easy. We had calculated qe/2m already: it was equal to 1.6Ɨ10⁻¹⁹ C divided by 2·9.1Ɨ10⁻³¹ kg, so that’s 0.0879Ɨ10¹² C/kg or 0.0879Ɨ10¹² (C·m)/(N·s²), more or less. šŸ™‚ Now, g is dimensionless, and B is expressed in tesla: 1 T = (N·s)/(C·m), so we get the s⁻¹ dimension we want for a frequency. For g = 2 (so we look at the spin of the electron itself only), we get:

fp = ωp/2Ļ€ = 2·0.0879Ɨ10¹²/2Ļ€ ā‰ˆ 28Ɨ10⁹ cycles per second per tesla = 28 GHz/T

This is a number expressed per unit of the magnetic field strength B. Note that you’ll often see this number expressed as 1.4 megacycles per gauss, using the older gauss unit for magnetic field strength: 1 tesla = 10,000 gauss. For a nucleus, we get a somewhat less impressive number because the proton (or neutron) mass is so much bigger: it’s a number expressed in megacycles per tesla, indeed, and for a proton (i.e. a hydrogen nucleus), it’s about 42.58 MHz/T.
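
These numbers are easy to check numerically. Here’s a minimal Python sketch; the rounded constants, and the proton g-factor of about 5.586, are values I’m supplying myself rather than taking from the text:

```python
import math

qe = 1.602e-19   # elementary charge, C
me = 9.109e-31   # electron mass, kg
mp = 1.6726e-27  # proton mass, kg

def precession_freq(g, q, m, B=1.0):
    # f_p = omega_p/(2*pi) = g*(q/(2*m))*B/(2*pi)
    return g * (q / (2 * m)) * B / (2 * math.pi)

f_electron = precession_freq(g=2, q=qe, m=me)      # electron spin, g = 2
f_proton = precession_freq(g=5.586, q=qe, m=mp)    # proton, g of about 5.586

print(f"electron: {f_electron / 1e9:.1f} GHz/T")   # ~28 GHz/T
print(f"proton:   {f_proton / 1e6:.1f} MHz/T")     # ~42.6 MHz/T
```

So the two figures above – 28 GHz/T and 42.58 MHz/T – come out of the same one-line formula: only g and the mass change.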

Now, you may wonder about the numbers here. Are they astronomical? Maybe. Maybe not. It’s probably good to note that the strength of the magnetic field in medical MRI systems (magnetic resonance imaging systems) is only 1.5 to 3 tesla, so it’s a rather large unit. You should also note that the clock speed of the CPU in your laptop – so that’s the speed at which it executes instructions – is measured in GHz too, so perhaps it’s not so astronomic. I’ll let you judge. šŸ™‚

So… Well… That’s all nice. The key question, of course, is whether or not this classical view of the electron spinning around a proton is accurate, quantum-mechanically, that is. I’ll let Feynman answer that question provisionally:

“According to the classical theory, then, the electron orbits—and spins—in an atom should precess in a magnetic field. Is it also true quantum-mechanically? It is essentially true, but the meaning of the ā€œprecessionā€ is different. In quantum mechanics one cannot talk about the direction of the angular momentum in the same sense as one does classically; nevertheless, there is a very close analogy—so close that we continue to call it precession.”

To distinguish classical and quantum-mechanical precession, quantum-mechanical precession is usually referred to as Larmor precession, and the frequencies above are often referred to as Larmor frequencies. However, I should note that, technically speaking, the term Larmor frequency is actually reserved for the frequency I’ll describe in the next section. I should also note that the ωp = g·(qe/2m)·B formula is usually written, quite simply, as ωp = γ·B. Of course, the gamma is not the Lorentz factor here, but the so-called gyromagnetic ratio (also known as the magnetogyric ratio): γ = g·(qe/2m). Oh—just so you know: Sir Joseph Larmor was a British physicist and, yes, he developed all of the stuff we’re talking about here. šŸ™‚

At this point, you may wonder if and why all of the above is relevant. Well… There’s more than one answer to this question, but I’d recommend you start by reading the Wikipedia article on NMR spectroscopy. šŸ™‚ And then you should also read Feynman’s exposĆ© on the Rabi atomic or molecular beam method for determining the precession frequency. It’s really fascinating stuff, but you are sufficiently armed now to read those things for yourself, and so I’ll just move on. Indeed, there’s something else I need to talk about here, and that’s Larmor’s Theorem.

Larmor’s Theorem

We’ve been talking single electrons only so far. Now, you may fear that things become quite complicated when many electrons are involved and… Well… That’s true, of course. And then you may also think that things become even more complicated when external fields are involved, like that external magnetic field we introduced above, and that led our electrons to precess at extraordinary frequencies. Well… That’s not true. Here we get some help: Larmor proved a theorem that basically says that, if we can work out the motions of the electrons without the external field, the solution for the motions with the external field is the no-field solution with an added rotation about the axis of the field. More specifically, for an external magnetic field, the added rotation will have an angular frequency equal to:

ωL = (qe/2m)·B

So that’s the same formula as we found for the angular velocity of the precession if g = 1, so it’s very easy to remember. The ωL frequency, which is the precession frequency for g = 1, is referred to as the Larmor frequency. The proof of the above is remarkably easy, but… Well… I don’t want to copy Feynman here, so I’ll just refer you to the relevant Lecture. šŸ™‚
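
As a quick check, the Larmor frequency per tesla is just half the g = 2 figure we calculated above (rounded constants supplied by me):

```python
import math

qe = 1.602e-19  # elementary charge, C
me = 9.109e-31  # electron mass, kg

# Larmor frequency (g = 1) per unit field: f_L = (qe/(2*me))/(2*pi)
f_L = (qe / (2 * me)) / (2 * math.pi)
print(f"{f_L / 1e9:.1f} GHz/T")  # ~14 GHz/T, i.e. half of the 28 GHz/T spin value
```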

Diamagnetism

I guess it’s about time we relate all of what we learned so far to properties of matter we can relate to, and so that’s what I’ll do here. We’re not going to talk about ferromagnetism, i.e. the mechanism through which iron, nickel, cobalt and most of their alloys become permanent magnets: that mechanism is quite peculiar, and it deserves a post of its own. Here we’ll talk about the very weak quantum-mechanical magnetic effect – a thousand to a million times weaker than the effects in ferromagnetic materials – that occurs in all materials when they are placed in an external magnetic field.

While the effect is there in all materials, it’s stronger for some than for others. In fact, it’s usually so weak it is hard to detect, and so it’s usually demonstrated using elements for which the diamagnetic effect is somewhat stronger, like bismuth or antimony. The effect is demonstrated by suspending a piece of material in a non-uniform field, as illustrated below. The diamagnetic effect will cause a small displacement of the material, away from the high-field region, i.e. away from the pointed pole.

[Illustration: a piece of diamagnetic material suspended between the poles of a magnet, one of which is pointed]

I should immediately add that some materials, like aluminium, will actually be attracted to the pointed pole, but that’s because of yet another effect that not all materials share: paramagnetism. I’ll talk about that in another post, together with ferromagnetism. So… Diamagnetism: what is it?

The illustration below shows our spinning electron (q) once again. It also shows a magnetic field B but, unlike our analysis above, or the analysis in our previous post, we assume the external magnetic field is not just there. We assume it changes, because it’s been turned on or off—hopefully slowly: if not, we’d have eddy-current forces causing potentially strong impulses.

[Illustration: a charge q orbiting in a changing magnetic field B]

But so we’ve got some change in the magnetic flux, and so we know, because of Faraday or Maxwell – you choose šŸ™‚ – that we’ll have some circulation of E, i.e. the electric field. The magnetic flux is B times the surface area, and the circulation is the average tangential component of E times the length of the path. Because our model of the orbiting electron is so nice and symmetric, we can write Faraday’s Law here as:

E·2π·r = āˆ’d(B·π·r²)/dt ⇔ E = āˆ’(r/2)·dB/dt

A field implies a force and, therefore, a torque on the electron. The torque is equal to the force times the lever arm, so it’s equal to (āˆ’qe·E)·r = āˆ’qe·E·r. Of course, the torque is also equal to the rate of change of the angular momentum, so dJ/dt must equal:

dJ/dt = āˆ’qe·E·r = qe·(r/2)·(dB/dt)·r = (qe·r²/2)·(dB/dt)

Now, the assumption is that the field goes from zero to B, so Ī”B = B. Therefore, Ī”J must be equal to:

Ī”J = (qe·r²/2)·B

You should, in fact, derive this more formally, by integrating—but let’s keep things as simple as we can. šŸ™‚ What does this formula say, really? It’s the extra angular momentum from the ā€˜twist’ that’s given to the electrons as the field is turned on. This added angular momentum makes an extra magnetic moment which, because it’s an orbital motion, is just qe/2m times the added angular momentum, according to the μ = (qe/2m)·J formula we derived in our previous post. So we have:

Δμ = āˆ’(qe/2m)·ΔJ

The minus sign is there because of Lenz’s law: the induced moment opposes the magnetic field—and, yes, I know: it’s hard to keep track of all of the conventions involved here. :-/ In any case, combining the two formulas above, we get the following grand equation:

Δμ = āˆ’(qe²·r²/4m)·B

So we found that the induced magnetic moment is directly proportional to the magnetic field B, and opposing it. Now that is what explains why our piece of bismuth does what it does in that non-uniform magnetic field. Of course, you’ll say: why is it stronger for bismuth than for other materials? And what about aluminium, or paramagnetism in general? Well… Good questions, but we’ll tackle them in the next posts. šŸ™‚
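
To get a feel for how weak the effect is, here’s a small Python sketch of that Δμ formula; the orbit radius (I take the Bohr radius) and the comparison with the Bohr magneton are my own illustrative choices, not values from the text:

```python
qe = 1.602e-19    # elementary charge, C
me = 9.109e-31    # electron mass, kg
r = 0.529e-10     # m: Bohr radius, as an illustrative orbit size
B = 1.0           # T: a strong (MRI-like) field
mu_B = 9.274e-24  # J/T: Bohr magneton, for comparison

# Induced (opposing) moment: delta_mu = (qe^2 * r^2 / (4 * me)) * B
delta_mu = qe**2 * r**2 / (4 * me) * B
print(delta_mu)         # ~2e-29 J/T
print(delta_mu / mu_B)  # ~2e-6 of the Bohr magneton
```

That ratio of about one in a million matches the ā€˜a thousand to a million times weaker’ statement above.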

Let me conclude this post by copying Feynman’s little exposĆ© on why the phenomenon of diamagnetism is so particular. In fact, he notes that, because we’re talking about a piece of material here that can’t spin – so it’s held in place, so to say – we should have ā€œno magnetic effects whatsoeverā€. The reasoning is as follows:

[Feynman’s classical argument – image removed]

This is very interesting indeed. This classical theorem basically says that the energy of a system should not be affected by the presence of a magnetic field. However, we know magnetic effects, such as the diamagnetic effect, are there, so these effects are referred to as ‘quantum-mechanical’ effects indeed: they cannot be explained using classical theory only, even if all of what we wrote above used classical theory only.

I should also note another point: why do we need a non-homogeneous field? Well… The situation is comparable to what we wrote on the Stern-Gerlach experiment. If we had a homogeneous magnetic field, then we would only have a torque on all of the atomic magnets, but no net force in one or the other direction. There’s something else here too: you may think that the forces pointing towards and away from the pointed tip should cancel each other out, so there should actually be no net movement of the material at all! Feynman’s analysis works for one atom, indeed, but does it still make sense if we look at the whole piece of material? It does, because we’re talking about an induced magnetic moment that opposes the field, regardless of the orientation of the magnetic moment of the individual atoms in the piece of material. So, even if the individual atoms have opposite magnetic moments, the extra induced magnetic moment will point in the same direction for all. So that solves that issue. However, it does not address Feynman’s own critical remark in regard to the supposed ā€˜impossibility’ of diamagnetism in classical mechanics.

But I’ll let you think about this, and sign off for today. šŸ™‚ I hope you enjoyed this post.

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Taking the magic out of God’s number: some additional reflections

Note: I have published a paper that is very coherent and fully explains this so-called God-given number. There is nothing magical about it. It is just a scaling constant. Check it out: The Meaning of the Fine-Structure Constant. No ambiguity. No hocus-pocus.

Jean Louis Van Belle, 23 December 2018

Original post:

In my previous post, I explained why the fine-structure constant α is not a ā€˜magical’ number, even if it relates all fundamental properties of the electron: its mass, its energy, its charge, its radius, its photon scattering cross-section (i.e. the Bohr radius, or the size of the atom really) and, finally, the coupling constant for photon-electron interactions. The key to such understanding of α was the model of an electron as a tiny ball of charge. As such, we have two energy formulas for it. One is the energy that’s needed to assemble the charge from infinitely dispersed infinitesimal charges, which we denoted as Uelec. The other formula is the energy of the field of the tiny ball of charge, which we denoted as Eelec.

The formula for Eelec is calculated using the formula for the field momentum of a moving charge and, using the m = E/c² mass-energy equivalence relationship, is equivalent to the electromagnetic mass. We went through the derivation in our previous post, so let me just jot down the result:

Eelec = (2/3)·(e²/a)

The second formula depends on what ball of charge we’re thinking of, because the formulas for a charged sphere and a spherical shell of charge are different: both have the same structure as the relationship above (so the energy is also proportional to the square of the electron charge and inversely proportional to the radius a), but the constant of proportionality is different. For a sphere of charge, we write:

Uelec = (3/5)·(e²/a)

For a spherical shell of charge, we write:

Uelec = (1/2)·(e²/a)

To compare the formulas, you need to note that the square of the electron charge e in the formula for the field energy is equal to e² = qe²/4πε0 = ke·qe². So we multiply the square of the actual electron charge by the Coulomb constant ke = 1/4πε0. As you can see, the three formulas have exactly the same form then. It’s just the proportionality constant that’s different: it’s 2/3, 3/5 and 1/2 respectively. It’s interesting to quickly reflect on the dimensions here: [ke] ā‰ˆ 9Ɨ10⁹ N·m²/C², so e² is expressed in N·m². That makes the units come out alright, as we divide by a (so that’s in meter) and so we get the energy in joule (which is newton·meter). In fact, now that we’re here, let’s quickly calculate the value of e²: it’s that ke·qe² product, so it’s equal to 2.3Ɨ10⁻²⁸ N·m². We can quickly check this value because we know that the classical electron radius is equal to:

r0 = e²/(me·c²)

So we divide 2.3Ɨ10⁻²⁸ N·m² by mec² ā‰ˆ 8.2Ɨ10⁻¹⁓ J, and we get r0 ā‰ˆ 2.82Ɨ10⁻¹⁵ m. So we’re spot on! Why did I do this check? Not really to check what I wrote: it’s more to show what’s going on. We’ve got yet another formula relating the energy and the radius of an electron here, so now we have three. In fact, we have more, because the formula for Uelec depends on the finer details of our model for the electron (sphere versus shell, uniform versus non-uniform distribution):
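
That check is easily scripted. A minimal Python sketch, with rounded constants supplied by me:

```python
ke = 8.988e9    # Coulomb constant, N·m²/C²
qe = 1.602e-19  # elementary charge, C
me = 9.109e-31  # electron mass, kg
c = 2.998e8     # speed of light, m/s

e2 = ke * qe**2        # 'e squared' = ke·qe², in N·m²
r0 = e2 / (me * c**2)  # classical electron radius, m

print(e2)  # ~2.3e-28 N·m²
print(r0)  # ~2.82e-15 m
```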

  1. Eelec = (2/3)·(e²/a): This is the formula for the energy of the field, so we may call it the external energy.
  2. Uelec = (3/5)·(e²/a), or Uelec = (1/2)·(e²/a): This is the energy needed to assemble our electron, so we might, perhaps, call it its internal energy. The first formula assumes our electron is a uniformly charged sphere. The second assumes all charge sits on the surface of the sphere. If we drop the assumption of the charge having to be uniformly distributed, we’ll find yet another formula.
  3. mec² = e²/r0: This is the energy associated with the so-called classical electron radius (r0) and the electron’s rest mass (me).
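
Equating each of these energies to the rest energy mec² gives a slightly different radius each time: the proportionality coefficient times the classical electron radius. A quick Python sketch (the rounded constants are mine):

```python
ke, qe, me, c = 8.988e9, 1.602e-19, 9.109e-31, 2.998e8
e2 = ke * qe**2
r0 = e2 / (me * c**2)  # classical electron radius: coefficient 1

# Solve coeff·(e²/a) = me·c² for a, for each proportionality constant:
radii = {coeff: coeff * r0 for coeff in (2/3, 3/5, 1/2)}
for coeff, a in radii.items():
    print(f"coefficient {coeff:.3f} -> a = {a:.3e} m")
```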

In our previous posts, we assumed the last equation was the right one. Why? Because it’s the one that’s been verified experimentally. The discrepancies between the various proportionality coefficients – i.e. the difference between 2/3 and 1, basically – are to be explained by the binding forces within the electron, without which the electron would just ā€˜explode’, as the French physicist and polymath Henri PoincarĆ© famously put it. Indeed, if the electron is a little ball of negative charge, the repulsive forces between its parts should rip it apart. So we will not say anything more about this. You can have fun yourself by googling all the various theories that try to model these binding forces. [I may do the same some day, but now I’ve got other priorities: I want to move to Feynman’s third volume of Lectures, which is devoted to quantum physics only, so I look very much forward to that.]

In this post, I just wanted to reflect once more on what constants are really fundamental and what constants are somewhat less fundamental. From all that I wrote in my previous post, I said there were three:

  1. The fine-structure constant α, which is a dimensionless number.
  2. Planck’s constant h, whose dimension is joule·second, so that’s the dimension of action.
  3. The speed of light c, whose dimension is that of a velocity.

The three are related through the following expression:

α = e²/(ħ·c)

This is an interesting expression. Let’s first check its dimension. We already explained that e² is expressed in N·m². That’s rather strange, because it means the dimension of e itself is N^(1/2)·m: what’s the square root of a force of one newton? In fact, to interpret the formula above, it’s probably better to re-write e² as e² = qe²/4πε0 = ke·qe². That shows you how the electron charge and Coulomb’s constant are related. Of course, they are part and parcel of one and the same force law: Coulomb’s law. We don’t need anything else, except for relativity theory, because we need to explain the magnetic force as well—and that we can do because magnetism is just a relativistic effect. Think of the field momentum indeed: the magnetic field comes into play only when we start to move our electron. The relativity effect is captured by c in that formula for α above. As for ħ, ħ = h/2Ļ€ comes with the E = h·f equation, which links us to the electron’s Compton wavelength Ī» through the de Broglie relation Ī» = h/p.
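
The relation is easy to verify numerically. A minimal Python sketch (the rounded constants are my own):

```python
ke = 8.988e9      # Coulomb constant, N·m²/C²
qe = 1.602e-19    # elementary charge, C
hbar = 1.055e-34  # reduced Planck constant, J·s
c = 2.998e8       # speed of light, m/s

e2 = ke * qe**2          # N·m²
alpha = e2 / (hbar * c)  # dimensionless: (N·m²)/((J·s)·(m/s)) cancels out

print(alpha)      # ~0.0073
print(1 / alpha)  # ~137
```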

The point is: we should probably not look at α as a ā€˜fundamental physical constant’. It’s e² that’s the third fundamental constant, besides h and c. Indeed, it’s from e² that all the rest follows: the electron’s internal energy, its external energy, and its radius, and then all the rest by combining stuff with other stuff.

Now, we took the magic out of α by doing what we did in the previous posts, and that’s to combine stuff with other stuff, and so now you may think I am putting the magic back in with that formula for α, which seems to define α in terms of the three mentioned ā€˜fundamental’ constants. That’s not the case: this relation comes out of all of the other relationships we found, and so it’s nothing new really. It’s actually not a definition of α: it just does what it does, and that’s to relate α to the ā€˜fundamental’ physical constants behind it.

So… No new magic. In fact, I want to close this post by taking away even more of the magic. If you read my previous post, you’ll remember I said that α was ā€˜God’s cut-off factor’ šŸ™‚ ensuring our energy functions do not blow up, but I also said it was impossible to say why he chose 0.00729735256 as the cut-off factor. The question is actually easily answered by thinking about those two formulas we had for the internal and external energy respectively. Let’s re-write them in natural units and, temporarily, with two different subscripts for α, so we write:

  1. Eelec = αe/r0: This is the formula for the energy of the field.
  2. Uelec = αu/r0: This is the energy needed to assemble our electron.

Both energies are determined by the above-mentioned laws, i.e. Coulomb’s Law and the theory of relativity, so α has got nothing to do with that. However, both energies have to be the same, and so αe has to be equal to αu. In that sense, α is, quite simply, a proportionality constant that achieves that equality. Now that explains why we can derive α from the three other constants which, as mentioned above, are probably more fundamental. In fact, we’ve got only three degrees of freedom here, so if we choose c, h and e as ā€˜fundamental’, then α isn’t any more.

The underlying deep question behind it all is why those two energies should be equal. Why would our electron have some internal energy if it’s elementary? The answer to that question is: because it has some non-zero radius, and it has some non-zero radius because we don’t want our formula for the field energy (or the field momentum) to blow up. Now, if it has some radius, then it has to have some internal energy.

You’ll say: that makes sense, but it doesn’t answer the question. Why would it have internal energy, with or without a zero radius? If an electron is an elementary particle, then it’s really elementary, isn’t it? And so then we shouldn’t try to ā€˜assemble’ it from an infinite number of infinitesimally small charges. You’re right, and here we can also note that the fact that the electron doesn’t blow up is firm evidence it’s very elementary indeed.

I should also note that Feynman actually doesn’t talk about the energy that’s needed to assemble a charge: he gets his Uelec = (1/2)·(e²/a) by calculating the external field energy for a spherical shell of charge, and he sticks to it—presumably because it’s the same field for a uniform or non-uniform sphere of charge. He only notes there has to be some radius because, if not, the formula he uses blows up, indeed. So – who knows? – perhaps he doesn’t quite believe that formula for the internal energy is relevant either.

So perhaps there is no internal energy indeed. Perhaps there’s just the energy of the field. So… Well… I can’t say much about this… Except… Well… Perhaps just one more thing. Let me note something that, I hope, you noticed as well: the ke·qe² product is the numerator in Coulomb’s Law itself. You also know that energy equals force times distance. So if we divide both sides by r0, we get Coulomb’s Law itself: Felec = ke·qe²/r0². The only thing is: what’s the distance? It’s one charge only, and there is no distance between one charge, is there? Well… Yes and no. I have been thinking that the requirement of the internal and external energies being equal resembles the statement that the forces between two charges are equal and opposite. That ties in with the idea of the internal energy itself: remember we were basically talking about forces between infinitesimally small elements of charge within the electron itself? So r0 is, perhaps, some average distance or so. There must be some way of thinking of it like that. But… Well… Which one exactly?

This kind of reflection may not make sense. Who knows? I obviously need to think all of this through and so this post is, indeed, just a bunch of reflections for which I will have more time later—hopefully. šŸ™‚ Perhaps we’re all just pushing the matter too far. Perhaps we should just accept that the external energy has that 2/3 factor but that the actual energy of the electron should also include the equivalent energy of some binding force that holds the electron together. Well… In any case. That’s all I am going to do on this extremely complicated matter. It’s time to move on indeed! So the point to take home here is probably just this:

  1. When calculating the radius of an electron using classical theory, we get in trouble: not only do we find different radii, but the radii that we find do not respect the E = mec² law. It’s only the mec² = e²/r0 relation that’s relativistically correct.
  2. That suggests the electron also has some non-electric mass, which is referred to in terms of ā€˜binding forces’ or ā€˜PoincarĆ© stresses’, but which remains to be explained convincingly.
  3. All of this shouldn’t surprise us: for all we know, the electron is something fuzzy. šŸ™‚

So my next posts will focus on the ā€˜essentials’ preparing for Feynman’s Volume on quantum mechanics. Those ā€˜essentials’ will still involve some classical stuff but, as you will see, even more contradictions, that – hopefully! – will then be solved in the quantum-mechanical picture of it all. šŸ™‚

Some content on this page was disabled on June 20, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Taking the magic out of God’s number

Note: I have published a paper that is very coherent and fully explains this so-called God-given number. There is nothing magical about it. It is just a scaling constant. Check it out: The Meaning of the Fine-Structure Constant. No ambiguity. No hocus-pocus.

Jean Louis Van Belle, 23 December 2018

Original post:

I think the post scriptum to my previous post is interesting enough to separate it out as a piece of its own, so let me do that here. You’ll remember that we were trying to find some kind of a model for the electron, picturing it like a tiny little ball of charge, and then we just applied the classical energy formulas to it to see what comes out. The key formula is the integral that gives us the energy that goes into assembling a charge. It was the following thing:

U = (1/2)·∫∫ [ρ(1)·ρ(2)/(4πε0·r12)]·dV1·dV2

This is a double integral which we simplified in two stages, so we’re looking at an integral within an integral really, but we can substitute the integral over the ρ(2)·dV2 product by the formula we got for the potential, so we write that as Φ(1), and so the integral above becomes:

U = (1/2)·∫ ρ(1)·Φ(1)·dV1

Now, this integral integrates the ρ(1)·Φ(1)·dV1 product over all of space, so that’s over all points in space, and so we just dropped the index and wrote the whole thing as the integral of ρ·Φ·dV over all of space:

U = (1/2)·∫ ρ·Φ·dV

We then established that this integral is mathematically equivalent to the following expression:

U = (ε0/2)·∫ E•E dV

So this integral is actually quite simple: it just integrates E•E = E² over all of space. The illustration below shows E as a function of the distance r for a sphere of radius R filled uniformly with charge.

[Illustration: E as a function of r for a uniformly charged sphere of radius R]

So the field (E) goes as r for r ≤ R and as 1/r² for r ≄ R. So, for r ≄ R, the integral will have (1/r²)² = 1/r⁓ in it. Now, you know that the integral of some function is the surface under the graph of that function. Look at the 1/r⁓ function below: it blows up as r goes to zero. That’s where the problem is: there needs to be some kind of cut-off, because that integral will effectively blow up when the radius of our little sphere of charge gets ā€˜too small’. So that makes it clear why it doesn’t make sense to use this formula to try to calculate the energy of a point charge. It just doesn’t make sense to do that.

[Illustration: the 1/r⁓ function]

In fact, the need for a ā€˜cut-off factor’ so as to ensure our energy function doesn’t ā€˜blow up’ is not because of the exponent in the 1/r⁓ expression: the need is also there for any 1/rⁿ relation, as illustrated below. All 1/rⁿ functions have the same pivot point, as you can see from the simple illustration below. So, yes, we cannot go all the way to zero from there when integrating: we have to stop somewhere.
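
To see the blow-up concretely, here’s a trivial Python sketch (my own illustration) using the closed form ∫ε^∞ r⁻ⁿ dr = ε^(1āˆ’n)/(nāˆ’1), valid for n > 1: as the lower cut-off ε shrinks, the 1/r⁓ integral grows without bound.

```python
def tail_integral(n, eps):
    # Closed form of the integral of 1/r^n from r = eps to infinity (n > 1):
    # it equals eps^(1 - n)/(n - 1), which diverges as eps -> 0.
    return eps**(1 - n) / (n - 1)

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, tail_integral(4, eps))  # grows like 1/eps³: no cut-off, no finite energy
```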

[Illustration: the 1/rⁿ functions]

So what’s the ā€˜cut-off point’? What’s ā€˜too small’ a radius? Let’s look at the formula we got for our electron as a shell of charge (so the assumption here is that the charge is uniformly distributed on the surface of a sphere with radius a):

Uelec = (1/2)·(e²/a)

So we’ve got an even simpler formula here: it’s just a 1/r relation (a is r in this formula), not 1/r⁓. Why is that? Well… It’s just the way the math turns out: we’re integrating over volumes, and so that involves an r³ factor, and so it all simplifies to 1/r, and so that gives us this simple inversely proportional relationship between U and r, i.e. a, in this case. šŸ™‚ I copied the detail of Feynman’s calculation in my previous post, so you can double-check it. It’s quite wonderful, really. Look at it again: we have a very simple inversely proportional relationship between the radius of our electron and its energy as a sphere of charge. We could write it as:

Uelec = α/a, with α = e²/2

Still… We need the 'cut-off point'. Also note that, as I pointed out, we don't necessarily need to assume that the charge in our little ball of charge (i.e. our electron) sits on the surface only: if we'd assume it's a uniformly charged sphere of charge, we'd just get another constant of proportionality: our 1/2 factor would become a 3/5 factor, so we'd write: Uelect = (3/5)·e²/a. But we're not interested in finding the right model here. We know the Uelect = (3/5)·e²/a formula gives us a value for a that differs from the classical electron radius by a numerical factor only. That's not so bad and so let's go along with it. šŸ™‚
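We can make the 'blow-up' tangible with a quick numerical check. The sketch below (the helper name shell_field_energy and the grid sizes are mine, not Feynman's) integrates the field energy density (ε0/2)·E² of a Coulomb field over all space outside a radius a, and confirms both the closed form U = e²/(2a) and the 1/a divergence as the radius shrinks:

```python
import math

# Constants (SI). As in the text, e2 stands for qe²/(4πε0).
qe = 1.602176634e-19          # elementary charge, C
eps0 = 8.8541878128e-12       # vacuum permittivity, C²/(N·m²)
e2 = qe**2 / (4 * math.pi * eps0)

def shell_field_energy(a, rmax=1e-6, steps=100_000):
    """Integrate the energy density (ε0/2)·E² of a Coulomb field over
    all space outside radius a, i.e. ∫ (e2/2)/r² dr from a to rmax,
    using a midpoint rule on a logarithmic grid (r = a·e^t)."""
    T = math.log(rmax / a)
    dt = T / steps
    total = 0.0
    for i in range(steps):
        r = a * math.exp((i + 0.5) * dt)
        total += (e2 / 2) / r * dt   # (e2/2)/r² · dr, with dr = r·dt
    return total

a = 2.8179403e-15                     # classical electron radius, m
print(shell_field_energy(a))          # ā‰ˆ e2/(2a) ā‰ˆ 4.1Ɨ10⁻¹⁓ J
print(shell_field_energy(a / 10))     # ten times more: U ∝ 1/a
```

Halve the radius and the energy doubles; let a → 0 and the integral diverges, which is the whole point of the cut-off discussion above.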

We’re going to look at the simpleĀ structureĀ of this relation, and all of its implications. The simple equation above says that the energy of our electron is (a) proportional to the square of its charge and (b) inversely proportional to its radius. Now,Ā thatĀ is a very remarkable result. In fact,Ā we’ve seen something like this before, and we were astonished.Ā We saw it when we were discussing the wonderful properties of that magical number, theĀ fine-structure constant, which we also denoted by α. However, because we used α already, I’ll denote the fine-structure constant as αe here, so you don’t get confused. You’ll remember thatĀ the fine-structure constant is a God-like number indeed: it links allĀ of the fundamental properties of the electron, i.e. its charge, its radius, its distance to the nucleus (i.e. the Bohr radius), its velocity, its mass (and, hence, its energy), its de BroglieĀ wavelength. Whatever: all these physical constants are all related through the fine-structure constant.Ā 

In my various posts on this topic, I’ve repeatedly said that, but I never showed why it’s true, and so it was a very magical number indeed. I am going to take some of the magic out now. Not too much but… Well… You can judge for yourself how much of the magic remains after I am done here. šŸ™‚

So, at this stage of the argument, α can be anything, but αe cannot, of course. It's just that magical number out there, which relates everything to everything: it's the God-given number we don't understand, or didn't understand, I should say. Past tense. Indeed, we're going to get some understanding here because we know that one of the many expressions involving αe was the following one:

me = αe/re

This says that the mass of the electron is equal to the ratio of the fine-structure constant and the electron radius. [Note that we express everything in natural units here, so that's Planck units. For the detail of the conversion, please see the relevant section in one of my posts on this and other stuff.] In fact, the U = (3/5)·e²/a and me = αe/re relations look exactly the same, because one of the other equations involving the fine-structure constant was: αe = eP². So we've got the square of the charge here as well! Indeed, as I'll explain in a moment, the difference between the two formulas is just a matter of units.

Now, mass is equivalent to energy, of course: it's just a matter of units, so we can equate me with Ee (this amounts to expressing the energy of the electron in a kg unit—a bit weird, but OK) and so we get:

Ee = αe/re

So there we have it: the fine-structure constant αe is Nature's 'cut-off' factor, so to speak. Why? Only God knows. šŸ™‚ But it's now (fairly) easy to see why all the relations involving αe are what they are. As I mentioned already, we also know that αe is the square of the electron charge expressed in Planck units, so we have:

αe = eP² and, therefore, Ee = eP²/re

Now, you can check for yourself: it's just a matter of re-expressing everything in standard SI units, and relating eP² to e², and it should all work: you should get the Eelect = (2/3)·e²/a expression. So… Well… At least this takes some of the magic out of the fine-structure constant. It's still a wonderful thing, but so you see that the fundamental relationship between (a) the energy (and, hence, the mass), (b) the radius and (c) the charge of an electron is not something God-given. What's God-given are Maxwell's equations, and so the Ee = αe/re = eP²/re relation is just one of the many wonderful things that you can get out of them.

So we found God’s ‘cut-off factor’ šŸ™‚ It’s equal to αeĀ ā‰ˆ 0.0073 = 7.3Ɨ10āˆ’3. So 7.3 thousands of… What? Well… Nothing. It’s just a pure ratio between the energy and the radius of an electron (if both are expressed in Planck units, of course). And so it determines the electron charge (again, expressed in Planck units). Indeed, we write:

eP = √αe

Really? Yes. Just work out the numbers:

eP = √αe ā‰ˆ √0.0073 Ɨ (1.9Ɨ10⁻¹⁸ coulomb) ā‰ˆ 1.6Ɨ10⁻¹⁹ C

Just re-check it with all the known decimals: you'll see it's bang on. Let's look at the Ee = me = αe/re ratio once again. What's the meaning of it? Let's first calculate the values of re and me, i.e. the electron radius and electron mass expressed in Planck units: we divide the classical electron radius by the Planck length, and the electron mass by the Planck mass, so we get the following:

re ā‰ˆ (2.81794Ɨ10⁻¹⁵ m)/(1.6162Ɨ10⁻³⁵ m) = 1.7435Ɨ10²⁰

me ā‰ˆ (9.1Ɨ10⁻³¹ kg)/(2.17651Ɨ10⁻⁸ kg) = 4.18Ɨ10⁻²³

αe = (4.18Ɨ10⁻²³)·(1.7435Ɨ10²⁰) ā‰ˆ 0.0073
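A minimal sketch to verify this product (the variable names are mine; the Planck-unit values are the ones quoted above):

```python
# Planck length and mass as quoted in the text; electron values are CODATA-style.
l_planck = 1.6162e-35        # m
m_planck = 2.17651e-8        # kg
r_e = 2.81794e-15            # classical electron radius, m
m_e = 9.109e-31              # electron mass, kg

r_planckunits = r_e / l_planck        # ā‰ˆ 1.74Ɨ10²⁰
m_planckunits = m_e / m_planck        # ā‰ˆ 4.19Ɨ10⁻²³
alpha = m_planckunits * r_planckunits
print(alpha)                          # ā‰ˆ 0.0073
```

The product is exact in theory: re = αħ/(mec), and with ħ = c = 1 in Planck units that reduces to me·re = α.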

It works like a charm, but what does it mean? Well… It's just a ratio between two physical quantities, and the scale you use to measure those quantities matters very much. We've explained that the Planck mass is a rather large unit at the atomic scale and, therefore, it's perhaps not quite appropriate to use it here. In fact, out of the many interesting expressions for αe, I should highlight the following one:

αe = e²/(ħ·c) ā‰ˆ (1.60217662Ɨ10⁻¹⁹ C)²/(4πε0·[(1.054572Ɨ10⁻³⁓ N·m·s)·(2.998Ɨ10⁸ m/s)]) ā‰ˆ 0.0073 once more šŸ™‚

Note that the e here is shorthand: e² is actually equal to qe²/(4πε0), which is what I am using in the formula. I know that's confusing, but it is what it is. As for the units, it's a bit tedious to write it all out, but you'll get there. Note that ε0 ā‰ˆ 8.8542Ɨ10⁻¹² C²/(N·m²) so… Well… All the units do cancel out, and we get a dimensionless number indeed, which is what αe is.

The point is: this expression links αe to the de Broglie relation (p = h/Ī»), with Ī» the wavelength that's associated with the electron. Of course, because of the Uncertainty Principle, we know we're talking some wavelength range really, so we should write the de Broglie relation as Ī”p = h·Δ(1/Ī»). Now, that, in turn, allows us to try to work out the Bohr radius, which is the other 'dimension' we associate with an electron. Of course, now you'll say: why would you do that? Why would you bring in the de Broglie relation here?

Well… We’re talking energy, and so we have the Planck-EinsteinĀ relation first: the energy of some particle can always be written as the product of hĀ and some frequencyĀ f: E = hĀ·f. The only thing thatĀ de Broglie relation adds is the Uncertainty Principle indeed: the frequencyĀ fĀ will be some frequency range, associated with someĀ momentumĀ range, and so that’s what the Uncertainty Principle really says. I can’t dwell too much on that here, because otherwise this post would become a book. šŸ™‚ For more detail, you can check out one of my many posts on the Uncertainty Principle. In fact, the one I am referring to here has Feynman’s calculation of the Bohr radius, so I warmly recommend you check it out. The thrust of the argument is as follows:

  1. If we assume that (a) an electron takes some space – which I'll denote by r šŸ™‚ – and (b) that it has some momentum p because of its mass m and its velocity v, then the Ī”xΔp = ħ relation (i.e. the Uncertainty Principle in its roughest form) suggests that the orders of magnitude of r and p should be related in the very same way. Hence, let's just boldly write r ā‰ˆ ħ/p and see what we can do with that.
  2. We know that the kinetic energy of our electron equals mv²/2, which we can write as p²/2m, so we get rid of the velocity factor. Substituting our p ā‰ˆ ħ/r conjecture, we get K.E. = ħ²/(2mr²). So that's a formula for the kinetic energy. Next is the potential energy.
  3. The formula for the potential energy is U = q₁q₂/(4πε0r₁₂). Now, we're actually talking about the size of an atom here, so one charge is the proton (+e) and the other is the electron (–e), so the potential energy is U = P.E. = –e²/(4πε0r), with r the 'distance' between the proton and the electron—so that's the Bohr radius we're looking for!
  4. We can now write the total energy (which I'll denote by E, but don't confuse it with the electric field vector!) as E = K.E. + P.E. = ħ²/(2mr²) – e²/(4πε0r). Now, the electron (whatever it is) is, obviously, in some kind of equilibrium state. Why is that obvious? Well… Otherwise our hydrogen atom wouldn't or couldn't exist. šŸ™‚ Hence, it's in some kind of energy 'well' indeed, at the bottom. Such an equilibrium point 'at the bottom' is characterized by the derivative (with respect to whatever variable) being equal to zero. Now, the only 'variable' here is r (all the other symbols are physical constants), so we have to solve dE/dr = 0. Writing it all out yields: dE/dr = –ħ²/(mr³) + e²/(4πε0r²) = 0 ⇔ r = 4πε0ħ²/(me²)
  5. We can now put the values in: r = 4πε0ħ²/(me²) = [(1/(9Ɨ10⁹) C²/(N·m²))·(1.055Ɨ10⁻³⁓ J·s)²]/[(9.1Ɨ10⁻³¹ kg)·(1.6Ɨ10⁻¹⁹ C)²] = 53Ɨ10⁻¹² m = 53 picometer (pm)

Done. We’re right on the spot.Ā The Bohr radius is, effectively, about 53 trillionthsĀ of a meter indeed!

Phew!

Yes… I know… Relax. We’re almost done. You should now be able to figure out why the classical electron radius and the Bohr radius can also be related to each other through the fine-structure constant. We write:

me = α/re = α/(α²·r) = 1/(α·r)

So we get that α/re = 1/(α·r) and, therefore, re/r = α², which explains why α is also related to the so-called junction number, or coupling constant, for an electron-photon coupling (see my post on the quantum-mechanical aspects of the photon-electron interaction). It gives a physical meaning to the probability (which, as you know, is the absolute square of the probability amplitude) in terms of the chance of a photon actually 'hitting' the electron as it goes through the atom. Indeed, the ratio of the Thomson scattering cross-section and the Bohr size of the atom should be of the same order as re/r, and so that's α².
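The re/r = α² claim is easy to check with the numbers themselves (a quick sketch; variable names are mine):

```python
alpha = 7.2973525693e-3      # fine-structure constant
a0 = 5.29177e-11             # Bohr radius, m (the r in the text)
r_e = alpha**2 * a0          # α² times the Bohr radius
print(r_e)                   # ā‰ˆ 2.82Ɨ10⁻¹⁵ m: the classical electron radius
```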

[Note: To be fully correct and complete, I should add that the coupling constant itself is not α² but √α = eP. Why do we have this square root? You're right: the fact that the probability is the absolute square of the amplitude explains one square root (√α² = α), but not two. The thing is: the photon-electron interaction consists of two things. First, the electron sort of 'absorbs' the photon, and then it emits another one, that has the same or a different frequency depending on whether or not the 'collision' was elastic. So if we denote the coupling constant as j, then the whole interaction will have a probability amplitude equal to j². In fact, the value which Feynman uses in his wonderful popular presentation of quantum mechanics (The Strange Theory of Light and Matter) is āˆ’Ī± ā‰ˆ āˆ’0.0073. I am not quite sure why the minus sign is there. It must be something with the angles involved (the emitted photon will not be following the trajectory of the incoming photon) or, else, with the special arithmetic involved in boson-fermion interactions (we add amplitudes when bosons are involved, but subtract amplitudes when it's fermions interacting). I'll probably find out once I am through Feynman's third volume of Lectures, which focuses on quantum mechanics only.]

Finally, the last bit of unexplained 'magic' in the fine-structure constant is that it (and I've started to write it as α again, instead of αe) also gives us the (classical) relative speed of an electron, so that's its speed as it orbits around the nucleus (according to the classical theory, that is), so we write:

α = v/c = β

I should go through the motions here – I'll probably do so in the coming days – but you can see we must be able to get it out somehow from all we wrote above. See how powerful our Uelect ∼ e²/a relation really is? It links the electron's charge, its radius and its energy, and it's all we need to get all the rest out of it: its mass, its momentum, its speed and – through the Uncertainty Principle – the Bohr radius, which is the size of the atom.
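For the record, the α = v/c = β claim is just one multiplication away (a sketch; I'm only evaluating the stated relation, not deriving it):

```python
alpha = 7.2973525693e-3      # fine-structure constant
c = 2.99792458e8             # speed of light, m/s

v = alpha * c                # classical orbital speed of the electron
print(v)                     # ā‰ˆ 2.19Ɨ10⁶ m/s
print(v / c)                 # = α = β: comfortably non-relativistic
```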

We’ve come a long way. This is truly a milestone. We’ve taken the magic out of God’s number—to some extent at least. šŸ™‚

You’ll have one last question, of course: if proportionality constants are all about theĀ scaleĀ in which weĀ measureĀ the physical quantities on either side of an equation, is there some way the fine-structure constant would come out differently? That’s the same as asking: what if we’d measure energy in units that are equivalent to the energy of an electron, and the radius of our electron just as… Well… What if we’d equate our unit of distance with the radius of the electron, so we’d write re = 1? What would happen to α? Well…Ā I’ll let you figure that one out yourself. I am tired and so I should go to bed now. šŸ™‚

[…] OK. OK. Let me tell you. It's not that simple here. All those relationships involving α, in one form or the other, are very deep. They relate a lot of stuff to a lot of stuff, and we can appreciate that only when doing a dimensional analysis. A dimensional analysis of Ee = αe/re = eP²/re yields [eP²/re] = C²/m on the right-hand side and [Ee] = J = N·m on the left-hand side. How can we reconcile both? The coulomb is an SI base unit, so we can't 'translate' it into something with N and m. [To be fully correct, for some reason, the ampere (i.e. coulomb per second) was chosen as the SI base unit, but they're interchangeable in regard to their place in the international system of units: they can't be reduced.] So we've got a problem. Yes. That's where we sort of 'smuggled' the 4πε0 factor in when doing our calculations above. That ε0 constant is, obviously, not 'as fundamental' as c or α (just think of the c⁻² = ε0μ0 relationship to understand what I mean here) but, still, it was necessary to make the dimensions come out alright: we need the reciprocal dimension of ε0, i.e. (N·m²)/C², to make the dimensional analysis work. We get: (C²/m)·(N·m²)/C² = N·m = J, i.e. joule, so that's the unit in which we measure energy or – using the E = mc² equivalence – mass, which is the aspect of energy emphasizing its inertia.

So the answer is: no. Changing units won't change alpha. So all that's left to do is play with it now. Let me first plot Ee = me = αe/re = 0.00729735256/re:

graph 3

Unsurprisingly, we find the pivot point of this curve is at the intersection of the diagonal and the curve itself, so that's at the (0.00729735256, 0.00729735256) point, where the slopes are ±1, i.e. plus or minus unity. What does this show? Nothing much. What? I can hear you: I should be excited because… Well… Yes! Think of it. If you would have to choose a cut-off point, you'd choose this one, wouldn't you? šŸ™‚ Sure, you're right. How exciting! Let me show you. Look at it! It proves that God thinks in terms of logarithms. He has chosen α such that ln(E) = ln(α/r) = ln α – ln r = 0, so ln α = ln r and, therefore, α = r. šŸ™‚

Huh? Excuse me?

I am sorry. […] Well… I am not, of course… šŸ™‚ I just wanted to illustrate the kind of exercise some people are tempted to do. It's no use. The fine-structure constant is what it is: it sort of summarizes an awful lot of formulas. It basically shows what Maxwell's equations imply in terms of the structure of an atom defined as a negative charge orbiting around some positive charge. It shows we can calculate everything as a function of something else, and that's what the fine-structure constant tells us: it relates everything to everything. However, when everything is said and done, the fine-structure constant shows us two things:

  1. Maxwell’s equations are complete: we canĀ construct a complete model of the electron and the atom, which includes: the electron’s energy and mass, its velocity, its own radius, and the radius of the atom. [I might have forgotten one of the dimensions here, but you’ll add it. :-)]
  2. God doesn’t want our equations to blow up. Our equations are all correct but, in reality, there’s a cut-off factor that ensures we don’t go to the limit with them.

So the fine-structure constant anchors our world, so to speak. In other words: of all the worlds that are possible, we live in this one.

[…] It’s pretty good as far as I am concerned. Isn’t it amazing that our mind is able to justĀ graspĀ things like that? I know my approach here is pretty intuitive, and with ‘intuitive’, I mean ‘not scientific’ here. šŸ™‚ Frankly, I don’t like the talk about physicists “looking into God’s mind.” I don’t think that’s what they’re trying to do. I think they’re just trying to understand the fundamentalĀ unityĀ behind it all. And that’s religion enough for me. šŸ™‚

So… What’s the conclusion? Nothing much. We’ve sort of concluded our description of the classical world… Well… Of its ‘electromagnetic sector’ at least. šŸ™‚ That sector can be summarized in Maxwell’s equations, which describe an infinite world of possible worlds. However, God fixed three constants:Ā h,Ā cĀ and α. So we live in a world that’s defined by this Trinity of fundamental physical constants. Why is it not two, or four?

My gut instinct tells me it's because we live in three dimensions, and so there are three degrees of freedom really. But what about time? Time is the fourth dimension, isn't it? Yes. But time is symmetric in the 'electromagnetic' sector: we can reverse the arrow of time in our equations and everything still works. The arrow of time involves other theories: statistics (physicists refer to it as 'statistical mechanics') and the 'weak force' sector, which I discussed when talking about symmetries in physics. So… Well… We're not done. God gave us plenty of other stuff to try to understand. šŸ™‚

The classical explanation for the electron’s mass and radius

Feynman’s 28thĀ LectureĀ in his series on electromagnetism is one of the more interesting but, at the same time, it’s one of the fewĀ LecturesĀ that is clearly (out)dated. In essence, it talks about the difficulties involved in applying Maxwell’s equations to theĀ elementary chargesĀ themselves, i.e. the electron and the proton.Ā We already signaled some of these problems in previous posts. For example, in our post on the energy in electrostatic fields, we showed how our formulas for the field energy and/or the potential of a charge blow up when we use it to calculate the energy we’d need toĀ assembleĀ a point charge. What comes out is infinity: āˆž. So our formulas tell us we’d need an infinite amount of energy to assemble a point charge.

Well… That’s no surprise, is it? The idea itself is impossible: how can one have a finite amount of charge in something that’s infinitely small? Something that has no size whatsoever? It’s pretty obvious we get some division by zero there. šŸ™‚ The mathematicalĀ approach is often inconsistent. Indeed, a lot of blah-blahĀ in physics is obviously just about applying formulas to situations that are clearly notĀ within the relevant area of application of the formula.Ā So that’s why I went through the trouble (in my previous post, that is) of explaining you how we getĀ these energy and potential formulas, and that’s by bringing chargesĀ (note the plural) together. Now, we may assume these charges are point charges, but that assumption is not so essential. What I tried to say when being so explicitĀ was the following: yes, aĀ charge causes a field, but theĀ ideaĀ of a potential makes sense only when we’re thinking of placing someĀ other charge in that field. So point charges with ‘infinite energy’ should not be a problem. Feynman admits as much when he writes:

“If the energy can’t get out, but must stay there forever, is there any real difficulty with an infinite energy? Of course, a quantity that comes out infinite may be annoying, but what really matters is only whether there are any observable physical effects.”

So… Well… Let’s see. There’s another, more interesting, way to look at an electron: let’s have a look at the field it creates. A electron – stationary or moving – will create a field in Maxwell’s world, which we know inside out now. So let’s just calculate it. In fact, Feynman calculates it for the unit charge (+1), so that’s a positron. It eases the analysis because we don’t have to drag any minus sign along. So how does it work? Well…

We’ll have anĀ energy flux density vector – i.e. the Poynting vector S – as well as a momentum densityĀ vector g all over space. Both are related through the g = S/c2Ā equation which, as I explained in my previous post, is probably best written as cg = S/c, because we’ve got units then, on both sides, that we can readily understand, like N/m2 (so that’s force per unit area) or J/m3Ā (so that’s energy per unit volume). On the other hand, we’ll need something that’s written as a function of the velocity of our positron, so that’s v, and so it’s probably best to just calculate g, the momentum,Ā which is measured in NĀ·s or kgĀ·(m/s2)Ā·s (both are equivalent units for theĀ momentum p = mv, indeed) per unit volumeĀ (so we need to add a 1/Ā m3Ā to the unit).Ā So we’ll have some integral all over space, but I won’t bother you with it. Why not? Well… Feynman uses a rather particular volume element to solve the integral, and so I want you to focus on the solution. The geometry of the situation, and the solution for g, i.e. the momentum of the field per unit volume,Ā is what matters here.

So let’s look at that geometry. It’s depicted below. We’ve got a radial electric field—a Coulomb field really, because our charge is moving at a non-relativistic speed, so v << c and we can approximate with a Coulomb field indeed. Maxwell’s equations imply that B = vƗE/c2, so g = ε0EƗBĀ is what it is in the illustration below. Note that we’d have to reverse the direction of both E and B for an electron (because it’s negative), but g would be the same. It is directed obliquely toward the line of motion and its magnitude is g = (ε0v/c2)Ā·E2Ā·sinĪø. Don’t worry about it: Feynman integrates this thing for you. šŸ™‚ It’s notĀ thatĀ difficult, but still… To solve it, he uses the fact that the fields are symmetric about the line of motion, which is indicated by the littleĀ arrow around the v-axis, with the Φ symbol next to it (it symbolizes the potential). [The ‘rather particular volume element’ is a ring around the v-axis, and it’s because of this symmetry that Feynman picks the ring. Feynman’s Lectures are not onlyĀ great to learn physics: they’re a treasure drove of mathematical tricks too. :-)]

momentum field

As said, I don’t want to bother you with the technicalities of the integral here. This is the result:

p = (2e²/(3ac²))·v

What does this say? It says that the momentum of the field – i.e. the electromagnetic momentum, integrated over all of space – is proportional to the velocity v of our charge. That makes sense: when v = 0, we'll have an electrostatic field all over space and, hence, some inertia, but it's only when we try to move our charge that Newton's Law comes into play: then we'll need some force to overcome that inertia. It all works through the Poynting formula: S = EƗB/μ0. If nothing's moving, then B = 0, and so we'll have some E and, therefore, we'll have field energy alright, but the energy flow will be zero. But when we move the charge, we're moving the field, and so then B ≠ 0 and so it's through B that the E in our S equation starts kicking in. Does that make sense? Think about it: it's good to try to visualize things in your mind. šŸ™‚
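We can mimic the integral numerically (a sketch under my own choices of grid sizes; the helper name field_momentum is mine). The momentum density g = (ε0v/c²)·E²·sinĪø gets projected on the line of motion (a second sinĪø, since the transverse parts cancel by symmetry) and integrated over rings (a third sinĪø from the ring volume element), and should come out at (2/3)·e²·v/(ac²):

```python
import math

eps0, c, qe = 8.8541878128e-12, 2.99792458e8, 1.602176634e-19

def field_momentum(a, v, n_theta=2000, n_r=20000):
    """Integrate the momentum density of a Coulomb field E = K/r² over
    all space outside radius a. One sinĪø sits in g itself, one comes
    from projecting g on v, one from the ring element 2Ļ€r²·sinĪø·dĪø·dr."""
    K = qe / (4 * math.pi * eps0)
    dth = math.pi / n_theta
    ang = sum(math.sin((i + 0.5) * dth)**3 for i in range(n_theta)) * dth  # → 4/3
    T = math.log(1e9)                  # radial grid from a out to 10⁹·a
    dt = T / n_r
    rad = 0.0                          # ∫ (K/r²)²·r² dr on a log grid
    for i in range(n_r):
        r = a * math.exp((i + 0.5) * dt)
        rad += (K**2 / r**2) * r * dt  # dr = r·dt
    return (eps0 * v / c**2) * 2 * math.pi * rad * ang

a, v = 2.8179403e-15, 1.0e6            # radius (m) and a non-relativistic speed (m/s)
p_numeric = field_momentum(a, v)
p_formula = (2/3) * (qe**2 / (4 * math.pi * eps0)) * v / (a * c**2)
print(p_numeric, p_formula)            # the two agree to the grid accuracy
```

The angular sum converging to 4/3 is exactly Feynman's ∫sin³θ dĪø trick; the log-spaced radial grid is just a convenient way to handle the 1/r² tail.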

The constants in the proportionality constant (2e²)/(3ac²) of our p ∼ v formula above are:

  • e² = qe²/(4πε0), with qe the electron charge (without the minus sign) and ε0 our ubiquitous electric constant. [Note that, unlike Feynman, I prefer not to write e in italics, so as not to confuse it with Euler's number e ā‰ˆ 2.71828 etc. However, I know I am not always consistent in my notation. :-/ We don't need Euler's number in this post, so e is always an expression for the electron charge, not Euler's number. Stupid remark, perhaps, but I don't want you to be confused.]
  • a is the radius of our charge—see how we got away from the idea of a point charge? šŸ™‚
  • c² is just c², i.e. our weird constant (the square of the speed of light) which seems to connect everything to everything. Indeed, think about stuff like this: S/g = c² = 1/(ε0μ0).

Now, p = mv, so that formula for p basically says that our elementary charge (as mentioned, g is the same for a positron or an electron: E and B will be reversed, but g is not) has an electromagnetic mass melec equal to:

melec = (2e²)/(3ac²)

That’s an amazing result. We don’t need to give our electron anyĀ rest mass: just its charge and its movement will do!Ā Super!Ā So we don’t need any Higgs fields here! šŸ™‚ The electromagnetic field will do!

Well… Maybe. Let’s explore what we’ve got here.

First, let’s compare that radius a in our formula to what’s found in experiments.Ā Huh? Did someone ever try to measure the electron radius? Of course. There are all theseĀ scattering experiments in which electrons get fired at atoms. They can fly through or, else, hit something. Therefore, one can some statistical analysis and determine what is referred to as aĀ cross-section. A cross-section is denoted by the same symbol as the standard deviation: σ (sigma). In any case… So there’s something that’s referred to as the classical electron radius, and it’s equal to the so-calledĀ Thomsom scattering length.Ā Thomson scattering, as opposed to ComptonĀ scattering, is elastic scattering, so it preserves kinetic energy (unlike Compton scattering, where energy gets absorbed and changes frequencies). So… Well… I won’t go into too much detail but, yes, this is theĀ electronĀ radius we need. [I am saying this rather explicitly because there are two other numbers around: the so-called Bohr radiusĀ and, as you might imagine, the ComptonĀ scattering cross-section.]

The Thomson scattering length is 2.82 femtometer (so that's 2.82Ɨ10⁻¹⁵ m), more or less that is :-), and it's usually related to the observed electron mass me through the fine-structure constant α. In fact, using Planck units, we can write: re·me = α, which is an amazing formula but, unfortunately, I can't dwell on it here. Using ordinary m, s, C and what have you units, we can write re as:

re = e²/(me·c²)

That’s good, because if we equateĀ meĀ and melecĀ and switch melecĀ and a in our formula for melec, we get:

a = (2e²)/(3me·c²) = (2/3)·re

So, frankly, we’reĀ spot on!Ā Well… Almost. The two numbers differ by 1/3. But who cares about a 1/3 factor indeed? We’re talking rather fuzzy stuff here – scattering cross-sections and standard deviations and all that – so… Yes. Well done! Our theory works!

Well… Maybe. Physicists don’t think so. They think the 1/3 factor isĀ an issue. It’sĀ sad because it really makes a lot of sense. In fact, the Dutch physicist Hendrik Lorentz – whom we know so well by now šŸ™‚Ā ā€“ had also worked out that, because of the length contraction effect, our spherical charge would contract into an ellipsoid and… Well… He worked it all out, and it was not a problem: he found that the momentum was altered by the factorĀ (1āˆ’v2/c2)āˆ’1/2, so that’s the ubiquitous Lorentz factor γ! He got this formula in the 1890s already, so that’sĀ long before the theory of relativity had been developed. So, many years before Planck and Einstein would come up with theirĀ stuff, Hendrik AntoonĀ Lorentz had the correct formulas already: the mass, or everything really, all should vary with that γ-factor. šŸ™‚

Why bother about the 1/3 factor? [I should note it's actually referred to as the 4/3 problem in physics.] Well… The critics do have a point: if we assume that (a) an electron is not a point charge – so if we allow it to have some radius a – and (b) that Maxwell's Laws apply, then we should go all the way. The energy that's needed to assemble an electron should then, effectively, be the same as the value we'd get out of those field energy formulas. So what do we get when we apply those formulas? Well… Let me quickly copy Feynman as he does the calculation for an electron, not looking at it as a point particle, but as a tiny shell of charge, i.e. a sphere with all charge sitting on the surface:

Feynman energy

Ā Let me enlarge the formula:

Uelec = e²/(2a)

Now, if we combine that with our formula forĀ melecĀ above, then we get:

Uelec = (3/4)·melec·c²

So that formula does not respect Einstein's universal mass-energy equivalence formula E = mc². Now, you will agree that we really want Einstein's mass-energy equivalence relation to be respected by all, so our electron should respect it too. šŸ™‚ So, yes, we've got a problem here, and it's referred to as the 4/3 problem (yes, the ratio got turned around).
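The mismatch is pure arithmetic on the two coefficients we derived (a trivial sketch, but it makes the 3/4-versus-4/3 naming explicit):

```python
from fractions import Fraction

U_shell = Fraction(1, 2)   # Uelec = (1/2)·e²/a: energy to assemble the shell
m_field = Fraction(2, 3)   # melec·c² = (2/3)·e²/a: energy from the field momentum

print(U_shell / m_field)   # 3/4: the shell energy is only 3/4 of melec·c²
print(m_field / U_shell)   # 4/3: the same discrepancy, stated the other way
```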

Now, you may think it got solved in the meanwhile. Well… No. It's still a bit of a puzzle today, and the current-day explanation is not really different from what the French scientist Henri PoincarĆ© proposed as a 'solution' to the problem more than a century ago. He basically told Lorentz the following: "If the electron is some little ball of charge, then it should explode because of the repulsive forces inside. So there should be some binding forces there, and the energy of those forces explains the 'missing mass' of the electron." These forces are effectively referred to as PoincarĆ© stresses, and the non-electromagnetic energy that's associated with them – which, of course, has to be equal to 1/3 of the electromagnetic energy (I am sure you see why) šŸ™‚ – adds to the total energy and all is alright now. We get:

U = mc² = (melec + mPoincaré)·c²

So… Yes… Pretty ad hoc. Worse, according to the Wikipedia article on electromagnetic mass, that's still where we are. And, no, don't read Feynman's overview of all of the theories that were around back then (so that's in the 1960s, or earlier). As I said, it's the one Lecture you don't want to waste time on. So I won't do that either.

In fact, let me try to do something else here, and that's to de-construct the whole argument really. šŸ™‚ Before I do so, let me highlight the essence of what was written above. It's quite amazing really. Think of it: we say that the mass of an electron – i.e. its inertia, or the proportionality factor in Newton's F = m·a law of motion – is the energy in the electric and magnetic field it causes. So the electron itself is just a hook for the force law, so to say. There's nothing there, except for the charge causing the field. But so its mass is everywhere and, hence, nowhere really. Well… I should correct that: the field strength falls off as 1/r² and, hence, the energy flow and momentum density that's associated with it falls off as 1/r⁓, so it falls off very rapidly and so the bulk of the energy is pretty near the charge. šŸ™‚

[Note: You’ll remember that the field that’s associated with electromagneticĀ radiationĀ falls of as 1/r, not as 1/r2, which is why there is an energy flux there which is never lost, which can travel independently through space. It’s not the same here, so don’t get confused.]

So that’s something to note: theĀ melecĀ = (2cāˆ’2/3)Ā·(e2/a) has the radiusĀ aĀ in it, but that radius is only theĀ hook, so to say. That’s fine, because it is not inconsistent with theĀ ideaĀ of the Thomson scattering cross-section, which is the area that oneĀ canĀ hit. Now, you’ll wonder how one can hit an electron: you can readily imagine an electron beam aimed at nuclei, but how would one hit electrons? Well… You can shoot photons at them, and see if they bounce back elastically or non-elastically. The cross-section area that bounces them off elastically must be pretty ‘hard’, and the cross-section that deflects them non-elastically somewhat less so. šŸ™‚

OK… But… Yes?Ā Hey! How did we get that electron radius in that formula?Ā 

Good question! Brilliant, in fact! You’re right: it’s here that the whole argument falls apart really. We did a substitution. That radius a is the radius of a spherical shell of charge with an energy that’s equal to Uelec = (1/2)Ā·(e2/a), so there’s another way of stating the inconsistency: the equivalent energy of melec = (2cāˆ’2/3)Ā·(e2/a) is equal to E = melecĀ·c2 = (2/3)Ā·(e2/a) and that’s not the same as Uelec = (1/2)Ā·(e2/a). If we take the ratio of Uelec and melecĀ·c2, we get the same factor: (1/2)/(2/3) = 3/4. But… Your question is superb! Look at it: putting it the way we put it reveals the inconsistency in the whole argument. We’re mixing two things here:

  1. We first calculate the momentum density, and the momentum, that’s causedĀ by the unit charge,Ā so we get some energy which I’ll denote as EelecĀ =Ā melecĀ·c2
  2. Now, we then assume this energy must be equal to the energy that’s needed to assemble the unit charge from an infinite number of infinitesimally small charges, thereby also assuming the unit charge is a spherical shell of charge with radius a.
  3. We then use this radius a to simplify our formula for EelecĀ =Ā melecĀ·c2

Now that is not kosher, really! First, it’s (a) a lot of assumptions, both implicit and explicit, and then (b) it’s, quite simply, not a legitimate mathematical procedure: calculating the energy in the field, and calculating the energy we need to assemble a sphere (or shell) of charge of radius a, are two very different things.

Well… Let me put it differently. We’re using the same laws – it’s all Maxwell’s equations, really – but we should be clear about what we’re doing with them, and those two things are very different. The legitimate conclusion must be that our a is wrong. In other words, we should not assume that our electron is a spherical shell of charge. So then what? Well… We could easily imagine something else, like a uniformly or even a non-uniformly charged sphere. Indeed, if we’re just filling empty space with infinitesimally small charge ‘elements’, then we may want to think the density at the ‘center’ will be much higher, like what’s going on when planets form: the density of the inner core of our own planet Earth is more than four times the density of its surface material. [OK. Perhaps not very relevant here, but you get the idea.] Or, conversely, taking into account PoincarĆ©ā€™s objection, we may want to think all of the charge will be on the surface, just like on a perfect conductor, where all charge is surface charge!

Note that the field outside of a uniformly charged sphere and the field of a spherical shell of charge are exactly the same, so we would not find a different number for Eelec = melecĀ·c2, but we surely would find a different number for Uelec. You may want to look up some formulas here: you’ll find that the energy of a uniformly distributed sphere of charge (so we do not assume that all of the charge sits on the surface here) is equal to (3/5)Ā·(e2/a). So we’d already have much less of a problem, because the 3/4 factor in the Uelec = (3/4)Ā·melecĀ·c2 relation becomes a (3/5)/(2/3) = 9/10 factor. So now we have a discrepancy of some 10% only. šŸ™‚
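Just to make the arithmetic explicit, here’s a quick sanity check in Python. Everything is expressed in units of e2/a, so we’re only comparing the pure numbers in front:

```python
# Compare the 'assembly' energy U_elec with the field energy-mass m_elec*c^2,
# both expressed in units of e^2/a, so only the numerical coefficients matter.
from fractions import Fraction

m_elec_c2 = Fraction(2, 3)  # (2/3)*(e^2/a): energy equivalent of the field mass
U_shell   = Fraction(1, 2)  # (1/2)*(e^2/a): spherical shell of charge
U_sphere  = Fraction(3, 5)  # (3/5)*(e^2/a): uniformly charged sphere

print(U_shell / m_elec_c2)   # 3/4  -> a 25% discrepancy
print(U_sphere / m_elec_c2)  # 9/10 -> a discrepancy of some 10% only
```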

You’ll say: 10% is 10%. It’s huge in physics, as it’s supposed to be anĀ exact science.Ā Well… It is and it isn’t. Do you realize we haven’t even started to talk about stuff likeĀ spin? Indeed, in modern physics, we think ofĀ electrons as something that also spinsĀ around one or the other axis, so there’s energy there too, and we didn’t include that in our analysis.

In short, Feynman’s approach here is disappointing. Naive even, but then… Well… Who knows? Perhaps he didn’t do this Lecture himself. Perhaps it’s just an assistant or so. In fact, I should wonder why there are still physicists wasting time on this! I should also note that naively comparing that radius a with the classical electron radius also makes little or no sense. Unlike what you’d expect, the classical electron radius re and the Thomson scattering cross-section σe are not related like you might think they are, i.e. like σe = π·re2 or σe = π·(re/2)2 or σe = re2 or σe = π·(2Ā·re)2 or whatever circular surface calculation rule might make sense here. No. The Thomson scattering cross-section is equal to:

σeĀ =Ā (8Ļ€/3)Ā·re2Ā = (2Ļ€/3)Ā·(2Ā·re)2Ā =Ā (2/3)·π·(2Ā·re)2Ā ā‰ˆ 66.5Ɨ10āˆ’30Ā m2Ā = 66.5 (fm)2

Why? The 8π/3 factor comes from integrating the angular distribution of the scattered radiation over all directions, rather than from any simple circular-area rule. The point is, we’ve got a 2/3 factor here too, so do we have a problem really? I mean… The a we got was equal to a = (2/3)Ā·re, wasn’t it? It was. But, unfortunately, it doesn’t mean anything. It’s just a coincidence. In fact, looking at the Thomson scattering cross-section, instead of the Thomson scattering radius, makes the ‘problem’ a little bit worse. Indeed, applying the π·r2 rule for a circular surface, we get that the radius would be equal to (8/3)1/2Ā·re ā‰ˆ 1.633Ā·re, so we get something that’s much larger rather than something that’s smaller here.
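If you want to check the numbers, here’s a small Python sketch. The value of re is the standard CODATA figure; that’s the only input:

```python
import math

r_e = 2.8179403262e-15                  # classical electron radius, in meters
sigma_e = (8 * math.pi / 3) * r_e**2    # Thomson scattering cross-section

print(sigma_e)                          # ~66.5e-30 m^2, i.e. 66.5 fm^2

# The radius of a circle with the same area, via the pi*r^2 rule:
r_equiv = math.sqrt(sigma_e / math.pi)
print(r_equiv / r_e)                    # sqrt(8/3), about 1.633
```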

In any case, it doesn’t matter. The point is: this kind of comparison should not be taken too seriously. Indeed, when everything is said and done, we’re comparing three very different things here:

  1. The radius that’s associated with the energy that’s needed to assemble our electron from infinitesimally small charges, and so that’s based on Coulomb’s law and the model we use for our electron: is it a shell or a sphere of charge? If it’s a sphere, do we want to think of it as something of uniform or non-uniform density?
  2. The second radius is associated with the field of an electron, which we calculate using Poynting’s formula for the energy flow and/or the momentum density. So that’s notĀ about the internal structure of the electron but, of course, it would be nice if we could find some model of an electron that matchesĀ thisĀ radius.
  3. Finally, there’s the radius that’s associated with elastic scattering, which is also referred to as hard scattering because it’s like the collision of two hard spheres indeed. But so that’s some value that has to be established experimentally, and it involves judicious choices because there are probabilities and standard deviations involved.

So should we worry about the gaps between these three different concepts? In my humble opinion: no. Why? Because they’re all damn close and so we’re actually talking about the same thing. I mean: isn’t it terrific that we’ve got a model that brings the first and the second radius together with a difference of 10% only? As far as I am concerned, that shows the theory works. So what Feynman’s doing in that (in)famous chapter is some kind of ‘dimensional analysis’ which confirms rather than invalidates classical electromagnetic theory. So it shows classical theory’s strength, rather than its weakness. It actually shows our formulas do work where we wouldn’t expect them to work. šŸ™‚

The thing is: when looking at the behavior of electrons themselves, we’ll need a different conceptual framework altogether. I am talking quantum mechanics here. Indeed, we’ll encounter other anomalies than the ones we presented above. There’s the issue of the anomalous magnetic moment of electrons, for example. Indeed, as I mentioned above, we’ll also want to think of electrons as spinning around their own axis, and so that implies some circulation of charge that will generate a permanent magnetic dipole moment… […] OK, just think of some magnetic field if you don’t have a clue what I am saying here (but then you should check out my post on it). […] The point is: here too, the so-called ‘classical result’, so that’s its theoretical value, will differ from the experimentally measured value. Now, the difference here will be 0.0011614, so that’s about 0.1%, i.e. 100 times smaller than my 10%. šŸ™‚

Personally, I think that’s not so bad. šŸ™‚ But then physicists need to stay in business, of course. So, yes, itĀ isĀ a problem. šŸ™‚

Post scriptum on the math versus the physics

The key to the calculation of the energy that goes into assembling a charge was the following integral:

U = (1/2)Ā·āˆ«āˆ« [ρ(1)Ā·Ļ(2)/(4πε0Ā·r12)]Ā·dV1Ā·dV2

This is a double integral which we simplified in two stages, so we’re looking at an integral within an integral really, butĀ we can substitute the integral over the ρ(2)Ā·dV2Ā product by the formula we got for the potential, so we write that as Φ(1), and so the integral above becomes:

U = (1/2)Ā·āˆ« ρ(1)·Φ(1)Ā·dV1

Now, this integral integrates the ρ(1)·Φ(1)Ā·dV1 product over all of space, so that’s over all points in space, and so we just dropped the index and wrote the whole thing as the integral of ρ·Φ·dV over all of space:

U = (1/2)Ā·āˆ« ρ·Φ·dV

We thenĀ established that this integral was mathematically equivalent to the following equation:

U = (ε0/2)Ā·āˆ« E•EĀ·dV

So this integral is actually quite simple: it just integrates E•E = E2Ā over all of space. The illustration below shows E as a function of the distanceĀ rĀ for a sphere of radius R filledĀ uniformlyĀ with charge.

uniform density

So the field (E) goes as r for r ≤ R and as 1/r2 for r ≄ R. So, for r ≄ R, the integral will have (1/r2)2 = 1/r4 in it. Now, you know that the integral of some function is the surface under the graph of that function. Look at the 1/r4 function below: it blows up as r approaches 0. That’s where the problem is: there needs to be some kind of cut-off, because that integral will effectively blow up when the radius of our little sphere of charge gets ‘too small’. So that makes it clear why it doesn’t make sense to use this formula to try to calculate the energy of a point charge. It just doesn’t make sense to do that.

graph
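To see the blow-up numerically, we can integrate the energy density from a cut-off radius a outwards. A minimal sketch in Python, working in units where the prefactor q2/(8πε0) is 1, so that the exact answer is simply 1/a:

```python
import math

def field_energy(a, r_max=1.0e6, n=200_000):
    """Integrate 1/r**2 (i.e. E**2 times the 4*pi*r**2 shell volume, with the
    constants scaled away) from r = a to r_max, midpoint rule on a log grid."""
    total = 0.0
    h = (math.log(r_max) - math.log(a)) / n
    for i in range(n):
        r = math.exp(math.log(a) + (i + 0.5) * h)
        total += (1.0 / r**2) * r * h   # integrand * dr, with dr = r * d(log r)
    return total

# Halving the cut-off doubles the energy: U(a) goes as 1/a, so U -> infinity as a -> 0.
print(field_energy(0.01))    # close to 100
print(field_energy(0.005))   # close to 200
```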

What’s ‘too small’? Let’s look at the formula we got for our electron as a spherical shellĀ of charge:

Uelec = (1/2)Ā·(e2/a)

So we’ve got an even simpler formula here: it’s just a 1/r relation. Why is that? Well… It’s just the way the math turns out. I copied the detail of Feynman’s calculation above, so you can double-check it. It’s quite wonderful, really. We have a very simple inversely proportional relationship between the radius of our electron and its energy as a sphere of charge. We could write it as:

Uelec = α/a , with α = e2/2

But – Hey! Wait a minute! We’ve seen something like this before, haven’t we? We did. We did when we were discussing the wonderful properties of that magical number, the fine-structure constant, which we also denoted by α. šŸ™‚ However, because we used α already, I’ll denote the fine-structure constant as αe here, so you don’t get confused. As you can see, the fine-structure constant links all of the fundamental properties of the electron: its charge, its radius, its distance to the nucleus (i.e. the Bohr radius), its velocity, and its mass (and, hence, its energy). So, at this stage of the argument, α can be anything, and αe cannot, of course. It’s just that magical number out there, which relates everything to everything: it’s the God-given number we don’t understand. šŸ™‚ Having said that, it seems like we’re going to get some understanding here, because we know that one of the many expressions involving αe was the following one:

me = αe/re

This says that the mass of the electron is equal to the ratio of the fine-structure constant and the electron radius. [Note that we express everything in natural units here, so that’s Planck units. For the detail of the conversion, please see the relevant section in one of my posts on this and other stuff.] Now, mass is equivalent to energy, of course: it’s just a matter of units, so we can equate me with Ee (this amounts to expressing the energy of the electron in a kg unit—a bit weird, but OK) and so we get:

Ee = αe/re

So there we have: the fine-structure constant αeĀ is Nature’s ‘cut-off’ factor, so to speak. Why? Only God knows. šŸ™‚ But it’s now (fairly) easy to see why all the relations involving αeĀ are what they are. For example, we also know that αeĀ is the square of the electron charge expressed in Planck units, so we have:

αe = eP2 and, therefore, Ee = eP2/re

Now, you can check for yourself: it’s just a matter of re-expressing everything in standard SI units, and relating eP2 to e2, and it should all work: you should get the Uelec = (1/2)Ā·e2/a expression. So… Well… At least this takes some of the magic out of the fine-structure constant. It’s still a wonderful thing, but so you see that the fundamental relationship between (a) the energy (and, hence, the mass), (b) the radius and (c) the charge of an electron is not something God-given. What’s God-given are Maxwell’s equations, and so the Ee = αe/re = eP2/re is just one of the many wonderful things that you can get out of them. šŸ™‚
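In SI units, the Ee = αe/re relation picks up the usual factors of ħ and c: it reads meĀ·c2 = αeĀ·Ä§Ā·c/re. Here’s a quick numeric check (the five constants are standard CODATA values; everything else is computed):

```python
import math

e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c    = 2.99792458e8        # speed of light, m/s
m_e  = 9.1093837015e-31    # electron mass, kg

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)   # fine-structure constant
print(1 / alpha)                                  # ~137.036

r_e = alpha * hbar / (m_e * c)                    # classical electron radius
print(r_e)                                        # ~2.818e-15 m

# m_e*c^2 = alpha*hbar*c / r_e, the SI version of E_e = alpha_e / r_e:
print(m_e * c**2, alpha * hbar * c / r_e)         # both ~8.187e-14 J
```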

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/
Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Field energy and field momentum

This post goes to the heart of the E = mc2 equation. It’s kinda funny, because Feynman just compresses all of it in a sub-section of his Lectures. However, as far as I am concerned, I feel it’s a very crucial section. Pivotal, I’d say, which would fit with its place in all of the 115 Lectures that make up the three volumes: sort of mid-way, which is where we are here. So let’s go for it. šŸ™‚

Let’s first recall what we wrote about the Poynting vector S, which we calculate from the magnetic and electric field vectors E and B by taking their cross-product:

S = ε0c2Ā·EƗB

This vector represents the energy flow, per unit area and per unit time, in electrodynamical situations. If E or B is zero (which is the case in electrostatics, for example, because we don’t have magnetic fields in electrostatics), then S is zero too, so there is no energy flow then. That makes sense, because we have no moving charges, so where would the energy go to?

I also made it clear we should think of S as something physical, by comparing it to the heat flow vectorĀ h, which we presented when discussing vector analysis and vector operators. The heat flow out of a surface element daĀ is the area times the component ofĀ hĀ perpendicular to da, so that’s (h•n)Ā·da = hnĀ·da. Likewise, we can writeĀ (S•n)Ā·da = SnĀ·da. The units of S and h are also the same:Ā joule per second and per square meterĀ or, using the definition of theĀ wattĀ (1 W = 1 J/s), in watt per square meter.Ā In fact, if you google a bit, you’ll find that both hĀ and S are referred to as aĀ flux density:

  1. The heat flow vector hĀ is the heat flux densityĀ vector, from which we get the heat flux through an area through the (h•n)Ā·da = hnĀ·daĀ product.
  2. The energy flow SĀ is the energy flux density vector, from which we get the energy flux through the (S•n)Ā·da = SnĀ·daĀ product.

So that should be enough as an introduction to what I want to talk about here. Let’s first look at the energy conservation principle once again.

Local energy conservation

In a way, you can look atĀ my previous postĀ as being all about the equation below, which we referred to as the ‘local’ energy conservation law:

energy flux

Of course, it is notĀ theĀ completeĀ energy conservation law. The local energy is not only in the field. We’ve got matter as well, and so that’s what I want to discuss here: we want to look at the energy in the field as well as the energy that’s in the matter. Indeed, field energy is conserved, and then it isn’t: if the field is doing work on matter, or matter is doing work on the field, then… Well… Energy goes from one to the other, i.e. from the field to the matter or from the matter to the field. So we need to include matter in our analysis, which we didn’t do in our last post. Feynman gives the following simple example: we’re in a dark room, and suddenly someone turns on the light switch. So now the room is full of field energy—and, yes, I just mean it’s not dark anymore. :-). So that means someĀ matter out thereĀ must have radiated its energy out and, in the process, it must have lost the equivalent mass of that energy. So, yes, we had matter losing energy and, hence, losing mass.

Now, we know that energy and momentum are related. Respecting and incorporating relativity theory, we’ve got two equivalentĀ formulas for it:

  1. E2Ā āˆ’ p2c2Ā = m02c4
  2. pcĀ = EĀ·(v/c) ⇔ p = vĀ·E/c2Ā = mĀ·v
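These two formulas are easy to verify numerically. A quick sketch (the particle mass and speed are arbitrary test values):

```python
c  = 2.99792458e8        # m/s
m0 = 9.1093837015e-31    # rest mass, kg (an electron, but any mass will do)
v  = 0.6 * c             # arbitrary speed

m = m0 / (1 - v**2 / c**2) ** 0.5   # relativistic mass
E = m * c**2
p = m * v

print(E**2 - p**2 * c**2, (m0 * c**2) ** 2)   # formula 1: both sides match
print(p * c, E * (v / c))                      # formula 2: both sides match
```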

The E = mc2 and m = m0Ā·(1āˆ’v2/c2)āˆ’1/2 formulas connect both expressions. So we can look at it in either of two ways. We could use the energy conservation law, but Feynman prefers the conservation of momentum approach, so let’s see where he takes us. If the field has some energy (and, hence, some equivalent mass) per unit volume, and if there’s some flow, so if there’s some velocity (which there is: that’s what our previous post was all about), then it will have a certain momentum per unit volume. [Remember: momentum is mass times velocity.] That momentum will have a direction, so it’s a vector, just like p = mv. We’ll write it as g, so we define g as:

g is the momentum of the field per unit volume.

What units would we express it in? We’ve got a bit of choice here. For example, because we’re relating everything to energy here, we may want to convert our kilogram into eV/c2Ā or J/c2Ā units, using the mass-energy equivalence relation EĀ = mc2. Hmm… Let’s first keep the kg as a measure of inertia though. So we write: [g] = [m]Ā·[v]/m3Ā = (kgĀ·m/s)/m3. Hmm… That doesn’t show it’s energy, so let’s replace the kg with a unit that’s got newton and meter in it, cf. the F = ma law. So we write:Ā [g] = (kgĀ·m/s)/m3Ā =Ā (kg/s)/m2Ā = [(NĀ·s2/m)/s]/m2Ā =Ā NĀ·s/m3. Well… OK. The newtonĀ·second is the unit of momentum indeed, and we can re-write it including the joule (1 J = 1 NĀ·m), so then we get [g] = (JĀ·s/m4), so what’s that? Well… Nothing much. However, I do note it happens to be the dimension of S/c2, so that’s [S/c2] = [J/(sĀ·m2)]Ā·(s2/m2) =Ā (JĀ·s/m4). šŸ™‚ Let’s continue the discussion.
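We can let a computer do this kind of bookkeeping. Here’s a minimal sketch that represents a dimension as a (kg, m, s) exponent tuple and checks that [g] and [S/c2] are indeed the same:

```python
# A dimension is a tuple of exponents for (kg, m, s).
def mul(a, b): return tuple(x + y for x, y in zip(a, b))
def div(a, b): return tuple(x - y for x, y in zip(a, b))

KG, M, SEC = (1, 0, 0), (0, 1, 0), (0, 0, 1)
VELOCITY = div(M, SEC)                 # m/s
MOMENTUM = mul(KG, VELOCITY)           # kg*m/s
VOLUME   = (0, 3, 0)                   # m^3
JOULE    = (1, 2, -2)                  # kg*m^2/s^2
AREA     = (0, 2, 0)                   # m^2

g_dim    = div(MOMENTUM, VOLUME)                 # momentum per unit volume
S_dim    = div(JOULE, mul(AREA, SEC))            # J per m^2 per s
S_c2_dim = div(S_dim, mul(VELOCITY, VELOCITY))   # divide by c^2

print(g_dim, S_c2_dim)   # both (1, -2, -1): same dimension, as expected
```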

Now, momentum is conserved, and eachĀ componentĀ of it is conserved. So let’s look at the x-direction. We should have something like:

formula

If you look at this carefully, you’ll probably say: ā€œOK. I understood the thing with the dark room and light switch. Mass got converted into field energy, but what’s that second term on the left?ā€

Good. Smart. RightĀ remark. Perfect. […] Let me try to answer the question. While all of the quantities above are expressedĀ per unit volume, we’re actually looking at the sameĀ infinitesimal volume element here, so the example of the light switch is actually an example of a ‘momentum outflow’, so it’s actually an example of that second term of the left-hand side of the equation above kicking in! šŸ™‚

Indeed, the first term just sort of reiterates the mass-energy equivalence: the energy that’s in the matter can become field energy, so to speak, in our infinitesimal volume element itself, and vice versa. But if it doesn’t, then it should get out and, hence, become ‘momentum outflow’. Does that make sense? No?

Hmm… What to say? You’ll need to look at that equation a couple of times more, I guess. :-/ But I need to move on, unfortunately. [Don’t get put off when I say things like this: I am basically talking to myself, so it means I’ll need to re-visit this myself. :-/]

Let’s look at all of the three terms:

  1. The left-hand side (i.e. the time rate-of-change of the momentum of matter) isĀ easy. It’s just the force on it, which we know is equal to F =Ā q(E+vƗB). Do we know that? OK… I’ll admit it.Ā Sometimes it’s easy to forget where we are in an analysis like this, but so we’re looking at the electromagnetic force here. šŸ™‚ As we’re talking infinitesimals here and, therefore, chargeĀ densityĀ rather than discreteĀ charges, we should re-write this as the force per unit volume which is ρE+jƗB. [This is an interesting formula which I didn’t use before, so you should double-check it. :-)]
  2. The first term on the right-hand side should be equally obvious, or… Well… Perhaps somewhat less so. But with all my rambling on the Uncertainty PrincipleĀ and/or the wave-particle duality, it should make sense. If we scrap the second term on the right-hand side, we basically have an equation that is equivalent to the E = mc2Ā equation. No? Sorry. Just look at it, again and again. You’ll end up understanding it. šŸ™‚
  3. So it’s that second term on the right-hand side. What the hell does thatĀ say? Well… I could say: it’s the local energy or momentum conservation law. If the energy or momentum doesn’t stay in, it has to go out. šŸ™‚ But that’s not very satisfactory as an answer, of course. However, please just go along with thisĀ ‘temporary’ answer for a while.

So what is that second term on the right-hand side? As we wrote it, it’s an x-component – or, let’s put it differently, it is or was part of the x-component of the momentum density – but, frankly, we should probably allow it to go out in any direction really, as the only constraint on the left-hand side is a per second rate of change of something. Hence, Feynman suggests equating it to something like this:

general

What are a, b and c? The components of some vector? Not sure. We’re stuck. This piece really requires very advanced math. In fact, as far as I know, this is the only time where Feynman says: ā€œSorry. This is too advanced. I’ll just give you the equation. Sorry.ā€ So that’s what he does. He explains the philosophy of the argument, which is the following:

  1. On the left-hand side, we’ve got the time rate-of-change of momentum, so that obeys the F = dp/dt = d(mv)/dt law, with the force F,Ā per unit volume, being equal to F(unit volume) = ρE+jƗB.
  2. On the right-hand side, we’ve got something that can be written as:

general 2

So we’d need to find a way to express ρE+jƗB in terms of E and B only – eliminating ρ and j by using Maxwell’s equations or whatever other trick – and then juggle terms and make substitutions to get it into a form that looks like the formula above, i.e. the right-hand side of that equation. But so Feynman doesn’t show us how it’s being done. He just mentions some theorem in physics, which says that the energy that’s flowing through a unit area per unit time divided by c2 – so that’s E/c2 per unit area and per unit time – must be equal to the momentum per unit volume in the space, so we write:

g = S/c2

HeĀ illustratesĀ the general theorem that’s used to get the equation above by giving two examples:

example theorem

OK. Two good examples. However, it’s still frustrating to not see how we get the g = S/c2 in the specific context of the electromagnetic force, so let’s do a dimensional analysis at least. In my previous post, I showed that the dimension of S must be J/(m2Ā·s), so [S/c2] = [J/(m2Ā·s)]/(m2/s2) = [NĀ·m/(m2Ā·s)]Ā·(s2/m2) = [NĀ·s/m3]. Now, we know that the unit of mass is 1 kg = N/(m/s2). That’s just the force law: a force of 1 newton will give a mass of 1 kg an acceleration of 1 m/s per second, so 1 N = 1 kgĀ·(m/s2). So the [NĀ·s/m3] dimension is equal to [kgĀ·(m/s2)Ā·s/m3] = [kgĀ·(m/s)/m3], which is the dimension of momentum (p = mv) per unit volume, indeed. So, yes, the dimensional analysis works out, and it’s also in line with the p = vĀ·E/c2 = mĀ·v equation, but… Oh… We did a dimensional analysis already, where we also showed that [g] = [S/c2] = (JĀ·s/m4). Well… In any case… It’s a bit frustrating to not see the detail here, but let us note the Grand Result once again:

The Poynting vector S gives us the energy flow as well as the momentum density gĀ = S/c2.
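To get a feel for the magnitudes, take sunlight near the top of the atmosphere, with S ā‰ˆ 1361 W/m2 (an assumed round value):

```python
c = 2.99792458e8   # m/s
S = 1361.0         # W/m^2: solar irradiance near Earth (assumed value)

g = S / c**2       # momentum per unit volume carried by the sunlight
print(g)           # ~1.5e-14 (kg*m/s)/m^3

# The momentum flux S/c is the radiation pressure on a perfect absorber:
print(S / c)       # ~4.5e-6 N/m^2 -- tiny, which is why we don't feel it
```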

But what does it all mean, really? Let’s go through Einstein’s illustration of the principle. That will help us a lot. Before we do, however, I’d like to note something. I’ve always wondered a bit about that dichotomy between energy and momentum. Energy is force times distance: 1Ā jouleĀ is 1Ā newtonĀ Ć— 1Ā meterĀ indeed (1 J = 1 NĀ·m). Momentum is force times time, as we can express it in NĀ·s. Planck’s constant hĀ combines all three in the dimension ofĀ action, which isĀ forceĀ timesĀ distanceĀ timesĀ time: hĀ ā‰ˆĀ 6.6Ɨ10āˆ’34Ā NĀ·mĀ·s, indeed. I like that unity. In this regard, you should, perhaps, quickly review that post in which I explain that hĀ isĀ the energy per cycle, i.e. per wavelength or per period, of a photon, regardless of its wavelength. So it’s really something very fundamental.

We’ve got something similar here: energy and momentum coming together, and being shown as one aspect of the same thing: some oscillation. Indeed, just see what happens with the dimensions when we ‘distribute’ the 1/c2Ā factor on the right-hand side over the two sides, so we write:Ā cĀ·gĀ = S/c and work out the dimensions:

  1. [cĀ·g ]Ā = (m/s)Ā·(NĀ·s)/m3Ā = N/m2Ā = J/m3.
  2. [S/c] =Ā (s/m)Ā·(NĀ·m)/(sĀ·m2) = N/m2Ā = J/m3.

Isn’t that nice? Both sides of the equation now have a dimension like ‘the force per unit area’, or ‘the energy per unit volume’. To get that, we just re-scaled g and S, by c and 1/c respectively. As far as I am concerned, this shows an underlying unity we probably tend to mask with our ‘related but different’ energy and momentum concepts. It’s like E and B: I just love it we can write them together in our Poynting formula SĀ = ε0c2EƗB. In fact, let me show something else here, which you should think about. You know that c2Ā = 1/(ε0μ0), so we can write SĀ also as S =Ā EƗB/μ0. That’s nice, but what’s nice too is the following:

  1. S/c = cĀ·g = ε0cĀ·EƗB = EƗB/(μ0Ā·c)
  2. S/g = c2 = 1/(ε0μ0)

So, once again, Feynman may feel the Poynting vector is sort of counter-intuitive when analyzing specific situations but, as far as I am concerned, I feel the Poynting vector makes things actually easier to understand. Instead of two E and B vectors, and two concepts to deal with ā€˜energy’ (i.e. energy and momentum), we’re sort of unifying things here. In that regard – i.e. in regard to the feeling that we’re talking about the same thing really – I’d really highlight the S/g = c2 = 1/(ε0μ0) equation. Indeed, the universal constant c acts just like the fine-structure constant here: it links everything to everything. šŸ™‚

And, yes, it’s also about time we introduce the so-called principle of least action to explain things, because action, as a concept, combines force, distance and time indeed, so it’s a bit more promising than just energy, or just momentum. Having said that, you’ll see in the next section that it’s sometimes quite useful to have the choice between one formula or the other. But… Well… Enough talk. Let’s look at Einstein’s car.

Einstein’s car

Einstein’s car is a wonderful device: it rolls without any friction, and all it needs to move is a little flashlight. It’s pictured below. šŸ™‚ So the situation is the following: the flashlight shoots some light out from one side, which is then stopped at the opposite end of the car. When the light is emitted, there must be some recoil. In fact, we know it’s going to be equal to 1/c times the energy, because all we need to do is apply the pc = EĀ·(v/c) formula for v = c, so we know that p = E/c. Of course, this momentum now needs to move Einstein’s car. It’s frictionless, so it should work, but still… The car has some mass M, and so that will determine its recoil velocity: v = p/M. We just apply the general p = mv formula here, and v is not equal to c here, of course! Of course, then the light hits the opposite end of the car and delivers the same momentum, so that stops the car again. However, it did move over some distance x = vt. So we could flash our light again and get to wherever we want to get. [Never mind the infinite accelerations involved!] So… Well… Great! Yes, but Einstein didn’t like this car when he first saw it. In fact, he still doesn’t like it, because he knows it won’t take you very far. šŸ™‚

Einsteins' car

The problem is that we seem to be moving the center of gravity of this car by fooling around on the inside only. Einstein doesn’t like that. He thinks it’s impossible. And he’s right of course. The thing is: the center of gravity didĀ notĀ change. What happened here is that we’ve got someĀ blob of energy, and so that blob has some equivalent mass (which we’ll denote by U/c2), and so that equivalent mass moved all the way from one side to the other, i.e. over the length of the car, which we denote by L. In fact, it’s stuff like this that inspired the whole theory of the field energy and field momentum, and how it interacts withĀ matter.

What happens here is like switching the light on in the dark room: we’ve got matter doing work on the field, and so matter loses mass, and the field gains it, through its momentum and/or energy. To calculate how much, we could integrate S/c or cĀ·g over the volume of our blob, and we’d get something in joule indeed, but there’s a simpler way here. The momentum conservation says that the momentum of our car and the momentum of our blob must be equal, so if T is the time that was needed for our blob to go to the other side – and so that’s, of course, also the time during which our car was rolling – then MĀ·v = MĀ·x/T must be equal to (U/c2)Ā·c = (U/c2)Ā·L/T. The 1/T factor on both sides cancels, so we write: MĀ·x = (U/c2)Ā·L. Now, what is x? Yes. In case you were wondering, that’s what we’re looking for here. šŸ™‚ Here it is:

x = vT = vL/c = (p/M)Ā·(L/c) = [(U/c)/M]Ā·(L/c) = (U/c2)Ā·(L/M)
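Plugging in some numbers shows just how little the car moves. A sketch in Python, with M, L and U chosen arbitrarily (they’re hypothetical values, just to get a feel for the magnitudes):

```python
c = 2.99792458e8   # m/s
M = 1000.0         # mass of the car, kg (hypothetical)
L = 10.0           # length of the car, m (hypothetical)
U = 1.0            # energy of the light blob, J (hypothetical)

p = U / c          # recoil momentum (p = E/c for light)
v = p / M          # recoil velocity of the car
T = L / c          # flight time of the blob (ignoring the car's tiny motion)
x = v * T          # distance the car rolls before the blob stops it

print(x)                          # ~1.1e-19 m: utterly negligible
print(M * x, (U / c**2) * L)      # M*x = (U/c^2)*L: the bookkeeping works out
```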

So what’s next? Well… Now we need to show that the center-of-mass actually didĀ notĀ move with this ‘transfer’ of the blob. I’ll leave the math to you here: it should all work out. And you can also think through the obvious questions:

  1. Where is the energy and, hence, the mass of our blob after it stops the car? Hint: think about excited atoms and imagine they might radiate some light back. šŸ™‚
  2. As the car did move a little bit, we should be able to move it further and further away from its center of gravity, until the center of gravity is no longer in the car. Hint: think about batteries and energy levels going down while shooting light out. It just won’t happen. šŸ™‚

Now, what about a blob of light going from the top to the bottom of the car? Well… That involves the conservation ofĀ angularĀ momentum: we’ll have more mass on the bottom, but on a shorter lever-arm, soĀ angularĀ momentum is being conserved. It’s aĀ veryĀ good question though, and it led Einstein to combine the center-of-gravity theorem with the angular momentum conservation theorem to explain stuff like this.

It’s all fascinating, and one can think of a great many paradoxes that, at first, seem to contradict the Grand Principles we used here, which means that they would contradict all that we have learned so far. However, a careful analysis of those paradoxes reveals that they are paradoxes indeed: propositions which sound self-contradictory but turn out, in the end, to be perfectly consistent. In fact, when explaining electromagnetism over his various Lectures, Feynman tasks his readers with a rather formidable paradox when discussing the laws of induction, which he only solves ten chapters later, after describing what we described above. You can busy yourself with it but… Well… I guess you’ve got something better to do. If so, just take away the key lesson: there’s momentum in the field, and it’s also possible to build up angular momentum in a magnetic field and, if you switch it off, the angular momentum will be given back, somehow, because it is stored energy.

That’s also why the seemingly irrelevant circulation of SĀ we discussed in my previous post, where we had a charge next to an ordinary magnet, and where we found that there was energy circulating around, is not so queer. The energy is there, in the circulating field, and it’sĀ real. As real as can be. šŸ™‚

crazy

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

The energy of fields and the Poynting vector

For some reason, I always thought that Poynting was a Russian physicist, like Minkowski. He wasn’t. I just looked it up. Poynting was an Englishman, born near Manchester, and he taught in Birmingham. I should have known. Poynting is a very English name, isn’t it? My confusion probably stems from the fact that it was a Russian physicist, Nikolay Umov, who first proposed the basic concepts we are going to discuss here, i.e. the speed and direction of the movement of energy itself. And as I am double-checking, I just learned that Hermann Minkowski is generally considered to be German-Jewish, not Russian. Makes sense. With Einstein and all that. His personal life story is actually quite interesting. You should check it out. šŸ™‚

Let’s go for it. We’ve done a few posts on the energy in the fields already, but all in the context of electrostatics. Let me first walk you through the ideas we presented there.

The basic concepts: force, work, energy and potential

1. A charge q causes an electric field E, and E‘s magnitude E is a simple function of the charge (q) and of the distance (r) between the charge and the point we’re looking at, which we usually write as P = (x, y, z). Of course, the origin of our reference frame here is q. The formula is the simple inverse-square law that you (should) know: E ∼ q/r², and the proportionality constant is just Coulomb’s constant, which I think you wrote as ke in your high-school days and which, as you know, is there so as to make sure the units come out alright. So we could just write E = keĀ·q/r². However, just to make sure it does not look like a piece of cake šŸ™‚ physicists write the proportionality constant as 1/4πε0, so we get:

E 3

Now, the field is the force on any unit charge (+1) we’d bring to P. This led us to think of energy,Ā potentialĀ energy, because… Well… You know: energy is measured by work, so that’s some force acting over some distance. The potential energy of a charge increases if we move it againstĀ the field, so we wrote:

formula 1

Well… We actually gave the formula below in that post, so that’s the work done per unit charge.Ā To interpret it, you just need to remember that F = qE, which is equivalent to saying that E is the force per unit charge.

unit chage

As for the F•ds or E•ds product in the integrals, that’s a vector dot product, which we need because it’s only the tangential component of the force that’s doing work, as evidenced by the formula F•ds =Ā |F|Ā·|ds|Ā·cosĪø = FtĀ·ds, and as depicted below.

illustration 1

Now, this allowed us to describe the field in terms of the (electric) potential Φ and the potential differences between two points, like the points a and b in the integral above. We have to choose some reference point, of course, some P0 defining zero potential, which is usually infinitely far away. So we wrote our formula for the work that’s being done on a unit charge, i.e. W(unit), as:

potential

2.Ā The world is full of charges, of course, and so we need to add all of their fields. But so now you need a bit of imagination. Let’s reconstruct the world by moving all charges out, and then we bring them back one by one. So we take q1Ā now, and we bring it back into the now-empty world. NowĀ that does notĀ require any energy, because there’s no field to start with. However, when we take our second chargeĀ q2, we will be doing work as we move it against the field or, if it’s an opposite charge, we’ll be taking energy out of the field.Ā Huh?Ā Yes. Think about it. All is symmetric.Ā Just to make sure you’re comfortable with every step we take, let me jot down the formula for the force that’s involved. It’s just theĀ CoulombĀ force of course:

Coulomb's law

F1 is the force on charge q1, and F2 is the force on charge q2. Now, q1 and q2 may attract or repel each other, but the forces will always be equal and opposite. The e12 vector makes sure the directions and signs come out alright, as it’s the unit vector from q2 to q1 (not from q1 to q2, as you might expect when looking at the order of the indices). So we would need to integrate this for r going from infinity to… Well… The distance between q1 and q2 – wherever they end up as we put them back into the world – so that’s what’s denoted by r12. Now I hate integrals too, but this is an easy one. Just note that ∫rāˆ’Ā²dr = āˆ’1/r + k and you’ll be able to figure out that what I’ll write now makes sense (if not, I’ll do a similar integral in a moment): the work done in bringing two charges together from a large distance (infinity) is equal to:

U 1

So now we should bring in q3 and then q4, of course. That’s easy enough. Bringing the first two charges into that world we had emptied took a lot of time, but now we can automate the process. Trust me: we’ll be done in no time. šŸ™‚ We just need to sum over all of the pairs of charges qi and qj. So we write the total electrostatic energy U as the sum of the energies of all possible pairs of charges:

U 3

Huh? Can we do that? I mean… Every new charge that we’re bringing in here changes the field, doesn’t it? It does. But it’s the magic of the superposition principle at work here. Our third charge q3 is associated with two pairs in this formula. Think of it: we’ve got the q1q3 and the q2q3 combination, indeed. Likewise, our fourth charge q4 is to be paired up with three charges now: q1, q2 and q3. This formula takes care of it, and the ‘all pairs’ mention under the summation sign (Ī£) reminds us we should watch out that we don’t double-count pairs: the q1q3 and q3q1 combination, for example, count as one pair only, obviously. So, yes, we write ‘all pairs’ instead of the usual i, j subscripts. But then, yes, this formula takes care of it. We’re done!

Well… Not really, of course. We’ve still got some way to go before I can introduce the Poynting vector. šŸ™‚ However, to make sure you ‘get’ the energy formula above, let me insert an extremely simple diagram so you’ve got a bit of a visual of what we’re talking about.

U system
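To make the ā€˜all pairs’ sum a bit more tangible, here’s a minimal Python sketch (the charge values and positions are made up, of course) that computes U by looping over each distinct pair exactly once:

```python
from itertools import combinations

K = 8.9875517923e9   # Coulomb's constant k_e = 1/(4*pi*eps0), in N*m^2/C^2

def electrostatic_energy(charges, positions):
    """U as the sum over all distinct pairs: k_e*q_i*q_j/r_ij, each pair counted once."""
    U = 0.0
    for (qi, pi), (qj, pj) in combinations(zip(charges, positions), 2):
        r = sum((a - b) ** 2 for a, b in zip(pi, pj)) ** 0.5
        U += K * qi * qj / r
    return U

# two equal 1 microcoulomb charges, 1 m apart: U = k_e*q1*q2/r12
print(electrostatic_energy([1e-6, 1e-6], [(0, 0, 0), (1, 0, 0)]))
```

Note how `combinations` visits the q1q3 pair once and never the q3q1 pair, so there is no double-counting to worry about.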

3.Ā Now, let’s take a step back. We just calculated the (potential)Ā energy of the world (U), which is great. But perhaps we should also be interested in the world’sĀ potential Φ, rather than its potential energy U. Why? Well, we’ll want to know what happens when we bring yet another charge in—from outer space or so. šŸ™‚ And so then it’s easier to know the world’s potential, rather than its energy, because we can calculate the field from it using the E =Ā āˆ’āˆ‡Ī¦ formula.Ā So let’s de- and re-construct the world once again šŸ™‚ but now we’ll look at what happens with the field and the potential.

We know our first charge created a field with a field strength we calculated as:

E 3

So, when bringing in our second charge, we can use our Φ(P) integral to calculate the potential:

potential

[Let me make a note here, just for the record. You probably think I am being pretty childish when talking about my re-construction of the world in terms of bringing all charges out and then back in again but, believe me, there will be a lot of confusion when we start talking about the energy of one charge, and that confusion can be avoided, to a large extent, when you realize that the idea (I mean the concept itself, really—not its formula) of a potential really involves two charges. Just remember: it’s the first charge that causes the field (and, of course, any charge causes a field), but calculating a potential only makes sense when we’re talking about some other charge. Just make a mental note of it. You’ll be grateful to me later.]

Let’s now combine the integral and the formula for E above. Because you hate integrals as much as I do, I’ll spell it out: the integrand is q/(4πε0r²). Now, let’s bring q/4πε0 out for a while so we can focus on solving ∫(1/r²)dr. Now, ∫(1/r²)dr is equal to –1/r + k, and so the whole antiderivative is –q/(4πε0r) + k. Now, we integrate from r = āˆž to r and, remembering the minus sign in front of our Φ(P) integral, we get āˆ’[–q/(4πε0r) āˆ’ (–q/(4πε0·∞))] = āˆ’[–q/(4πε0r) āˆ’ 0] = q/(4πε0r). Let me present this somewhat nicer:

E 4

You’ll say: so what? Well… We’re done! The only thing we need to do now is add up the potentials of all of the charges in the world. So the formula for the potential Φ at a point which we’ll simply refer to as point 1, is:

P 1

Note that our index jĀ starts at 2, otherwise it doesn’t make sense: we’d have a division by zero for the q1/r11Ā term. Again, it’s an obvious remark, butĀ notĀ thinking about it can cause a lot of confusion down the line.
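If you’d rather check that integral numerically than trust my antiderivative, here’s a little Python sketch (the 1 nC charge and the cutoff radius are arbitrary choices) that integrates the field from r out to a very large distance and compares the result with q/(4πε0r):

```python
import numpy as np

K = 8.9875517923e9   # Coulomb's constant 1/(4*pi*eps0), N*m^2/C^2
q = 1e-9             # a hypothetical 1 nC charge

def phi_numeric(r, R=1e9, n=100_000):
    """Phi(r): integrate E = K*q/s^2 from s = r out to a large cutoff R."""
    s = np.logspace(np.log10(r), np.log10(R), n)
    E = K * q / s**2
    # trapezoid rule on the log-spaced grid
    return float(np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(s)))

# compare with the closed form q/(4*pi*eps0*r) = K*q/r
print(phi_numeric(0.5), K * q / 0.5)
```

The two numbers agree to many digits; the tiny discrepancy is just the truncation at R and the finite grid.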

4. Now, I am very sorry, but I have to inform you that we’ll shortly be talking charge densities and all that, rather than discrete charges, so I have to give you the continuum version of this formula, i.e. the formula we’ll use when we’ve got charge densities rather than individual charges. The sum above then becomes an infinite sum (i.e. an integral), and qj becomes a variable which we write as ρ(2). [That’s totally in line with the fact that our index j starts at 2, rather than at 1.] We get:

U 2

Just look at this integral, and try to understand it: we’re integrating over all of space – so we’re integrating the whole world, really šŸ™‚ – and the ρ(2)Ā·dV2 product in the integral is just the charge of an infinitesimally small volume of our world. So the whole integral is just the (infinite) sum of the contributions to the potential (at point 1) of all (infinitesimally small) charges that are around indeed. Now, there’s something funny here. It’s just a mathematical thing: we don’t need to worry about double-counting here. Why? Because we’re not taking products of charges here: each infinitesimal charge contributes just once. Just make a mental note of it, because it will be different in a moment.

Now we’re going to look at the continuum version of our energy formula indeed. Which energy formula? That electrostatic energy formula, which gave us the total electrostatic energy U as the sum of the energies of all possible pairs of charges:

U 3

Its continuum version is the following monster:

U 4

Hmm… What kind of integral is that? We’ve got two integration variables here: dV1 and dV2. Yes. And we’ve also got a 1/2 factor now, because we do not want to double-count and, unfortunately, there is no convenient way of writing an integral like this that keeps track of the pairs. It’s a so-called double integral, but I’ll let you look up the math yourself. In any case, we can simplify this integral so you don’t need to worry about it too much. How do we simplify it? Well… Just look at that integral we got for Φ(1): we calculated the potential at point 1 by integrating the ρ(2)Ā·dV2 product over all of space, so the integral above can be written as:

U 5

But so this integral integrates the ρ(1)·Φ(1)Ā·dV1 product over all of space, i.e. over all points 1 in space. So we can just drop the index and write the whole thing as the integral of ρ·Φ·dV over all of space:

U 6
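You can verify that writing U as (1/2)·Σ qi·Φ(i) – the discrete analogue of the integral above – really does undo the double-counting. The sketch below (with made-up charges and positions) computes U once as a sum over distinct pairs and once via the 1/2 factor:

```python
from itertools import combinations

K = 8.9875517923e9   # 1/(4*pi*eps0)

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

charges = [1e-9, -2e-9, 3e-9]               # hypothetical charges, in C
pos = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]     # their positions, in m

# U as a sum over distinct pairs
U_pairs = sum(K * qi * qj / dist(pi, pj)
              for (qi, pi), (qj, pj) in combinations(zip(charges, pos), 2))

# U as (1/2) * sum_i q_i * Phi(i), with Phi(i) the potential of all *other* charges at i
U_half = 0.5 * sum(
    qi * sum(K * qj / dist(pi, pj)
             for j, (qj, pj) in enumerate(zip(charges, pos)) if j != i)
    for i, (qi, pi) in enumerate(zip(charges, pos)))

print(abs(U_pairs - U_half) < 1e-15)   # the 1/2 exactly undoes the double-counting
```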

5. It’s time for the hat-trick now. The equation above is mathematically equivalent to the following equation:

U 7

Huh? Yes. Let me make two remarks here. First, on the math: the E = āˆ’āˆ‡Ī¦ formula allows you to rewrite the integrand of the integral above as E•E = (āˆ’āˆ‡Ī¦)•(āˆ’āˆ‡Ī¦) = (āˆ‡Ī¦)•(āˆ‡Ī¦). And then you may or may not remember that, when substituting E = āˆ’āˆ‡Ī¦ in Maxwell’s first equation (āˆ‡ā€¢E = ρ/ε0), we got the following equality: ρ = āˆ’Īµ0Ā·āˆ‡ā€¢(āˆ‡Ī¦) = āˆ’Īµ0Ā·āˆ‡Ā²Ī¦, so we can write ρΦ as āˆ’Īµ0Ā·Ī¦Ā·āˆ‡Ā²Ī¦. However, that still doesn’t show the two integrals are the same thing. The proof is actually rather involved, so I’ll refer you to the post I mentioned, where you can check it.

The second remark is much more fundamental. The two integrals are mathematically equivalent, but are they also physically? What do I mean by that? Well… Look at it. The second integral implies that we can look at (ε0/2)Ā·E•E = ε0E²/2 as an energy density, which we’ll denote by u, so we write:

D 6

Just to make sure you ā€˜get’ what we’re talking about here: u is the energy density in the little cube dV in the rather simplistic (and, therefore, extremely useful) illustration below (which, just like most of what I write above, I got from Feynman).

Capture

Now the question: what is the reality of that formula? Indeed, what we did when calculating U amounted to characterizing the Universe with some number U – and that’s kinda nice, of course! – but then what? Is u = ε0E²/2 anything real? Well… That’s what this post is about. So we’re finished with the introduction now. šŸ™‚

Energy density and energy flow in electrodynamics

Before giving you any more formulas, let me answer the question: there is no doubt, in the classical theory of electromagnetism at least, that the energy density u is somethingĀ very real. ItĀ hasĀ to be because of the charge conservation law. Charges cannot just disappear in space, to then re-appear somewhere else. The charge conservation law is written asĀ āˆ‡ā€¢j =Ā āˆ’āˆ‚Ļ/āˆ‚t, and that makes it clear it’s aĀ localĀ conservation law. Therefore, charges can only disappear and re-appear through some current. We write dQ1/dt = ∫ (j•n)Ā·da = āˆ’dQ2/dt, and here’s the simple illustration that comes with it:

charge flow

So we doĀ notĀ allow for any ‘non-local’ interactions here! Therefore, we say that, if energy goes away from a region, it’s because itĀ flowsĀ away through the boundaries of that region.Ā So that’s what the Poynting formulas are all about, and so I want to be clear on that from the outset.

Now, to get going with the discussion, I need to give you the formula for the energy density inĀ electrodynamics. Its shape won’t surprise you:

energy density

However, it’s just like the electrostatic formula: it takes quite a bit of juggling to get this from our electrodynamic equations so, if you want to see how it’s done, I’ll refer you to Feynman. Indeed, I feel the derivation doesn’t matter all that much, because the formula itself is very intuitive: it’s really the thing everyone knows about a wave, electromagnetic or not: the energy in it is proportional to the square of its amplitude, and so that’s E•E = E² and B•B = B². Now, you also know that the magnitude of B is 1/c of that of E, so cB = E, and so that explains the extra c² factor in the second term.
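Just to see the cB = E point in numbers: for a plane wave, the electric and magnetic terms each carry exactly half of the energy density, so u = ε0E². A quick check (the field amplitude is an arbitrary choice):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 299792458.0           # speed of light, m/s

E = 100.0        # hypothetical field amplitude of a plane wave, N/C
B = E / c        # in a plane wave, |B| = |E|/c, i.e. cB = E

u_E = 0.5 * eps0 * E**2          # electric part of the energy density
u_B = 0.5 * eps0 * c**2 * B**2   # magnetic part, with the extra c^2 factor

# the two parts are equal, and the total is eps0*E^2
print(math.isclose(u_E, u_B), math.isclose(u_E + u_B, eps0 * E**2))  # → True True
```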

The second formula is also very intuitive. Let me write it down:

energy flux

Just look at it: u is the energy density, so that’s the amount of energy per unit volume at a given point, and so whatever flows out of that point must represent its time rate of change. As for the āˆ’āˆ‡ā€¢S expression… Well… Sorry, I can’t keep re-explaining things: the āˆ‡ā€¢ operator is the divergence, and so it gives us the magnitude of a (vector) field’s source or sink at a given point. āˆ‡ā€¢C is a scalar, and if it’s positive in a region, then that region is a source. Conversely, if it’s negative, then it’s a sink. To be precise, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. So, in this case, it gives us the volume density of the flux of S. As you can see, the formula has exactly the same shape as āˆ‡ā€¢j = āˆ’āˆ‚Ļ/āˆ‚t.
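We can check the āˆ‚u/āˆ‚t = āˆ’āˆ‡ā€¢S bookkeeping numerically for the simplest case: a plane wave travelling in one dimension, for which u = ε0E² and S = cĀ·u. The sketch below (amplitude and wave number are arbitrary choices) differentiates both with finite differences and verifies that the two terms cancel:

```python
import numpy as np

eps0, c = 8.8541878128e-12, 299792458.0
E0 = 1.0                      # hypothetical wave amplitude, N/C
k = 2 * np.pi                 # hypothetical wave number, 1/m
w = c * k                     # omega = c*k in vacuum

x = np.linspace(0.0, 1.0, 2001)
dt = 1e-12                    # small time step for the central difference

def u(x, t):
    """Energy density of the travelling wave: cB = E, so u = eps0*E^2."""
    E = E0 * np.cos(k * x - w * t)
    return eps0 * E**2

def S(x, t):
    """Energy flux: the wave carries its energy at speed c, so S = c*u."""
    return c * u(x, t)

du_dt = (u(x, dt) - u(x, -dt)) / (2 * dt)   # time derivative at t = 0
dS_dx = np.gradient(S(x, 0.0), x)           # divergence in one dimension

# du/dt + dS/dx should vanish: whatever a point loses flows out through its boundary
print(np.max(np.abs(du_dt + dS_dx)))
```

The residual is orders of magnitude smaller than either term separately, which is the local conservation law at work.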

So what is S? Well… Think about the more general formula for the flux out of some closed surface, which we get from integrating over the volume enclosed. It’s just Gauss’ Theorem:

Gauss Theorem

Just replace C by E, and think about what it meant: the flux of E was the field strength multiplied by the surface area, so it was the totalĀ flow of E. Likewise, SĀ represents the flow of (field) energy. Let me repeat this, because it’s an important result:

SĀ represents the flow of field energy.

Huh?Ā What flow? Per unit area? Per second? How do you define such ‘flow’? Good question. Let’s do a dimensional analysis:

  1. E is measured in newton per coulomb, so [E•E] = [E²] = N²/C².
  2. B is measured in (N/C)/(m/s). [Huh? Well… Yes. I explained that a couple of times already. Just check it in my introduction to electric circuits.] So we get [B•B] = [B²] = (N²/C²)Ā·(s²/m²), but the dimension of our c² factor is (m²/s²), so we’re left with N²/C². That’s nice, because we need to add terms in the same units.
  3. Now we need to look at ε0. That constant usually ā€˜fixes’ our units, but can we trust it to do the same now? Let’s see… One of the many ways in which we can express its dimension is [ε0] = C²/(NĀ·m²), so if we multiply that with N²/C², we find that u is expressed in N/m². Wow! That’s kinda neat. Why? Well… Just multiply with m/m and its dimension becomes NĀ·m/m³ = J/m³, so that’s joule per cubic meter. So… Yes: u has got the right unit for something that’s supposed to measure energy density!
  4. OK. Now, we take the time rate of change of u, and so both the right and left side of our āˆ‚u/āˆ‚t = āˆ’āˆ‡ā€¢S formula are expressed in (J/m³)/s, which means that the dimension of S itself must be J/(m²·s). Just check it by writing it all out: āˆ‡ā€¢S = āˆ‚Sx/āˆ‚x + āˆ‚Sy/āˆ‚y + āˆ‚Sz/āˆ‚z, and so that’s something per meter so, to get the dimension of S itself, we need to go from cubic meter to square meter. Done! Let me highlight the grand result:

S is the energy flow per unit area and per second.

Now we’ve got its magnitude and its dimension, but what is its direction? Indeed, we’ve been writing S as a vector, but… Well… What’s its direction indeed?

Well… Hmm… I referred you to Feynman for the derivation of that u = ε0E²/2 + ε0c²B²/2 formula for u, and the direction of S – I should actually say, its complete definition – comes out of that derivation as well. So… Well… I think you should just believe what I’ll be writing here for S:

S formula

So it’s the vector cross product of E and B with ε0c² thrown in. It’s a simple formula really and, because I didn’t drag you through the whole argument, you should just quickly do a dimensional analysis again—just to make sure I am not talking too much nonsense. šŸ™‚ So what’s the direction? Well… You just need to apply the usual right-hand rule:

right hand rule

OK. We’re done! This S vector, which – let me repeat it – represents the energy flow per unit area and per second, is what is referred to as Poynting’s vector, and it’s a most remarkable thing, as I’ll show now. Let’s think about theĀ implicationsĀ of this thing.
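Let’s check the formula on a plane wave before moving on. In the sketch below (the 50 N/C amplitude is an arbitrary choice), E points along x, B along y with |B| = |E|/c, and the cross product duly points along z, with magnitude c times the energy density u:

```python
import numpy as np

eps0, c = 8.8541878128e-12, 299792458.0

# a plane wave travelling along +z: E along x, B along y, |B| = |E|/c
E = np.array([50.0, 0.0, 0.0])        # hypothetical amplitude, N/C
B = np.array([0.0, 50.0 / c, 0.0])

S = eps0 * c**2 * np.cross(E, B)      # the Poynting vector
u = 0.5 * eps0 * (E @ E) + 0.5 * eps0 * c**2 * (B @ B)

print(S / np.linalg.norm(S))          # unit vector along +z: energy flows with the wave
print(np.isclose(np.linalg.norm(S), c * u))   # |S| = c*u: a density u moving at speed c
```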

Poynting’s vector in electrodynamics

The S vector is actually quite similar to the heat flow vectorĀ h, which we presented when discussing vector analysis and vector operators. The heat flow out of a surface element daĀ is the area times the component ofĀ hĀ perpendicular to da, so that’s (h•n)Ā·da = hnĀ·da. Likewise, we can writeĀ (S•n)Ā·da = SnĀ·da. The units of S and h are also the same:Ā joule per second and per square meterĀ or, using the definition of theĀ wattĀ (1 W = 1 J/s), in watt per square meter.Ā In fact, if you google a bit, you’ll find that both hĀ and S are referred to as aĀ flux density:

  1. The heat flow vector hĀ is the heat flux densityĀ vector, from which we get the heat flux through an area through the (h•n)Ā·da = hnĀ·daĀ product.
  2. The energy flow SĀ is the energy flux density vector, from which we get the energy flux through the (S•n)Ā·da = SnĀ·daĀ product.

The big difference, of course, is that we getĀ hĀ from a simpler vector equation:

h = āˆ’Īŗāˆ‡T ⇔ (hx, hy, hz) = āˆ’Īŗ(āˆ‚T/āˆ‚x, āˆ‚T/āˆ‚y, āˆ‚T/āˆ‚z)

The vector equation for SĀ is more complicated:

S formula

So it’s a vector product. Note that S will be zero if E = 0 and/or if B = 0. So S = 0 in electrostatics, i.e. when there are no currents and, hence, no magnetic field. Let’s examine Feynman’s examples.

The illustration below shows the geometry of theĀ E, B and SĀ vectors for a light wave. It’s neat, and totally in line with what we wrote on theĀ radiation pressure, or the momentum of light. So I’ll refer you to that post for an explanation, and to Feynman himself, of course.

light wave

OK. The situation here is rather simple. Feynman gives a few others examples that are notĀ so simple, like that ofĀ a charging capacitor, which is depicted below.

capacitor

The Poynting vector points inwards here, toward the axis. What does it mean?Ā It means the energy isn’t actually coming down the wires, but from the space surrounding the capacitor.Ā 

What?Ā I know. It’s completely counter-intuitive, at first that is. You’d think it’s the charges. But it actually makes sense. The illustration below shows how we should think of it. The charges outside of the capacitor are associated with a weak, enormously spread-out field that surrounds the capacitor. So if we bring them to the capacitor, that field gets weaker, and the field between the plates gets stronger. So the field energy which is way out moves into the space between the capacitor plates indeed, and so that’s what Poynting’s vector tells us here.

capacitor 2

Hmm… Yes. You can be skeptical. You should be. But that’s how it works. The next illustration looks at a current-carrying wire itself. Let’s first look at the B and E vectors. You’re familiar with the magnetic field around a wire, so the B vector makes sense, but what about the electric field? Aren’t wires supposed to be electrically neutral? It’s a tricky question, and we handled it in our post on the relativity of fields. The positive and negative charges in a wire should cancel out, indeed, but it’s the negative charges that move and, because of their movement, we have the relativistic effect of length contraction, so the volumes are different, and the positive and negative charge densities do not cancel out: the wire appears to be charged, so we do have a mix of E and B! Let me quickly give you the formula: E = λ/(2πε0r), with λ the (apparent) charge per unit length, so it’s the same formula as for a long line of charge, or for a long uniformly charged cylinder.

So we have a non-zero E and B and, hence, a non-zero Poynting vector S, whose direction is radially inward, so there is a flow of energy into the wire, all around. What the hell? Where does it go? Well… There are a few possibilities here: the charges need kinetic energy to move, or they increase their potential energy when moving towards the terminals of our capacitor to increase the charge on the plates or, much more mundane, the energy may be radiated out again in the form of heat. It looks crazy, but that’s how it really is. In fact, the more you think about it, the more logical it all starts to sound. Energy must be conserved locally, and so it’s just field energy going in and re-appearing in some other form. So it does make sense. But, yes, it’s weird, because no one bothered to teach us this in school. šŸ™‚

wire
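We can actually check Feynman’s wire story with numbers. Take a hypothetical wire of length L, radius a and resistance R carrying a current I: the tangential E at the surface is the voltage drop per meter, B circles the wire, and integrating S over the wire’s surface should give exactly the dissipated power I²R (note that ε0c² = 1/μ0, so |S| = EĀ·B/μ0 here):

```python
import math

mu0 = 4e-7 * math.pi   # vacuum permeability; note eps0*c^2 = 1/mu0

# a hypothetical wire: length L (m), radius a (m), current I (A), resistance R (ohm)
L, a, I, R = 1.0, 1e-3, 2.0, 5.0

E_t = I * R / L                    # tangential E at the surface: voltage drop per meter
B = mu0 * I / (2 * math.pi * a)    # B circling the wire at its surface
S = E_t * B / mu0                  # |S| = eps0*c^2*|E x B| = E*B/mu0, pointing inward

P_in = S * 2 * math.pi * a * L     # total flux of S through the wire's surface
print(P_in, I**2 * R)              # the inflow matches the dissipated power I^2*R
```

So the field energy flowing in through the sides is exactly the heat coming out: nothing comes ā€˜down the wire’ at all.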

The ‘craziest’ example is the one below: we’ve got a charge and a magnet here. All is at rest. Nothing is moving… Well… I’ll correct that in a moment. šŸ™‚ The charge (q) causes a (static) Coulomb field, while our magnet produces the usual magnetic field, whose shape we (should) recognize: it’s the usual dipole field. So E and B are not changing. But so when we calculate our Poynting vector, we see there is a circulation of S. TheĀ EƗBĀ product is notĀ zero.Ā So what’s going on here?

crazy

Well… There is no netĀ change in energy with time: the energy just circulates around and around. Everything which flows into one volume flows out again. As Feynman puts it: “It is like incompressible water flowing around.” What’s the explanation? Well… Let me copy Feynman’s explanation of this ‘craziness’:

“Perhaps it isn’t so terribly puzzling, though, when you remember that what we called a ā€œstaticā€ magnet is really a circulating permanent current. In a permanent magnet the electrons are spinning permanently inside. So maybe a circulation of the energy outside isn’t so queer after all.”

So… Well… It looks like we do need to revise some of our ‘intuitions’ here. I’ll conclude this post by quoting Feynman on it once more:

“You no doubt get the impression that the Poynting theory at least partially violates your intuition as to where energy is located in an electromagnetic field. You might believe that you must revamp all your intuitions, and, therefore have a lot of things to study here. But it seems really not necessary. You don’t need to feel that you will be in great trouble if you forget once in a while that the energy in a wire is flowing into the wire from the outside, rather than along the wire. It seems to be only rarely of value, when using the idea of energy conservation, to notice in detail what path the energy is taking. The circulation of energy around a magnet and a charge seems, in most circumstances, to be quite unimportant. It is not a vital detail, but it is clear that our ordinary intuitions are quite wrong.”

Well… That says it all, I guess. As far as I am concerned, I feel the Poynting vector actually makes things easier to understand. Indeed, the E and B vectors were quite confusing, because we had two of them, and the magnetic field is, frankly, a weird thing. Just think about the units in which we’re measuring B: (N/C)/(m/s). I can’t imagine what a unit like that could possibly represent, so I must assume you can’t either. But so now we’ve got this Poynting vector that combines both E and B, and which represents the flow of the field energy. Frankly, I think that makes a lot of sense, and it’s surely much easier to visualize than E and/or B. [Having said that, of course, you should note that E and B do have their value, obviously, if only because they represent the lines of force, and so that’s something very physical too. I guess it’s a matter of taste, to some extent, but so I’d tend to soften Feynman’s comments on the supposed ā€˜craziness’ of S.]

In any case… The next thing I should discuss is field momentum. Indeed, if we’ve got flow, we’ve got momentum. But I’ll leave that for my next post: this topic can’t be exhausted in one post only, indeed. šŸ™‚ So let me conclude this post with a very nice illustration I got from the Wikipedia article on the Poynting vector. It shows the Poynting vector around a voltage source and a resistor, as well as what’s going on in-between. [Note that the magnetic field is given by the field vector H, which is related to B as follows: B = μ0(H + M), with M the magnetization of the medium. B and H are obviously just proportional in empty space, with μ0 as the proportionality constant.]

Poynting_vectors_of_DC_circuit

Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/
Some content on this page was disabled on June 20, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Re-visiting relativity and four-vectors: the proper time, the tensor and the four-force

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. In addition, I note the dark force has amused himself by removing some material. So no use to read this. Read my recent papers instead. šŸ™‚

Original post:

My previous post explained how four-vectors transform from one reference frame to the other. Indeed, a four-vector is not just some one-dimensional array of four numbers: it represents something—a physical vector that… Well… Transforms like a vector. šŸ™‚ So what vectors are we talking about? Let’s see what we have:

  1. We knew the position four-vector already, which we’ll write as xμ = (ct, x, y, z) = (ct, x).
  2. We also proved that Aμ = (Φ, Ax, Ay, Az) = (Φ, A) is a four-vector: it’s referred to as the four-potential.
  3. We also know the momentum four-vector from the Lectures on special relativity. We write it as pμ = (E, px, py, pz) = (E, p), with E = γm0, p = γm0v, and γ = (1āˆ’v²/c²)^(āˆ’1/2) or, for c = 1, γ = (1āˆ’v²)^(āˆ’1/2).

To show that it’s not just a matter of adding some fourth t-component to a three-vector, Feynman gives the example of the four-velocity vector. We have vx = dx/dt, vy = dy/dt and vz = dz/dt, but a vμ = (d(ct)/dt, dx/dt, dy/dt, dz/dt) = (c, dx/dt, dy/dt, dz/dt) ‘vector’ is, obviously, not a four-vector. [Why obviously? The inner product vμvμ is not invariant.] In fact, Feynman ‘fixes’ the problem by noting that ct, x, y and z have the ‘right behavior’, but the d/dt operator doesn’t. The d/dt operator is not an invariant operator. So how does he fix it then? He tries the (1āˆ’v²/c²)^(āˆ’1/2)Ā·d/dt operator and, yes, it turns out we do get a four-vector then. In fact, we get that four-velocity vector uμ that we were looking for:

uμ = (1āˆ’v²)^(āˆ’1/2)Ā·(1, vx, vy, vz)

[Note we assume we’re using equivalent time and distance units now, so c = 1 and v/c reduces to a new variable v.]

Now how do we know this is a four-vector? How can we prove it? It’s simple. We can get it from our pμ = (E, p) by dividing it by m0, which is an invariant scalar in four dimensions too. Now, it is easy to see that a division by an invariant scalar does not change the transformation properties. So just write it all out, and you’ll see that pμ/m0 = uμ and, hence, that uμ is a four-vector too. šŸ™‚
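Just to make this tangible, here’s a quick numerical sanity check, a sketch in plain Python with c = 1, motion along x only, and made-up numbers (m0 = 2, v = 0.6):

```python
import math

def four_momentum(m0, v):
    """p_mu = (E, p) = (gamma*m0, gamma*m0*v) for motion along x, with c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return (gamma * m0, gamma * m0 * v, 0.0, 0.0)

def minkowski_norm2(a):
    """The invariant 'length' a_t^2 - a_x^2 - a_y^2 - a_z^2 (+--- signature)."""
    at, ax, ay, az = a
    return at**2 - ax**2 - ay**2 - az**2

m0, v = 2.0, 0.6                    # gamma = 1.25
p = four_momentum(m0, v)
u = tuple(p_i / m0 for p_i in p)    # dividing by the invariant scalar m0

print(u)                   # ~ (1.25, 0.75, 0, 0) = (gamma, gamma*v, 0, 0)
print(minkowski_norm2(u))  # ~ 1.0 for any v: u_mu u^mu is invariant
```

Note that uμuμ = γ²·(1 āˆ’ v²) = 1, whatever the velocity: that’s the invariance at work.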

We’ve got an interesting thing here actually: division by an invariant scalar, or applying that (1āˆ’v²/c²)^(āˆ’1/2)Ā·d/dt operator, which is referred to as an invariant operator, to a four-vector will give us another four-vector. Why is that? Let’s switch to compatible time and distance units, so c = 1, to simplify the analysis that follows.

The invariant (1āˆ’v²)^(āˆ’1/2)Ā·d/dt operator and the proper time s

Why is the (1āˆ’v²)^(āˆ’1/2)Ā·d/dt operator invariant? Why does it ‘fix’ things? Well… Think about the invariant spacetime interval (Ī”s)² = Ī”t² āˆ’ Ī”x² āˆ’ Ī”y² āˆ’ Ī”z² going to the limit (ds)² = dt² āˆ’ dx² āˆ’ dy² āˆ’ dz². Of course, we can and should relate this to an invariant quantity s = ∫ ds. Just like Ī”s, this quantity also ‘mixes’ time and distance. Now, we could try to associate some derivative d/ds with it because, as Feynman puts it, ā€œit should be a nice four-dimensional operation because it is invariant with respect to a Lorentz transformation.ā€ Yes. It should be. So let’s relate ds to dt and see what we get. That’s easy enough: dx = vxĀ·dt, dy = vyĀ·dt, dz = vzĀ·dt, so we write:

(ds)² = dt² āˆ’ vx²·dt² āˆ’ vy²·dt² āˆ’ vz²·dt² ⇔ (ds)² = dt²·(1 āˆ’ vx² āˆ’ vy² āˆ’ vz²) = dt²·(1 āˆ’ v²)

and, therefore, ds = dtĀ·(1āˆ’v²)^(1/2). So our operator d/ds is equal to (1āˆ’v²)^(āˆ’1/2)Ā·d/dt, and we can apply it to any four-vector, as we are sure that, as an invariant operator, it’s going to give us another four-vector. I’ll highlight the result, because it’s important:

The d/ds = (1āˆ’v²)^(āˆ’1/2)Ā·d/dt operator is an invariant operator for four-vectors.

For example, if we apply it to xμ = (t, x, y, z), we get the very same four-velocity vector uμ:

dxμ/ds = uμ = pμ/m0

Now, if you’re somewhat awake, you should ask yourself: what is this s, really, and what is this operator all about? Our new function s = ∫ ds is not the distance function, as it’s got both time and distance in it. Likewise, the invariant operator d/ds = (1āˆ’v²)^(āˆ’1/2)Ā·d/dt has both time and distance in it (the distance is implicit in the v² factor). Still, it is referred to as the proper time along the path of a particle. Now why is that? If it’s got distance and time in it, why don’t we call it the ‘proper distance-time’ or something?

Well… The invariant quantity s actually is the time that would be measured by a clock that’s moving along, in spacetime, with the particle. Just think of it: in the reference frame of the moving particle itself, Ī”x, Ī”y and Ī”z must be zero, because it’s not moving in its own reference frame. So the (Ī”s)² = Ī”t² āˆ’ Ī”x² āˆ’ Ī”y² āˆ’ Ī”z² formula reduces to (Ī”s)² = Ī”t², and so we’re only adding time to s. Of course, this view of things implies that the proper time itself is fixed only up to some arbitrary additive constant, namely the setting of the clock at some event along the ā€˜world line’ of our particle, which is its path in four-dimensional spacetime. But… Well… In a way, s is the ‘genuine’ or ‘proper’ time coming with the particle’s reference frame, and so that’s why Einstein called it that. You’ll see (later) that it plays a very important role in general relativity theory (which is a topic we haven’t discussed yet: we’ve only touched special relativity, so no gravity effects).
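A small numerical sketch may help to see what’s going on (plain Python, c = 1, made-up velocities): integrating ds = (1āˆ’v²)^(1/2)Ā·dt along a path gives the time a comoving clock accumulates, and for uniform motion the closed form is just s = tĀ·(1āˆ’v²)^(1/2).

```python
import math

def proper_time(v_of_t, t0, t1, n=100_000):
    """Integrate ds = sqrt(1 - v^2) dt along a path (midpoint rule, c = 1)."""
    dt = (t1 - t0) / n
    s = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * dt
        v = v_of_t(t)
        s += math.sqrt(1.0 - v * v) * dt
    return s

# A clock moving uniformly at v = 0.8 for one unit of our coordinate time:
s_uniform = proper_time(lambda t: 0.8, 0.0, 1.0)   # closed form: sqrt(1 - 0.64) = 0.6

# A clock that only gradually speeds up to v = 0.8 loses less time:
s_accel = proper_time(lambda t: 0.8 * t, 0.0, 1.0)
```

So one unit of our time is only 0.6 units of proper time for the uniformly moving clock, while the accelerating clock, which spends most of the interval at lower speed, accumulates more than that.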

OK. I know this is simple and complicated at the same time: the math is (fairly) easy but, yes, it may be difficult to ‘understand’ this in some kind of intuitiveĀ way. But let’s move on.

The four-force vector fμ

We know the relativistically correct equation for the motionĀ of some charge q. It’s just Newton’s Law F = dp/dt = d(mv)/dt. The only difference is that we areĀ not assuming that m is some constant. Instead, we use the pĀ = γm0vĀ formula to get:

F = d(γm0v)/dt = d[m0Ā·vĀ·(1āˆ’v²/c²)^(āˆ’1/2)]/dt

How can we get a four-vector for the force? It turns out that we get it when applying our new invariant operator to the momentum four-vector pμ = (E, p), so we write: fμ = dpμ/ds. But pμ = m0uμ = m0dxμ/ds, so we can re-write this as fμ = d(m0·dxμ/ds)/ds, which gives us a formula which is reminiscent of the Newtonian F = ma equation:

fμ = dpμ/ds = d(m0Ā·dxμ/ds)/ds = m0Ā·d²xμ/ds²

What is this thing? Well… It’s not so difficult to verify that the x-, y- and z-components are just our old-fashioned Fx, Fy and Fz, multiplied by the (1āˆ’v²)^(āˆ’1/2) factor, so that’s the components of F re-scaled. The t-component is (1āˆ’v²)^(āˆ’1/2)Ā·dE/dt. Now, dE/dt is the time rate of change of energy and, hence, it’s equal to the rate of doing work on our charge, which is equal to F•v. So we can write fμ as:

fμ = (1āˆ’v²)^(āˆ’1/2)Ā·(F•v, Fx, Fy, Fz) = (1āˆ’v²)^(āˆ’1/2)Ā·(F•v, F)
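One nice property of this four-force, which is easy to check numerically, is that its +āˆ’āˆ’āˆ’ product with the four-velocity vanishes (assuming the rest mass m0 is constant): fμuμ = γ²·(F•v āˆ’ F•v) = 0. Here’s a sketch with made-up values for F and v (c = 1):

```python
import math

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def four_velocity(v):
    gamma = 1.0 / math.sqrt(1.0 - dot3(v, v))
    return (gamma, gamma * v[0], gamma * v[1], gamma * v[2])

def four_force(F, v):
    """f_mu = (1 - v^2)^(-1/2) * (F.v, Fx, Fy, Fz), with c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - dot3(v, v))
    return (gamma * dot3(F, v), gamma * F[0], gamma * F[1], gamma * F[2])

F = (1.0, -2.0, 0.5)      # an ordinary 3-force (made-up numbers)
v = (0.3, 0.1, 0.2)       # the particle's 3-velocity
f = four_force(F, v)
u = four_velocity(v)

# +--- inner product: f_t*u_t - f_x*u_x - f_y*u_y - f_z*u_z
inner = f[0] * u[0] - f[1] * u[1] - f[2] * u[2] - f[3] * u[3]
print(inner)   # ~ 0 (up to rounding): gamma^2 * (F.v - F.v)
```

So the four-force is, in this +āˆ’āˆ’āˆ’ sense, ā€˜orthogonal’ to the four-velocity, just like the ordinary magnetic force is orthogonal to v.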

The force and the tensor

We will now derive that formula which we ended the previous post with. We start with calculating the spacelike components of fμ from the Lorentz formula F = q(E + vƗB). [The terminology is nice, isn’t it? The spacelike components of the four-force vector! Now that sounds impressive, doesn’t it? But so… Well… It’s really just the old stuff we know already.] So we start with fx = (1āˆ’v²)^(āˆ’1/2)Ā·Fx, and write it all out:

fx = q·[ExĀ·(1āˆ’v²)^(āˆ’1/2) + (1āˆ’v²)^(āˆ’1/2)Ā·vyĀ·Bz āˆ’ (1āˆ’v²)^(āˆ’1/2)Ā·vzĀ·By]

What a monster! But,Ā hey! We can ‘simplify’ this by substituting stuff by (1) the t-, x-, y- and z-components of the four-velocity vector uμ and (2) the components of our tensor Fμν = [Fij] = [āˆ‡iAjĀ āˆ’Ā āˆ‡jAi] with i, j = t, x, y, z. We’ll also pop in the diagonal FxxĀ = 0 element, just to make sure it’s all there. We get:

fx = q·(utFxt āˆ’ uxFxx āˆ’ uyFxy āˆ’ uzFxz)

Looks better, doesn’t it? šŸ™‚ Of course, it’s just the same, really. This is just an exercise in symbolism. Let me insert the electromagnetic tensor we defined in our previous post, just as a reminder of what that Fμν matrix actually is:

Fμν (rows μ = t, x, y, z and columns ν = t, x, y, z):

Ftt = 0,  Ftx = āˆ’Ex, Fty = āˆ’Ey, Ftz = āˆ’Ez
Fxt = Ex, Fxx = 0,  Fxy = āˆ’Bz, Fxz = By
Fyt = Ey, Fyx = Bz, Fyy = 0,  Fyz = āˆ’Bx
Fzt = Ez, Fzx = āˆ’By, Fzy = Bx, Fzz = 0

If you read my previous post, this matrix – or the concept of a tensor – has no secrets for you. Let me briefly summarize it, because it’s an important result as well. The tensor is (a generalization of) the cross-product inĀ four-dimensional space. We take two vectors:Ā aμ =Ā (at, ax, ay, az) and bμ =Ā (bt, bx, by, bz) and then we takeĀ cross-products of their components just like we did in three-dimensional space, so we write TijĀ = aibjĀ āˆ’Ā ajbi. Now, it’s easy to see that this combination implies that TijĀ = āˆ’ TjiĀ and that TiiĀ = 0, which is why we only have sixĀ independent numbers out of the 16 possible combinations, and which is why we’ll get a so-called anti-symmetric matrix when we organize them in a matrix. In three dimensions, the very same definition of the cross-product TijĀ gives us 9 combinations, and only 3 independent numbers, which is why we represented our ‘tensor’ as a vector too! In four-dimensional space we can’t do that: six things cannot be represented by a four-vector, so weĀ need to use thisĀ matrix, which is referred to as a tensor of the second rank in four dimensions. [When you start using words like that, you’ve come a long way, really. :-)]
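The counting argument in that summary is easy to verify mechanically. The little sketch below (with made-up vectors) builds Tij = aibj āˆ’ ajbi in any dimension and confirms the antisymmetry, the zero diagonal, and the 3-versus-6 independent-component count:

```python
def tensor(a, b):
    """T_ij = a_i*b_j - a_j*b_i: the generalized, 'cross-product-like' tensor."""
    n = len(a)
    return [[a[i] * b[j] - a[j] * b[i] for j in range(n)] for i in range(n)]

def independent_count(n):
    """Entries strictly above the diagonal of an antisymmetric n x n matrix."""
    return n * (n - 1) // 2

T4 = tensor((1.0, 2.0, 3.0, 4.0), (5.0, 6.0, 7.0, 8.0))

antisym = all(T4[i][j] == -T4[j][i] for i in range(4) for j in range(4))
print(antisym)               # True, and T4[i][i] == 0 for every i
print(independent_count(3))  # 3 -> can masquerade as a 3-vector
print(independent_count(4))  # 6 -> cannot fit in a four-vector
```

So in three dimensions the antisymmetric tensor has exactly three independent numbers, which is why the ā€˜vector’ trick works there, and only there.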

[…]Ā OK. Back to our four-force. It’s easy to get a similar one-liner forĀ fyĀ and fzĀ too, of course, as well as for ft. But… Yes, ft… Is it the same thing really? Let me quickly copy Feynman’s calculation for ft:

ft = q·(utFtt āˆ’ uxFtx āˆ’ uyFty āˆ’ uzFtz) = (1āˆ’v²)^(āˆ’1/2)Ā·q·(E•v), which should equal (1āˆ’v²)^(āˆ’1/2)Ā·dE/dt = (1āˆ’v²)^(āˆ’1/2)Ā·F•v = (1āˆ’v²)^(āˆ’1/2)Ā·q·(E + vƗB)•v

ItĀ does: remember that vƗB and v are orthogonal, and so their dot product is zero indeed. So, to make a long story short, the four equations – one for each component of the four-force vector fμ – can be summarized in the following elegant equation:

fμ = qĀ·uνFμν

Writing this all requires a few conventions, however. For example, Fμν is a 4Ɨ4 matrix, and so uν has to be written as a 1Ɨ4 row vector. And the formulas for the fx and ft components also make it clear that we want to use the +āˆ’āˆ’āˆ’ signature here, so the convention for the signs in the uνFμν product is the same as that for the scalar product aμbμ. So, in short, you really need to interpret what’s being written here.
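To see those conventions in action, here’s a numerical sketch of the fμ = qĀ·uνFμν contraction. It assumes the sign conventions of the tensor table shown earlier (Fxt = Ex, Fxy = āˆ’Bz, and so on) and made-up values for q, E, B and v; the check is that the spacelike part comes out as γ·q·(E + vƗB) and the timelike part as γ·q·(E•v):

```python
import math

def em_tensor(E, B):
    """F_mu_nu with rows/columns ordered (t, x, y, z), using Fxt = Ex,
    Fxy = -Bz, Fyz = -Bx, Fzx = -By, and antisymmetry for the rest."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return [
        [0.0, -Ex, -Ey, -Ez],
        [ Ex, 0.0, -Bz,  By],
        [ Ey,  Bz, 0.0, -Bx],
        [ Ez, -By,  Bx, 0.0],
    ]

def four_force(q, u, F):
    """f_mu = q * u_nu F_mu_nu, summing over nu with the +--- signs."""
    sign = (1.0, -1.0, -1.0, -1.0)
    return tuple(q * sum(sign[nu] * u[nu] * F[mu][nu] for nu in range(4))
                 for mu in range(4))

q = 2.0
E = (1.0, 0.0, 0.5)
B = (0.0, 2.0, -1.0)
v = (0.3, -0.2, 0.1)
gamma = 1.0 / math.sqrt(1.0 - sum(c * c for c in v))
u = (gamma, gamma * v[0], gamma * v[1], gamma * v[2])

f = four_force(q, u, em_tensor(E, B))

# The spacelike part should equal gamma * q * (E + v x B)...
vxB = (v[1] * B[2] - v[2] * B[1],
       v[2] * B[0] - v[0] * B[2],
       v[0] * B[1] - v[1] * B[0])
lorentz = tuple(q * (E[i] + vxB[i]) for i in range(3))
# ...and the timelike part gamma * q * (E.v), i.e. the rate of doing work.
```

So one compact contraction reproduces both the Lorentz force and the power equation, which is the whole point of the tensor notation.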

A more important question, perhaps, is: what can we do with it? Well… Feynman’s evaluation of the usefulness of this formula is rather succinct: ā€œAlthough it is nice to see that the equations can be written that way, this form is not particularly useful. It’s usually more convenient to solve for particle motions by using the F = q(E + vƗB) = d[m0Ā·vĀ·(1āˆ’v²)^(āˆ’1/2)]/dt equations, and that’s what we will usually do.ā€

Having said that, this formula really makes good on the promise I started my previous post with: we wanted a formula, someĀ mathematical construct, that effectively presents the electromagnetic force as oneĀ force, as one physical reality. So… Well… Here it is! šŸ™‚

Well… That’s it for today. Tomorrow we’ll talk about energy and about a veryĀ mysterious concept—the electromagnetic mass. That should be fun! So I’llĀ c u tomorrow! šŸ™‚


Relativistic transformations of fields and the electromagnetic tensor

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. In addition, I note the dark force has amused himself by removing some material. So no use to read this. Read my recent papers instead. šŸ™‚

Original post:

We’re going to do a very interesting piece of math here. It’s going to bring a lot of things together. The key idea is to present a mathematical construct that effectively presents the electromagnetic force as one force, as one physical reality. Indeed, we’ve been saying repeatedly that electromagnetism is one phenomenon only, but we’ve been writing it always as something involving two vectors: the electric field vector E and the magnetic field vector B. Of course, Lorentz’ force law F = q(E + vƗB) makes it clear we’re talking one force only but… Well… There is a way of writing it all up that is much more elegant.

I have to warn you though: this post doesn’t add anything to theĀ physicsĀ we’ve seen so far: it’s all math, really and, to a large extent, math only. So if you read this blog because you’re interested in the physics only, then you may just as well skip this post. Having said that, theĀ mathematical concept we’re going to present is that of the tensorĀ and… Well… You’ll have to get to know that animal sooner or later anyway, so you may just as well give it a try right now, and see whatever you can get out of this post.

The concept of a tensor further builds on the concept of the vector, which we liked so much because it allows us to write the laws of physics as vector equations, which do notĀ change whenĀ going from one reference frame to another. In fact, we’ll see that a tensor can be described as a ‘special’ vectorĀ cross productĀ (to be precise, we’ll show that a tensor is a ‘more general’ cross product, really). So the tensor and vector concepts areĀ veryĀ closely related, but then… Well… If you think about it, the concept of a vector and the concept of a scalar are closely related, too! So we’re just moving up the value chain, so to speak: from scalar fields to vector fields to… Well… Tensor fields! And in quantum mechanics, we’ll introduce spinors, and so we also have spinor fields!Ā Having said that, don’t worry about tensor fields. Let’s first try to understand tensorsĀ tout court.Ā šŸ™‚

So… Well… Here we go.Ā Let me start with it all by reminding you of the concept of a vector, and why we like to use vectors and vector equations.

The invariance of physics and the use of vector equations

What’s a vector? You may think, naively, that any one-dimensional array of numbers is a vector. But… Well… No! In math, we may, effectively, refer to any one-dimensional array of numbers as a ‘vector’, perhaps, but in physics, a vector does represent something real, something physical, and so a vector is only a vector if it transforms like a vector under the transformation rules that apply when going from one frame of reference, i.e. one coordinate system, to another. Examples of vectors in three dimensions are: the velocity vector v, or the momentum vector p = mĀ·v, or the position vector r.

Needless to say, the same can be said of scalars: mathematicians may define a scalar as just any real number, but that’s not how it works in physics. A scalar in physics refers to something real, i.e. a scalar field, like the temperature (T) inside of a block of material. In fact, think about your first vector equation: it may have been the one determining the heat flow (h), i.e. h = āˆ’ĪŗĀ·āˆ‡T = (āˆ’ĪŗĀ·āˆ‚T/āˆ‚x, āˆ’ĪŗĀ·āˆ‚T/āˆ‚y, āˆ’ĪŗĀ·āˆ‚T/āˆ‚z). It immediately shows how scalar and vector fields are intimately related.

Now, when discussing the relativistic framework of physics, we introduced vectors inĀ fourĀ dimensions, i.e.Ā four-vectors.Ā The most basic four-vector is the spacetime four-vector R = (ct, x, y, z), which is often referred to as an event, but it’s just aĀ point in spacetime, really. So it’s a ‘point’ with a time as well as a spatial dimension, so it also has t in it, besides x, y and z. It is also known as the position four-vectorĀ but, again, you should think of a ‘position’ that includes time! Of course, we can re-write R as R = (ct, r), with r = (x, y, z), so here we sort of ‘break up’ the four-vector in a scalar and a three-dimensional vector, which is something we’ll do from time to time, indeed. šŸ™‚

We also have aĀ displacement four-vector, which we can write asĀ Ī”R = (cĀ·Ī”t, Ī”r). There are other four-vectors as well, including theĀ four-velocity, theĀ four-momentum and theĀ four-forceĀ four-vectors, which we’ll discuss later (in the last section of this post).

So it’s just like using three-dimensional vectors in three-dimensional, ā€˜Newtonian’ physics: the use of four-vectors is going to allow us to write the laws of physics as vector equations, but in four dimensions, rather than three. So we get the ā€˜Einsteinian’ physics, the real physics, so to speak—or the relativistically correct physics, I should say. And these four-dimensional vector equations will also not change when going from one reference frame to another, and so our four-vectors will be vectors indeed, i.e. they will transform like vectors under the transformation rules that apply when going from one frame of reference, i.e. one coordinate system, to another.

What transformation? Well… In Newtonian or Galilean physics, we had translations and rotations and what have you, but what weĀ are interested in right now areĀ ‘Einsteinian’ transformations of coordinate systems, so these have to ensure that allĀ of the laws of physics that we know of, including the principle of relativity,Ā still look the same.Ā You’ve seen these transformation rules. We don’t call them the ‘Einsteinian’ transformation rules, but the LorentzĀ transformation rules, because it was a Dutch physicist (Hendrik Lorentz) who first wrote them down. So these rules are veryĀ differentĀ from the Newtonian or Galilean transformation rules which everyone assumed to be valid until the Michelson-Morley experiment unequivocally established that the speed of lightĀ did notĀ respect the Galilean transformation rules. VeryĀ different? Well… Yes. In their mathematical structure, that is. Of course, when velocities are low, i.e.Ā non-relativistic, then they yield the same result,Ā approximately, that is. However,Ā I explained that in my post on special relativity, and so I won’t dwell on that here.

Let me just jot down both sets of rules, assuming that the two reference frames move with respect to each other along the x-axis only, so the y- and z-components of u are zero.

The Lorentz transformation rules (left) versus the Galilean rules (right):

x′ = (x āˆ’ uĀ·t)/(1 āˆ’ u²/c²)^(1/2)    |    x′ = x āˆ’ uĀ·t
y′ = y                              |    y′ = y
z′ = z                              |    z′ = z
t′ = (t āˆ’ uĀ·x/c²)/(1 āˆ’ u²/c²)^(1/2) |    t′ = t

The Galilean or Newtonian rules are the simple rules on the right. Going from one reference frame to another (let’s call them S and S’ respectively) is just a matter of adding or subtracting speeds: if my car goes 100 km/h, and yours goes 120 km/h, then youĀ will see my car falling behind at a speed of (minus) 20 km/h. That’s it. We could alsoĀ rotateĀ our reference frame, and our NewtonianĀ vector equationsĀ would still look the same. As Feynman notes, smilingly, it’s what a lot of armchair philosophers think relativity theory is all about, but so it’s got nothing to do with it. It’s plain wrong!

In any case, back to vectors and transformations.Ā The key to the so-calledĀ invarianceĀ of the laws of physics is the use of vectors and vector operators that transform like vectors.Ā For example, if we defined A and B as (Ax, Ay, Az) and (Bx, By, Bz), then we knew that the so-called inner productĀ A•BĀ would look the same in all rotated coordinate systems, so we can write: A•B =Ā A’•B’. So we know that if we have a product like that on both sides of an equation, we’re fine: the equation will have the same formĀ in all rotated coordinate systems. Also, the gradient, i.e. ourĀ vector operatorĀ āˆ‡Ā = (āˆ‚/āˆ‚x, āˆ‚/āˆ‚y, āˆ‚/āˆ‚z), when applied to a scalar function, gave three quantities that also transform like a vector under rotation. We also defined a vectorĀ crossĀ product, which yielded aĀ vector (as opposed to the inner product, i.e. the vectorĀ dotĀ product, which yields a scalar):

c = aƗb: cx = aybz āˆ’ azby, cy = azbx āˆ’ axbz, cz = axby āˆ’ aybx
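Before we look at the cross product, the invariance claim for the inner product is easy to check numerically. Below is a small Python sketch (made-up vectors, a rotation about the z-axis) showing that A•B comes out as the same number in the rotated frame:

```python
import math

def rotate_z(v, theta):
    """Rotate a 3-vector through angle theta about the z-axis."""
    x, y, z = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

A = (1.0, 2.0, 3.0)
B = (-4.0, 0.5, 2.0)
A2, B2 = rotate_z(A, 0.7), rotate_z(B, 0.7)

print(dot(A, B))    # 3.0
print(dot(A2, B2))  # ~ 3.0 as well: same form, same value in the rotated frame
```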

So how does this thing behave under a Galilean transformation? Well… You may or may not remember that we used this cross-product to define theĀ angular momentum L, which was a cross product of the radius vector r and the momentum vector p = mv, as illustrated below. The animation also gives the torque Ļ„, which is, loosely speaking, a measure of the turning force: it’s the cross product of r and F, i.e. the force on the lever-arm.

[Animation: the angular momentum L = rƗp and the torque Ļ„ = rƗF shown as cross products of the radius vector r with p and F.]

The components of L are:

Lx = yĀ·pz āˆ’ zĀ·py, Ly = zĀ·px āˆ’ xĀ·pz, Lz = xĀ·py āˆ’ yĀ·px

Now, we find that these three numbers, or objects if you want, transform in exactly the same way as the components of a vector. However, as Feynman points out, that’s a matter of ‘luck’ really. It’s something ‘special’. Indeed, you may or may not remember that we distinguished axial vectors from polar vectors. L is an axial vector, while r and p are polar vectors, and so we find that, in three dimensions, the cross product of two polar vectors will always yield an axial vector. Axial vectors are sometimes referred to as pseudovectors, which suggests that they are ‘not so real’ as… Well… Polar vectors, which are sometimes referred to as ‘true’ vectors. However, it doesn’t matter when doing these Newtonian or Galilean transformations: pseudo or true, both vectors transform like vectors. šŸ™‚

But so… Well… We’re actually getting a bit of a heads-up here: if we’d be mixing (or ‘crossing’) polar and axial vectors, or mixing axial vectors only, so if we’d define something involvingĀ LĀ andĀ pĀ (rather than r and p), or something involvingĀ LĀ andĀ Ļ„, then we may notĀ be so lucky, and then we’d have to carefully examine our cross-product, or whatever other product we’d want to define, because its components mayĀ notĀ behave like a vector.

Huh? Whatever other product we’d want to define? Why are you saying that?Ā Well…Ā We actuallyĀ can think of other products. For example, if we haveĀ two vectors a = (ax, ay,Ā az) and b = (bx, by, bz), then we’ll haveĀ nine possible combinations of their components, which we can write as TijĀ = aibj. So that’s like Lxy, LyzĀ and LzxĀ really. Now, you’ll say: “No. It isn’t. We don’t have nine combinations here. Just three numbers.” Well… Think about it: we actually doĀ haveĀ nineĀ LijĀ combinations too here, as we can write: LijĀ = riĀ·pj – rjĀ·pi. It just happens that, with this definition, only threeĀ of these combinations LijĀ are independent. That’s because the other six numbers are either zero or the opposite. Indeed, it’s easy to verify thatĀ LijĀ = –LjiĀ , and LiiĀ  = 0. So… Well… It turns out that the three components of our LĀ = rƗp ‘vector’ are actually a subset of a set of nineĀ LijĀ numbers.Ā So… Well… Think about it. We cannot just do whatever we want with our ‘vectors’. We need to watch out.
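The claim that L = rƗp is really a subset of nine Lij numbers can be checked in a few lines (made-up values for r and p):

```python
def L_tensor(r, p):
    """The nine combinations L_ij = r_i*p_j - r_j*p_i, with i, j = x, y, z."""
    return [[r[i] * p[j] - r[j] * p[i] for j in range(3)] for i in range(3)]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r = (1.0, 2.0, 3.0)
p = (0.5, -1.0, 2.0)
L = L_tensor(r, p)

# Only three of the nine numbers are independent (L_ij = -L_ji, L_ii = 0),
# and they are exactly the components of the 'vector' L = r x p:
Lvec = (L[1][2], L[2][0], L[0][1])   # (L_yz, L_zx, L_xy)
print(Lvec)
print(cross(r, p))   # the same three numbers
```

So picking (Lyz, Lzx, Lxy) out of the 3Ɨ3 array is exactly the ā€˜trick’ the text talks about.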

In fact, I do not want to get too much ahead of myself, but I can already tell you that the matrix with these nine Tij = aibj combinations is what is referred to as the tensor. To be precise, it’s referred to as a tensor of the second rank in three dimensions. The ‘second rank’, also known as the ‘degree’ or ‘order’, refers to the fact that we’ve got two indices, and the ‘three dimensions’ is because we’re using three-dimensional vectors. We’ll soon see that the electromagnetic tensor is also of the second rank, but it’s a tensor in four dimensions. In any case, I should not get ahead of myself. Just note what I am saying here: the tensor is like a ‘new’ product of two vectors, a new type of ‘cross’ product really (because we’re mixing the components, so to say), but it doesn’t yield a vector: it yields a matrix. For three-dimensional vectors, we get a 3Ɨ3 matrix. For four-vectors, we’ll get a 4Ɨ4 matrix. And so the full truth about our angular momentum vector L, is the following:

  1. There is a thing which we call the angular momentum tensor. It’s a 3Ɨ3 matrix, so it has nine elements which are defined as: LijĀ = riĀ·pj – rjĀ·pi. BecauseĀ of this definition, it’s an antisymmetric tensor of the second order in three dimensions, so it’s got only three independentĀ components.
  2. The three independent elements are the components of our ‘vector’ L, and picking them out and calling these three components a ‘vector’ is actually a ‘trick’ that only works in three dimensions. They really just happen to transform like a vector under rotation or under whatever Galilean transformation! [By the way, do you now understand why I was saying that we can look at a tensor as a ā€˜more general’ cross product?]
  3. In fact, in four dimensions, we’ll use a similar definition and define 16 elementsĀ FijĀ as FijĀ = āˆ‡iAjĀ āˆ’Ā āˆ‡jAi, using the two four-vectors āˆ‡Ī¼Ā and Aμ (so we have 4Ɨ4 = 16 combinations indeed), out of whichĀ only sixĀ will be independent for the very same reason: we have an antisymmetric vector combination here, FijĀ = āˆ’FjiĀ and FiiĀ = 0. šŸ™‚ However, because we cannotĀ represent six independent things by four things, we doĀ notĀ get some other four-vector, and so that’s why we cannot apply the same ‘trick’ in four dimensions.

However, here I amĀ getting way ahead of myself and so… Well… Yes. Back to the main story line. šŸ™‚ So let’s try to move to the next level of understanding, which is… Well…

Because of guys like Maxwell and Einstein, we now knowĀ that rotations are part of the Newtonian world, in which time and space are neatly separated, and that things are notĀ so simple in Einstein’s world, which is the real world, as far as we know, at least! Under a Lorentz transformation, the new ā€˜primed’ space and time coordinates are a mixture of the ā€˜unprimed’ ones. Indeed, the new x’Ā is a mixture of x and t, and the new t’Ā is a mixture of x and t as well. [Yes, please scroll all the way up and have a look at the transformation on the left-hand side!]

So youĀ don’t have that under a Galilean transformation: in the Newtonian world, space and time are neatly separated, and time is absolute, i.e. it is the same regardless of the reference frame. In Einstein’s world – our world – that’s not the case: time is relative, orĀ localĀ as Hendrik Lorentz termed it quite appropriately,Ā and so it’s space-time – i.e. ā€˜some kind of union of space and time’ as Minkowski termed it – that transforms.

So that’s why physicists useĀ four-vectorsĀ toĀ keep track of things. These four-vectors always have three space-like components, but they also include one so-calledĀ time-like component.Ā It’s the only way to ensure thatĀ the laws of physics are unchanged when moving with uniform velocity.Ā Indeed, any true law of physics we write down must be arranged so that the invariance of physics (as a “fact of Nature”, as Feynman puts it) is built in, and so that’s why we use Lorentz transformations and four-vectors.

In the mentioned post, I gave a few examples illustrating how the Lorentz rules work. Suppose we’re looking at some spaceship that is moving at half the speed of light (i.e. 0.5c) and that, inside the spaceship, some object is also moving at half the speed of light, as measured in the reference frame of the spaceship, then we get the rather remarkable result that, from ourĀ point of view (i.e. ourĀ reference frame as observer on the ground), that object is notĀ going as fast as light, as Newton or Galileo – and most present-day armchair philosophers šŸ™‚ – would predict (0.5cĀ +Ā 0.5cĀ = c). We’d see it move at a speed equal to vĀ =Ā 0.8c. Huh?Ā How do we know that? Well… We can derive a velocity formula from the Lorentz rules:

vx = (u + vx′)/(1 + uĀ·vx′/c²)

So now you can just put in the numbers: vx = (0.5c + 0.5c)/(1 + 0.5Ā·0.5) = 0.8c. See?

Let’s do another example. Suppose we’re looking at a light beam inside the spaceship, so something that’s traveling at speed c itself in the spaceship. How does that look to us? The Galilean transformation rules say its speed should be 1.5c, but that can’t be true of course, and the Lorentz rules save us once more: vxĀ = (0.5cĀ + c)/(1 + 0.5Ā·1) = c, so it turns out that the speed of light doesĀ notĀ depend on the reference frame: it looks the same – both to the man in the ship as well as to the man on the ground. As Feynman puts it: “This is good, for it is, in fact, what the Einstein theory of relativity was designed to do in the first place—so it hadĀ betterĀ work!” šŸ™‚
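Both examples fit in a two-line function, which may be a handy way to play with other numbers too (a sketch, collinear velocities only):

```python
def add_velocities(v1, v2, c=1.0):
    """Relativistic composition of two collinear velocities: (v1 + v2)/(1 + v1*v2/c^2)."""
    return (v1 + v2) / (1.0 + v1 * v2 / c**2)

print(add_velocities(0.5, 0.5))  # ~ 0.8, not the Galilean 1.0
print(add_velocities(0.5, 1.0))  # 1.0: light moves at c in every frame
```

Note that feeding in c itself always gives c back: (v + 1)/(1 + v) = 1, whatever v is.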

So let’s now apply relativity to electromagnetism. Indeed, that’s what this post is all about! However, before I do so, let me re-write the Lorentz transformation rules for c = 1. We can equate the speed of light to one, indeed, when we measure time and distance in equivalent units. It’s just a matter of ditching our seconds for meters (so our time unit becomes the time that light needs to travel a distance of one meter), or ditching our meters for seconds (so our distance unit becomes the distance that light travels in one second). You should be familiar with this procedure. If not, well… Check out my posts on relativity. So here’s the same set of rules for c = 1:

x′ = (x āˆ’ uĀ·t)/(1 āˆ’ u²)^(1/2), y′ = y, z′ = z, t′ = (t āˆ’ uĀ·x)/(1 āˆ’ u²)^(1/2)

They’re much easier to remember and work with, and so that’s good, because now we need toĀ look at how these rules work with four-vectors and the various operations and operators we’ll be defining on them. Let’s look at that step by step.

Electrodynamics in relativistic notation

Let me copy the UniversalĀ Set of Equations and Their Solution once more:

āˆ‡ā€¢E = ρ/ε0 and āˆ‡Ć—E = āˆ’āˆ‚B/āˆ‚t
āˆ‡ā€¢B = 0 and c²·āˆ‡Ć—B = j/ε0 + āˆ‚E/āˆ‚t

with the solutions:

E = āˆ’āˆ‡Ī¦ āˆ’ āˆ‚A/āˆ‚t and B = āˆ‡Ć—A
Φ(1, t) = ∫ [ρ(2, t āˆ’ r12/c)/(4πε0Ā·r12)]Ā·dV2 and A(1, t) = ∫ [j(2, t āˆ’ r12/c)/(4πε0c²·r12)]Ā·dV2

The solution for Maxwell’s equations is given in terms of the (electric) potential Φ and the (magnetic) vectorĀ potential A. I explained that in my post on this, so I won’t repeat myself too much here either. The only point you should note is that this solution is the result of a special choice of Φ andĀ A, which we referred to as the Lorentz gauge.Ā We’ll touch upon this condition once more, so just make a mental note of it.

Now, E and B do not correspond to four-vectors: they depend on x, y, z and t, but they have three components only: Ex, Ey, Ez, and Bx, By, and Bz respectively. So we have six independent terms here, rather than four things that, somehow, we could combine into some four-vector. [Does this ring a bell? It should. :-)] Having said that, it turns out that we can combine Φ and A into a four-vector, which we’ll refer to as the four-potential and which we’ll write as:

Aμ = (Φ, A) = (Φ, Ax, Ay, Az) = (At, Ax, Ay, Az) with At = Φ.

So that’s a four-vector just likeĀ R = (ct, x, y, z).

How do we know that Aμ is a four-vector? Well… Here I need to say a few things about those Lorentz transformation rules and, more importantly, about the required condition ofĀ invarianceĀ under a Lorentz transformation. So, yes, here we need to dive into the math.

Four-vectors and invariance under Lorentz transformations

When you were in high-school, you learned how to rotate your coordinate frame. You also learned that the distance of a point from the origin does not change under a rotation, so you’d write r′² = x′² + y′² + z′² = r² = x² + y² + z², and you’d say that r² is an invariant quantity under a rotation. Indeed, transformations leave certain things unchanged. From the Lorentz transformation rules itself, it is easy to see that

c²·t′² āˆ’ x′² āˆ’ y′² āˆ’ z′² = c²·t² āˆ’ x² āˆ’ y² āˆ’ z², or,

if c = 1, that t′² āˆ’ x′² āˆ’ y′² āˆ’ z′² = t² āˆ’ x² āˆ’ y² āˆ’ z²,

is an invariant under a Lorentz transformation. We found the same for the so-called spacetime interval Ī”s² = c²·Δt² āˆ’ Ī”r², which we write as Ī”s² = Ī”t² āˆ’ Ī”r² as we chose our time or distance units such that c = 1. [Note that, from now on, we’ll assume that’s the case, so c = 1 everywhere. We can always change back to our old units when we’re done with the analysis.] Indeed, such invariance allowed us to define spacelike, timelike and lightlike intervals using the so-called light cone emanating from a single event and traveling in all directions.

You should note that, for four-vectors, we do not have a simple sum of three terms. Indeed, we don’t write x2 + y2 + z2 but t2 – x2 – y2 – z2. So we’ve got a +āˆ’āˆ’āˆ’ thing here or (it’s just another convention) we could also work with a āˆ’+++ sum of terms. The convention is referred to as the signature, and we will use the +āˆ’āˆ’āˆ’ signature here. Let’s continue the story. Now, all four-vectors aμ = (at, ax, ay, az) have this property that:

a’t2 – a’x2 – a’y2 – a’z2 = at2 – ax2 – ay2 – az2.

[The primed quantities are, obviously, the quantities as measured in the other reference frame.] So. Well… Yes. šŸ™‚ But… Well… Hmm… We can say that our four-potential vector is a four-vector, but we still have to prove that. So we need to prove that Φ’2 – A’x2 – A’y2 – A’z2 = Φ2 – Ax2 – Ay2 – Az2 for our four-potential vector Aμ = (Φ, A). So… Yes… How can we do that? The proof is not so easy, but you need to go through it as it will introduce some more concepts and ideas you need to understand.
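In case you want to see this invariance at work before slogging through the proof, here’s a little numerical sketch (my own illustration, in Python, with made-up numbers): it boosts an arbitrary four-vector along the x-direction and checks that t2 āˆ’ x2 āˆ’ y2 āˆ’ z2 comes out the same in both frames.

```python
import math

def boost(four_vector, v):
    """Lorentz boost along the x-direction (units such that c = 1)."""
    t, x, y, z = four_vector
    g = 1.0 / math.sqrt(1.0 - v**2)  # the Lorentz factor
    return (g * (t - v * x), g * (x - v * t), y, z)

def interval(fv):
    """The invariant quantity t^2 - x^2 - y^2 - z^2 (signature + - - -)."""
    t, x, y, z = fv
    return t**2 - x**2 - y**2 - z**2

a = (2.0, 1.0, 0.5, -0.3)   # an arbitrary four-vector
a_prime = boost(a, 0.6)     # the same four-vector, seen from a frame moving at v = 0.6
print(abs(interval(a) - interval(a_prime)) < 1e-12)  # True
```

You can try other velocities and other four-vectors: the difference stays at the level of floating-point rounding.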

In my post on the Lorentz gauge, I mentioned that Maxwell’s equations can be re-written in terms of Φ andĀ A, rather than in terms of E and B. The equations are:

āˆ‡2Φ āˆ’ āˆ‚2Φ/āˆ‚t2 = āˆ’Ļ/ε0
āˆ‡2A āˆ’ āˆ‚2A/āˆ‚t2 = āˆ’j/ε0
āˆ‡ā€¢A = āˆ’āˆ‚Ī¦/āˆ‚t

The expressions look rather formidable, but don’t panic: just look at them. Of course, you need to be familiar with the operators that are being used here, so that’s the Laplacian āˆ‡2 and the divergence operator āˆ‡ā€¢ that’s being applied to the scalar Φ and the vector A. I can’t re-explain this. I am sorry. Just check my posts on vector analysis. You should also look at the third equation: that’s just the Lorentz gauge condition, which we introduced when deriving these equations from Maxwell’s equations. Having said that, it’s the first and second equation which describe Φ and A as a function of the charges and currents in space, and so that’s what matters here. So let’s unfold the first equation. It says the following:

āˆ‚2Φ/āˆ‚x2 + āˆ‚2Φ/āˆ‚y2 + āˆ‚2Φ/āˆ‚z2 āˆ’ āˆ‚2Φ/āˆ‚t2 = āˆ’Ļ/ε0

In fact, if we were talking about free or empty space, i.e. regions where there are no charges and currents, then the right-hand side would be zero and this equation would represent a wave equation: some potential Φ that is changing in time and moving out at the speed c. Here again, I am sorry I can’t write about this here: you’ll need to check one of my posts on wave equations. If you don’t want to do that, you should believe me when I say that, if you see an equation like this:

āˆ‚2ĪØ/āˆ‚x2 āˆ’ (1/c2)Ā·āˆ‚2ĪØ/āˆ‚t2 = 0

then the function ĪØ(x, t) must be some function

ĪØ(x, t) = f(x āˆ’ ct) + g(x + ct)

Now, that’s a function representing a wave traveling at speed c, i.e. the phase velocity. Always? Yes.Ā Always! It’s got to do with the x āˆ’ ct and/or x +Ā ctĀ  argument in the function. But, sorry, I need to move on here.

The unfolding of the equation with Φ makes it clear that we have four equations really. Indeed, the second equation is actually three equations: one for Ax, one for Ay, and one for Az respectively. The four quantities on the right-hand side of these equations are ρ, jx, jy and jz respectively, divided by ε0, which is a universal constant that does not change when going from one coordinate system to another. Now, the quantities ρ, jx, jy and jz transform like a four-vector. How do we know that? It follows from the charge conservation law. We used it when solving the problem of the fields around a moving wire, when we demonstrated the relativity of the electric and magnetic field. Indeed, the relevant equations were:

ρ’ = (ρ āˆ’ vĀ·jx)/√(1 āˆ’ v2)
j’x = (jx āˆ’ vĀ·Ļ)/√(1 āˆ’ v2)
j’y = jy and j’z = jz

You can check that against the Lorentz transformation rules for c = 1: they are exactly the same, with (ρ, jx, jy, jz) playing the role of (t, x, y, z). Hence, the (ρ, jx, jy, jz) vector is, effectively, a four-vector, and we’ll denote it by jμ = (ρ, j). I now need to explain something else. [And, yes, I know this is becoming a very long story but… Well… That’s how it is.]

It’s about our operatorsĀ āˆ‡, āˆ‡ā€¢, āˆ‡Ć— and āˆ‡2Ā , so that’s the gradient, theĀ divergence, curlĀ and LaplacianĀ operator respectively: they all have a four-dimensional equivalent. Of course, that won’t surprise you. 😦 Let me just jot all of them down, so we’re done with that, and then I’ll focus on the four-dimensional equivalent of the LaplacianĀ Ā āˆ‡ā€¢āˆ‡ =Ā āˆ‡2Ā , which is referred to as theĀ D’Alembertian, and which is denoted byĀ ā–”2, because that’s the one we need to prove that our four-potential vector is a realĀ four-vector. [I know: ā–”2Ā is a tiny symbol for a pretty monstrous thing, but I can’t help it: my editor tool is pretty limited.]

āˆ‡Ī¼ = (āˆ‚/āˆ‚t, āˆ’āˆ‚/āˆ‚x, āˆ’āˆ‚/āˆ‚y, āˆ’āˆ‚/āˆ‚z) (the four-gradient, which operates on a scalar and yields a four-vector)
āˆ‡Ī¼aμ = āˆ‚at/āˆ‚t + āˆ‚ax/āˆ‚x + āˆ‚ay/āˆ‚y + āˆ‚az/āˆ‚z (the four-divergence, which operates on a four-vector and yields a scalar)
ā–”2 = āˆ‡Ī¼āˆ‡Ī¼ = āˆ‚2/āˆ‚t2 āˆ’ āˆ‚2/āˆ‚x2 āˆ’ āˆ‚2/āˆ‚y2 āˆ’ āˆ‚2/āˆ‚z2 (the D’Alembertian)

Now, we’re almost there. Just hang in for a little longer. It should be obvious that we can re-write those two equations with Φ, A, ρ and j, as:

ā–”2Aμ = jμ/ε0

Just to make sure, let me remind you that Aμ = (Φ, A) and thatĀ jμ = (ρ, j). Now, our new D’Alembertian operator is just an operator—a prettyĀ formidableĀ operator but, still, it’s an operator, and so itĀ doesn’t change when the coordinate system changes, so the conclusion is that,Ā IFĀ jμ = (ρ, j) is a four-vector – which it is – and, therefore, transforms like a four-vector,Ā THENĀ the quantities Φ, Ax, Ay, and AzĀ must also transformĀ like a four-vector, which means they areĀ (the components of) a four-vector.

So… Well… Think about it, but not too long, because it’s just an intermediate result we had to prove. So that’s done. But we’re not done here. It’s just the beginning, actually. :-/ Let me repeat our intermediate result:

Aμ = (Φ, A) is a four-vector. We call it the four-potential vector.

OK. Let’s continue. Let me firstĀ draw your attention to that expression with the D’Alembertian above. Which expression? This one:

ā–”2Aμ = jμ/ε0

What about it? Well… You should note thatĀ the physics of that equation is just the same as Maxwell’s equations. So it’s one equation only, but it’s got it all.

It’s quite a pleasure to re-write it in such elegant form. Why? Think about it: it’s a four-vector equation: we’ve got a four-vector on the left-hand side, and a four-vector on the right-hand side. Therefore, the equation keeps exactly the same form in all reference frames, and so it directly shows the invariance of electrodynamics under the Lorentz transformation.

Huh? Yes. You may think about this a little longer. šŸ™‚

To wrap this up, I should also note that we can also express the gauge condition using our new four-vector notation. Indeed, we can write it as:

āˆ‡Ī¼Aμ = 0

It’s referred to as the Lorentz condition and it is, effectively, a condition for invariance, i.e. it ensures that the four-vector equation above does stay in the form it is in for all reference frames. Note that we’re re-writing it using the four-dimensional equivalent of the divergence operatorĀ āˆ‡ā€¢, but so we don’t have a dot between āˆ‡Ī¼Ā and Aμ. In fact, the notation is pretty confusing, and it’s easy to think we’re talking some gradient, rather than the divergence. So let me therefore highlight the meaning of both once again. It looks the same, but it’s two veryĀ different things: the gradient operates on a scalar, while the divergence operates on a (four-)vector. Also note the +āˆ’āˆ’āˆ’ signature is only there for theĀ gradient, not for the divergence!

āˆ‡Ī¼Ļ† = (āˆ‚Ļ†/āˆ‚t, āˆ’āˆ‚Ļ†/āˆ‚x, āˆ’āˆ‚Ļ†/āˆ‚y, āˆ’āˆ‚Ļ†/āˆ‚z) is the gradient of a scalar φ, while āˆ‡Ī¼aμ = āˆ‚at/āˆ‚t + āˆ‚ax/āˆ‚x + āˆ‚ay/āˆ‚y + āˆ‚az/āˆ‚z is the divergence of a four-vector aμ.

You’ll wonder why they didn’t use some • or āˆ— symbol, and the answer is: I don’t know. I know it’s hard to keep inventing symbols for all these different ā€˜products’ – the āŠ— symbol, for example, is reserved for tensor products, which we won’t get into – but… Well… I think they could have done something here. 😦

In any case… Let’s move on. Before we do, please note that we can also re-write our conservation law for electric charge using our new four-vector notation. Indeed, you’ll remember that we wrote that conservation law as:

āˆ‡ā€¢j = āˆ’āˆ‚Ļ/āˆ‚t

Using our new four-vector operator āˆ‡Ī¼, we can re-write that as āˆ‡Ī¼jμ = 0. So all of electrodynamics can be summarized in the two equations only—Maxwell’s law and the charge conservation law:

ā–”2Aμ = jμ/ε0 (Maxwell’s equations) and āˆ‡Ī¼jμ = 0 (charge conservation)

OK. We’re now ready to discuss the electromagnetic tensor. [I know… This is becoming an incredibly long and incredibly complicated piece but, ifĀ you get through it, you’ll admitĀ it’s really worth it.]

The electromagnetic tensor

The whole analysis above was done in terms of the Φ and A potentials. It’s time to get back to our field vectorsĀ E and B. We know we can easily get them from Φ and A, using the rules we mentioned as solutions:

E = āˆ’āˆ‡Ī¦ āˆ’ āˆ‚A/āˆ‚t and B = āˆ‡Ć—A

These two equations should not be treated as just two more formulas. They are essential, and you should be able to jot them down anytime anywhere. They should be on your kitchen door, in your toilet and above your bed. šŸ™‚ For example, the second equation gives us the components of the magnetic field vector B:

Bx = āˆ‚Az/āˆ‚y āˆ’ āˆ‚Ay/āˆ‚z
By = āˆ‚Ax/āˆ‚z āˆ’ āˆ‚Az/āˆ‚x
Bz = āˆ‚Ay/āˆ‚x āˆ’ āˆ‚Ax/āˆ‚y

Now, look at these equations. The x-component is equal to a couple of terms that involve only y– and z-components. The y-component is equal to something involving only x and z. Finally, the z-component only involves x and y. Interesting. Let’s define a ā€˜thing’ that we’ll denote by Fzy:

Fzy = āˆ‚Az/āˆ‚y āˆ’ āˆ‚Ay/āˆ‚z

So now we can write: Bx = Fzy, By = Fxz, and Bz = Fyx. Now look at our equation for E. It turns out the components of E are equal to things like Fxt, Fyt and Fzt! Indeed, Fxt = āˆ‚Ax/āˆ‚t āˆ’ āˆ‚At/āˆ‚x = Ex!

But… Well… No. 😦 The sign is wrong! Ex = āˆ’āˆ‚Ax/āˆ‚t āˆ’ āˆ‚At/āˆ‚x, so we need to modify our definition of Fxt. When the t-component is involved, we’ll define our ā€˜F-things’ as:

Fxt = āˆ’āˆ‚Ax/āˆ‚t āˆ’ āˆ‚At/āˆ‚x

So both terms now carry the same sign, rather than opposite signs. It looks quite arbitrary but, frankly, you’ll have to admit it’s sort of consistent with our +āˆ’āˆ’āˆ’ signature for our four-vectors and, in just a minute, you’ll see it’s fully consistent with our definition of the four-dimensional vector operator āˆ‡Ī¼ = (āˆ‚/āˆ‚t, āˆ’āˆ‚/āˆ‚x, āˆ’āˆ‚/āˆ‚y, āˆ’āˆ‚/āˆ‚z). So… Well… Let’s go along with it.

What about the Fxx, Fyy, Fzz and Ftt terms? Well… Fxx = āˆ‚Ax/āˆ‚x āˆ’ āˆ‚Ax/āˆ‚x = 0, and it’s easy to see that Fyy and Fzz are zero too. But Ftt? Well… It’s a bit tricky but, applying our definitions carefully, we see that Ftt must be zero too. In any case, the fact that Ftt = 0 will become obvious as we will be arranging these ā€˜F-things’ in a matrix, which is what we’ll do now. [Again: does this ring a bell? If not, it should. :-)]

Indeed, we’ve got sixteen possible combinations here, which Feynman denotes as Fμν, which is somewhat confusing, because Fμν usually denotes theĀ 4Ɨ4 matrixĀ representing all of these combinations. So let me use the subscripts i and j instead, and define FijĀ as:

FijĀ =Ā āˆ‡iAjĀ āˆ’Ā āˆ‡jAi

with āˆ‡iĀ being the t-, x-, y- orĀ z-component of āˆ‡Ī¼Ā =Ā (āˆ‚/āˆ‚t, āˆ’āˆ‚/āˆ‚x,Ā āˆ’āˆ‚/āˆ‚y,Ā āˆ’āˆ‚/āˆ‚z) and, likewise, AiĀ being the t-, x-, y- orĀ z-component of Aμ =Ā (Φ, Ax,Ā Ay,Ā Az). Just check it: FzyĀ = āˆ’āˆ‚Ay/āˆ‚z +Ā āˆ‚Az/āˆ‚y = āˆ‚Az/āˆ‚y āˆ’ āˆ‚Ay/āˆ‚z = Bx, for example, andĀ FxtĀ =Ā āˆ’āˆ‚Ī¦/āˆ‚x āˆ’Ā āˆ‚Ax/āˆ‚tĀ = Ex. So theĀ +āˆ’āˆ’āˆ’ conventionĀ works. [Also note that it’s easier now to see that FttĀ = āˆ‚Ī¦/āˆ‚t āˆ’ āˆ‚Ī¦/āˆ‚t = 0.]
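To convince yourself that the Fij = āˆ‡iAj āˆ’ āˆ‡jAi recipe really reproduces E and B, you can also check it numerically. The sketch below is my own illustration (with an arbitrary, made-up four-potential and c = 1): it approximates the partial derivatives by central differences and verifies that Fxt = Ex and Fzy = Bx at some random point.

```python
def dpart(f, i, p, h=1e-6):
    """Central-difference partial derivative of f with respect to coordinate i at point p."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

# a sample four-potential (hypothetical, just for the check; coordinates are (t, x, y, z))
Phi = lambda t, x, y, z: x * y + t * z
Ax  = lambda t, x, y, z: t * x
Ay  = lambda t, x, y, z: y * z + t**2
Az  = lambda t, x, y, z: x**2 * y

A = [Phi, Ax, Ay, Az]
sign = [1, -1, -1, -1]   # nabla_mu = (d/dt, -d/dx, -d/dy, -d/dz)

def F(i, j, p):
    """F_ij = nabla_i A_j - nabla_j A_i at point p (indices 0..3 = t, x, y, z)."""
    return sign[i] * dpart(A[j], i, p) - sign[j] * dpart(A[i], j, p)

p = (0.3, 1.2, -0.7, 0.5)
Ex = -dpart(Phi, 1, p) - dpart(Ax, 0, p)   # Ex = -dPhi/dx - dAx/dt
Bx = dpart(Az, 2, p) - dpart(Ay, 3, p)     # Bx = dAz/dy - dAy/dz
print(abs(F(1, 0, p) - Ex) < 1e-9)   # F_xt = Ex -> True
print(abs(F(3, 2, p) - Bx) < 1e-9)   # F_zy = Bx -> True
```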

We can now arrange the Fij in a matrix. This matrix is antisymmetric, because Fij = āˆ’Fji, and its diagonal elements are zero. [For those of you who love math: note that the diagonal elements of an antisymmetric matrix are always zero because of the Fij = āˆ’Fji constraint: just set i = j, so Fii = āˆ’Fii, which implies Fii = 0.]

Now that matrix is referred to as the electromagnetic tensor. With the rows and columns ordered as t, x, y, z, and in units such that c = 1, it looks as follows (if we plug c back in, the B-components appear multiplied by c: remember that B’s magnitude is 1/c times E’s magnitude):

Fμν =
| 0    āˆ’Ex   āˆ’Ey   āˆ’Ez |
| Ex    0    āˆ’Bz    By |
| Ey    Bz    0    āˆ’Bx |
| Ez   āˆ’By   Bx     0  |

So… Well… Great ! We’re done! Well… Not quite. šŸ™‚

We can get this matrix in a number of ways. The least complicated way is, of course, just to calculate all Fij components and then put them in a [Fij] matrix, using the i as the row number and the j as the column number. You need to watch out with the conventions though: i and j start on t and end on z. šŸ™‚

The other way to do it is to write the āˆ‡Ī¼ = (āˆ‚/āˆ‚t, āˆ’āˆ‚/āˆ‚x, āˆ’āˆ‚/āˆ‚y, āˆ’āˆ‚/āˆ‚z) operator as a 4Ɨ1 column vector, which you then multiply with the four-vector Aμ written as a 1Ɨ4 row vector. So āˆ‡Ī¼Aμ is then a 4Ɨ4 matrix, which we combine with its transpose, i.e. (āˆ‡Ī¼Aμ)T, as shown below. So what’s written below is (āˆ‡Ī¼Aμ) āˆ’ (āˆ‡Ī¼Aμ)T.

[Matrix illustration: the (i, j) element of āˆ‡Ī¼Aμ is āˆ‡iAj, so the (i, j) element of the difference (āˆ‡Ī¼Aμ) āˆ’ (āˆ‡Ī¼Aμ)T is āˆ‡iAj āˆ’ āˆ‡jAi = Fij.]

If you google, you’ll see there’s more than one way to go about it, so I’d recommend you just go through the motions and double-check the whole thing yourself—and please do let me know if you find any mistake! In fact, the Wikipedia article on the electromagnetic tensor writes the indices of Fμν as superscripts rather than as subscripts: that’s the same tensor, but in its so-called contravariant (rather than covariant) form. I’ll refer you to that article as I don’t want to make things even more complicated here! As said, there are different conventions around, and so you need to double-check what is what really. šŸ™‚

Where are we heading with all of this? The next thing is to look at theĀ LorentzĀ transformation of theseĀ FijĀ =Ā āˆ‡iAjĀ āˆ’Ā āˆ‡jAiĀ components, because then we know how our E and B fields transform. Before we do so, however, we should note the more general results and definitions which we obtained here:

1. The Fμν matrix (a matrix is just a multi-dimensional array, of course) is a so-called tensor. It’s a tensor of the second rank, because it has two indices in it. We think of it as a very special ‘product’ of two vectors, not unlike the vector cross product aĀ Ć— b, whose components were also defined by a similar combination of the components of a and b. Indeed, we wrote:

(a Ɨ b)x = ayĀ·bz āˆ’ azĀ·by
(a Ɨ b)y = azĀ·bx āˆ’ axĀ·bz
(a Ɨ b)z = axĀ·by āˆ’ ayĀ·bx

So one should think of a tensor as “another kind of cross product” or, preferably, and as Feynman puts it, as a “generalization of the cross product”.

2. In this case, the four-vectors are āˆ‡Ī¼ = (āˆ‚/āˆ‚t, āˆ’āˆ‚/āˆ‚x, āˆ’āˆ‚/āˆ‚y, āˆ’āˆ‚/āˆ‚z) and Aμ = (Φ, Ax, Ay, Az). Now, you will probably say that āˆ‡Ī¼ is an operator, not a vector, and you are right. However, we know that āˆ‡Ī¼ behaves like a vector, and so this is just a special case. The point is: because the tensor is based on four-vectors, the Fμν tensor is referred to as a tensor of the second rank in four dimensions. In addition, because of the Fij = āˆ’Fji result, Fμν is an antisymmetric tensor of the second rank in four dimensions.

3.Ā Now, the whole point is to examine how tensors transform. We know that the vector dot product, aka the inner product, remains invariantĀ under a Lorentz transformation, both in three as well as in four dimensions, but what about the vector cross product, and what about the tensor? That’s what we’ll be looking at now.

The Lorentz transformation of the electric and magnetic fields

Cross products are complicated, and tensors will be complicated too. Let’s recall our example in three dimensions, i.e. the angular momentum vectorĀ L, which was a cross product of the radius vector r and the momentum vector p = mv, as illustrated below (the animation also gives the torque Ļ„, which is, loosely speaking, a measure of the turning force).

[Animation: a rotating mass, showing the radius vector r, the momentum p, the angular momentum L = r Ɨ p, and the torque Ļ„]

The components of L are:

Lyz = yĀ·pz āˆ’ zĀ·py = Lx
Lzx = zĀ·px āˆ’ xĀ·pz = Ly
Lxy = xĀ·py āˆ’ yĀ·px = Lz

Now, this particular definition ensures that LijĀ turns out to be an antisymmetric object:

Lxy = āˆ’Lyx, Lyz = āˆ’Lzy, Lzx = āˆ’Lxz, and Lxx = Lyy = Lzz = 0

So it’s a similar situation here. We have nineĀ possible combinations, but only threeĀ independent numbers. So it’s a bit like our tensor in four dimensions: 16 combinations, but only 6 independent numbers.

Now, it so happens that these three numbers, or objects if you want, transform in exactly the same way as the components of a vector. However, as Feynman points out, that’s a matter of ā€˜luck’ really. In fact, Feynman points out that, when we have two vectors a = (ax, ay, az) and b = (bx, by, bz), the nine products Tij = aiĀ·bj will also form a tensor of the second rank (cf. the two indices) but will, in general, not obey the transformation rules we got for the angular momentum tensor, which happened to be an antisymmetric tensor of the second rank in three dimensions.

To make a long story short, it’s not simple in general, and surely not here: with E and B, we’ve gotĀ sixĀ independent terms, and so we cannotĀ represent six things by four things, so the transformation rules for E and B will differ from those for a four-vector. So what areĀ they then?

Well… Feynman first works out the rules for the general antisymmetric vector combination GijĀ = aibjĀ āˆ’ ajbi, withĀ aiĀ and bjĀ the t-, x-, y- or z-component of the four-vectors aμ = (at, ax, ay, az) and bμ = (bt, bx, by, bz) respectively. The idea is to first get some general rules, and then replace GijĀ = aibjĀ āˆ’ ajbiĀ by FijĀ =Ā āˆ‡iAjĀ āˆ’Ā āˆ‡jAi, of course! So let’s apply the Lorentz rules, which – let me remind you – are the following ones:

t’ = (t āˆ’ vĀ·x)/√(1 āˆ’ v2)
x’ = (x āˆ’ vĀ·t)/√(1 āˆ’ v2)
y’ = y
z’ = z

So we get:

a’t = (at āˆ’ vĀ·ax)/√(1 āˆ’ v2) and b’t = (bt āˆ’ vĀ·bx)/√(1 āˆ’ v2)
a’x = (ax āˆ’ vĀ·at)/√(1 āˆ’ v2) and b’x = (bx āˆ’ vĀ·bt)/√(1 āˆ’ v2)
a’y = ay and b’y = by
a’z = az and b’z = bz

The rest is all very tedious: you just need to plug these things into the variousĀ GijĀ = aibjĀ āˆ’ ajbiĀ formulas. For example, for G’tx, we get:

G’tx = a’tĀ·b’x āˆ’ a’xĀ·b’t
= [(at āˆ’ vĀ·ax)Ā·(bx āˆ’ vĀ·bt) āˆ’ (ax āˆ’ vĀ·at)Ā·(bt āˆ’ vĀ·bx)]/(1 āˆ’ v2)
= [(1 āˆ’ v2)Ā·(atĀ·bx āˆ’ axĀ·bt)]/(1 āˆ’ v2)
= atĀ·bx āˆ’ axĀ·bt

Hey! That’s just Gtx, so we find that G’tx = Gtx! What about the rest? Well… That yields something different. Let me shorten the story by simply copying Feynman here:

G’tx = Gtx
G’ty = (Gty āˆ’ vĀ·Gxy)/√(1 āˆ’ v2)
G’tz = (Gtz āˆ’ vĀ·Gxz)/√(1 āˆ’ v2)
G’xy = (Gxy āˆ’ vĀ·Gty)/√(1 āˆ’ v2)
G’xz = (Gxz āˆ’ vĀ·Gtz)/√(1 āˆ’ v2)
G’yz = Gyz
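Again, you don’t have to take Feynman’s table on faith: the little sketch below (my own, with random made-up numbers and c = 1) builds Gij = aiĀ·bj āˆ’ ajĀ·bi from two arbitrary four-vectors, boosts the vectors, and checks a few of the rules.

```python
import math
import random

def boost(fv, v):
    """Lorentz boost along the x-direction, with c = 1."""
    t, x, y, z = fv
    g = 1.0 / math.sqrt(1.0 - v**2)
    return (g * (t - v * x), g * (x - v * t), y, z)

def G(a, b):
    """All antisymmetric combinations G_ij = a_i b_j - a_j b_i, keyed by index names."""
    idx = ('t', 'x', 'y', 'z')
    return {(idx[i], idx[j]): a[i] * b[j] - a[j] * b[i]
            for i in range(4) for j in range(4)}

random.seed(1)
a = tuple(random.uniform(-1, 1) for _ in range(4))
b = tuple(random.uniform(-1, 1) for _ in range(4))
v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2)

g_old = G(a, b)
g_new = G(boost(a, v), boost(b, v))

print(abs(g_new[('t', 'x')] - g_old[('t', 'x')]) < 1e-12)                                 # G'_tx = G_tx
print(abs(g_new[('t', 'y')] - gamma * (g_old[('t', 'y')] - v * g_old[('x', 'y')])) < 1e-12)  # G'_ty rule
print(abs(g_new[('y', 'z')] - g_old[('y', 'z')]) < 1e-12)                                 # G'_yz = G_yz
```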

So… Done!

So what?

Well… Now we just substitute: the Fij components transform exactly like the Gij components. For a boost with velocity v along the x-direction, and with units such that c = 1, we get:

E’x = Ex and B’x = Bx
E’y = (Ey āˆ’ vĀ·Bz)/√(1 āˆ’ v2) and B’y = (By + vĀ·Ez)/√(1 āˆ’ v2)
E’z = (Ez + vĀ·By)/√(1 āˆ’ v2) and B’z = (Bz āˆ’ vĀ·Ey)/√(1 āˆ’ v2)

In addition, there is a third equivalent formulation which is more practical, and also simpler, even if it puts theĀ c‘s back in. It re-defines the field components, distinguishing only two:

  1. The ‘parallel’ components E||Ā and B||Ā along theĀ x-direction ( because they are parallel to the relative velocity of the S and S’ reference frames), and
  2. The ‘perpendicular’ or ‘total transverse’ componentsĀ E⊄ and B⊄, which are the vector sums of the y- and z-components.

So that gives us four equations only:

E’|| = E|| and B’|| = B||
E’⊄ = (E + vƗB)⊄/√(1 āˆ’ v2/c2)
B’⊄ = (B āˆ’ vƗE/c2)⊄/√(1 āˆ’ v2/c2)

And, yes, we areĀ done now. This is the Lorentz transformation of the fields. I am sure it has left you totally exhausted. Well… If not… […] It sure left me totally exhausted. šŸ™‚
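Here’s one more numerical cross-check (my own sketch, with made-up field values and c = 1): it applies the component rules for a boost along x and verifies that they agree with the parallel/perpendicular formulation, i.e. that the components along x are unchanged and that the transverse part of E′ equals (E + vƗB)⊄ times the Lorentz factor.

```python
import math

def transform_fields(E, B, v):
    """Boost along the x-direction with c = 1: returns (E', B') from the component rules."""
    g = 1.0 / math.sqrt(1.0 - v**2)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    E_new = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    B_new = (Bx, g * (By + v * Ez), g * (Bz - v * Ey))
    return E_new, B_new

def cross(u, w):
    """Ordinary three-dimensional vector cross product."""
    return (u[1] * w[2] - u[2] * w[1],
            u[2] * w[0] - u[0] * w[2],
            u[0] * w[1] - u[1] * w[0])

E, B, v = (0.2, -1.0, 0.4), (0.5, 0.1, -0.3), 0.6
Ep, Bp = transform_fields(E, B, v)
g = 1.0 / math.sqrt(1.0 - v**2)

# the parallel (x) components are unchanged...
print(Ep[0] == E[0] and Bp[0] == B[0])   # True
# ...and the transverse part of E' is gamma times (E + v x B) transverse
vxB = cross((v, 0.0, 0.0), B)
print(abs(Ep[1] - g * (E[1] + vxB[1])) < 1e-12 and
      abs(Ep[2] - g * (E[2] + vxB[2])) < 1e-12)   # True
```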

To lighten things up, let me insert an image of what the transformed field E actually looks like. The first image is the reference frame of the charge itself: we have a simple Coulomb field. The second image shows the charge flying by. Its electric field is ā€˜squashed up’. To be precise, it’s just as if the scale of x were squashed up by a factor (1āˆ’v2/c2)1/2. Let me refer you to Feynman for the detail of the calculations here.

[Illustration: the spherically symmetric Coulomb field of a charge at rest, next to the ā€˜squashed’ field of the same charge moving at constant velocity]

OK. So that’s it. You may wonder: what about that promise I made? Indeed, when I started this post, I said I’dĀ present aĀ mathematical construct that presents the electromagnetic force as oneĀ force only, as one physical reality, but so we’re back writing all of it in terms of twoĀ vectors—the electric field vector E and the magnetic field vector B. Well… What can I say? IĀ didĀ present the mathematical construct: it’s the electromagnetic tensor. So it’s that antisymmetric matrix really, which one can combine with aĀ transformation matrixĀ embodying the Lorentz transformation rules. So, IĀ didĀ what I promised to do. But you’re right: IĀ amĀ re-presenting stuff in the old style once again.

The second objection that you may have—in fact, that youĀ shouldĀ have, is that all of this has been rather tedious. And you’re right.Ā The whole thing just re-emphasizes the value of using the four-potential vector. It’s obviouslyĀ muchĀ easier to takeĀ thatĀ vector from one reference frame to another – so we just apply the Lorentz transformation rules to Aμ = (Φ, A) and get Aμ‘ = (Φ’, A’) from it – and then calculate E’ and B’ from it, rather than trying to remember those equations above.Ā However, that’s not the point, or…

Well… It is and it isn’t. We wanted to get away from those two vectors E and B, and show that electromagnetism is really one phenomenon only, and so that’s where the concept of the electromagnetic tensor came in. There were two objectives here: the first objective was to introduce you to the concept of tensors, which we’ll need in the future. The second objective was to show you that, while Lorentz’s force law – F = q(E + vƗB) – makes it clear we’re talking one force only, there is a way of writing it all up that is much more elegant.

I’ve introduced the concept of tensors here, so the first objective should have been achieved. As for the second objective, I’ll discuss that in my next post, in which I’ll introduce the four-velocity vector uμ as well as the four-force vector fμ. It will explain the following beautiful equation of motion:

fμ = qĀ·uνFμν

NowĀ thatĀ looks very elegant and unified, doesn’t it? šŸ™‚

[…] Hmm… No reaction. I know… You’re tired now, and you’re thinking: yet another way of representing the same thing? Well… Yes! So…

OK… Enough for today. Let’s follow up tomorrow.

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/
Some content on this page was disabled on June 20, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Electric circuits (2): Kirchhoff’s rules, and the energy in a circuit

My previous post was long and tedious, and all it did was present the three (passive) circuit elements as well as the concept of impedance. It showed that the inner workings of these little devices are actually quite complicated. Fortunately, the conclusions were very neat and short: for all circuit elements, we have a very simple relationship between (a) the voltage across the terminals of the element (V) and (b) the current that’s going through the circuit element (I). We found they are always in some ratio, which is referred to as the impedance, which we denoted by Z:

Z =Ā V/I ⇔ V = Iāˆ—Z

So it’s a ‘simple’ ratio, indeed. But… Well… Simple and not simple. It’s a ratio of two complex numbers and, therefore, it’s aĀ complexĀ number itself. That’s why I use the āˆ— symbol when re-writing the Z =Ā V/IĀ formula as V = Iāˆ—Z, so it’s clear we’re talking a product of two complex numbers). This ‘complexity’ is best understood by thinking of the voltage and the current asĀ phase vectorsĀ (orĀ phasorsĀ as engineers call them). Indeed, instead of using the sinusoidal functions we are used to, so that’s

  • V = V0Ā·cos(ωt + ĪøV),
  • I = I0Ā·cos(ωt + ĪøI), and
  • Z = V/I, with magnitude Z0 = V0/I0 and phase Īø = ĪøV āˆ’ ĪøI,

we preferred theĀ complex orĀ vectorĀ notation, writing:

  • VĀ =Ā |V|ei(ωt +Ā ĪøV) = V0Ā·ei(ωt +Ā ĪøV)
  • IĀ =Ā |I|ei(ωt +Ā ĪøI)Ā = I0Ā·ei(ωt +Ā ĪøI)
  • Z = |Z|Ā·eiĪø = Z0Ā·eiĪø = (V0/I0)Ā·ei(ĪøV āˆ’ ĪøI) [note that the common ei(ωt) factor cancels in the V/I ratio, so Z does not depend on time]

For the three circuit elements, we found the followingĀ solutionĀ for ZĀ in terms of the previously definedĀ propertiesĀ of the respective circuit elements, i.e. their resistanceĀ (R),Ā capacitanceĀ (C), and inductanceĀ (L) respectively:

  1. For a resistor, we have Z(resistor) = ZR = R
  2. For a capacitor, we have Z(capacitor) = ZC = 1/(iωC) = –i/(ωC)
  3. For an inductor, we have Z(inductor) = ZL = iωL
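Because the impedances are just complex numbers, it’s easy to play with them. Here’s a small sketch of my own (the component values are hypothetical): it computes the three impedances and confirms the 90-degree lead/lag story through their phases.

```python
import cmath
import math

omega = 2 * math.pi * 50.0        # a 50 Hz signal, just as an example
R, C, L = 10.0, 100e-6, 0.2       # hypothetical resistance, capacitance, inductance

Z_R = complex(R, 0)               # resistor:  Z = R
Z_C = 1 / (1j * omega * C)        # capacitor: Z = 1/(i*omega*C) = -i/(omega*C)
Z_L = 1j * omega * L              # inductor:  Z = i*omega*L

# the phases confirm the lead/lag story:
print(round(math.degrees(cmath.phase(Z_L))))   # 90  (voltage leads the current)
print(round(math.degrees(cmath.phase(Z_C))))   # -90 (current leads the voltage)
```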

We also explained what these formulas meant, using graphs like the ones below:

  1. The graph on the left-hand side gives you the ratio of theĀ peakĀ voltage andĀ peakĀ current for the three devices as a function of C, L, R and ω respectively.
  2. The graph on the right-hand side shows you the relationship between the phase of the voltage and the current for a capacitor and for an inductor. [For a resistor, the phases are the same, so no need for a graph. Also note that the phase of the current lags the voltage phase by 90 degrees for an inductor, while it lags by 270 degrees for a capacitor (which amounts to the current leading the voltage by 90°).]

[Graphs: the peak-voltage/peak-current ratios as a function of C, L, R and ω, and the voltage–current phase relationships for a capacitor and an inductor]

The inner workings of our circuit elements are all wonderful and mysterious, and so we spent a lot of time writing about them. That’s finished now. The summary above describes all of them in very simple terms, relating the voltage and current phasors through the concept of impedance, which is just a ratio—albeit a complex ratio.

As the graphs above suggest, we can build all kinds of crazy circuits now, and the idea of resonance as we’ve learned it when studying the behavior of waves will be particularly relevant when discussing circuits that are designed to filter certain frequencies or, the opposite, to amplify some. We won’t go that far in this post, however, as I just want to explain the basic rules one needs to know when looking at a circuit, i.e. Kirchhoff’s circuit laws. There are two of them:

1. Kirchhoff’s Voltage Law (KVL): The sum of the voltage drops around any closed path is zero.

The principle is illustrated below. It doesn’t matter whether or not we have other circuits feeding into this one: Kirchhoff’s Voltage Law (KVL) remains valid.

[Illustration: the voltage drops around a closed loop in a circuit]

We can write this law using the concept of circulation once again or, what you’ll probably like more, just using plain summation:

Ī£ (around any closed loop) Vn = 0

2. Kirchhoff’s Current Law (KCL): The sum of the currents into any node is zero.

This law is written as follows:

Ī£ (into any node) In = 0

[Illustration: several branches meeting in a node formed by the connected terminals a, b, c and d, with the currents into the node summing to zero]

This law requires some definition of a node, of course. Feynman defines a node as any set of terminals, such as a, b, c and d in the illustration above, which are connected. So it’s a set of connected terminals.

Now, I’ll refer you to Feynman for some practical examples. The circuit below is one of them. It looks complicated but it all boils down to solving a set of linear equations. So… Well… That’s it, really. We’re done! We should do the exercises, of course, but we’re too lazy for that, I guess. šŸ™‚ So we’re done!

[Illustration: a multi-loop circuit with several impedances and generators, which reduces to a set of linear equations]
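To see how little work that ā€˜set of linear equations’ actually is, here’s a toy example of my own (not Feynman’s circuit, and the values are made up): an emf Ɛ drives an impedance Z1 in series with two parallel impedances Z2 and Z3, and KCL at the middle node gives the node voltage directly.

```python
# hypothetical circuit values (phasors and impedances as complex numbers)
E  = 10.0 + 0j    # the emf
Z1 = 10.0 + 0j    # a resistor
Z2 = 5j           # an inductive impedance (i*omega*L)
Z3 = -20j         # a capacitive impedance (-i/(omega*C))

# KCL at the middle node a: (E - Va)/Z1 - Va/Z2 - Va/Z3 = 0, solved for Va:
Va = E / (1 + Z1/Z2 + Z1/Z3)

# the three currents into node a must sum to zero
currents = [(E - Va)/Z1, -Va/Z2, -Va/Z3]
print(abs(sum(currents)) < 1e-12)   # True
```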

Well… Almost. I also need to mention how one can reduceĀ complicated circuits by combining parallel impedances, using the following formula:

1/Z = 1/Z1 + 1/Z2 + … or, for two parallel impedances, Z = Z1āˆ—Z2/(Z1 + Z2)
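In code, that parallel-combination rule is a one-liner. A sketch of my own, with made-up values:

```python
def parallel(*zs):
    """Combine impedances in parallel: 1/Z = 1/Z1 + 1/Z2 + ..."""
    return 1 / sum(1 / z for z in zs)

# two equal resistors in parallel halve the resistance:
print(abs(parallel(10 + 0j, 10 + 0j) - 5) < 1e-9)                   # True
# and the formula handles mixed R, L, C impedances just as well:
print(abs(parallel(10 + 0j, 5j) - (10 * 5j) / (10 + 5j)) < 1e-9)    # True
```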

And then another powerful idea is the idea ofĀ equivalentĀ circuits. The rules for this are as follows:

  1. Any two-terminal network of passive elements is equivalent to (and, hence, can be replaced by) anĀ effectiveĀ impedance (Zeff).
  2. Any two-terminal network of passive elements is equivalent to (and, hence, can be replaced by) aĀ generator in series with an impedance.

These two principles are illustrated below: (a) is equivalent to (b) in each diagram.

[Diagrams: (a) a two-terminal network of passive elements ≔ (b) an effective impedance Zeff, and (a) a two-terminal network with generators ≔ (b) a single generator Ɛeff in series with an impedance Zeff]

The related formulas are:

  1. I = Ɛ/Zeff
  2. VnĀ = ƐeffĀ āˆ’ Ināˆ—Zeff

Last but not least, I need to say something about the energyĀ in circuits. As we noted in our previous post, the impedance will consist of a real and an imaginary part. We write:

Z = R + iĀ·X

This gives rise to the following powerful equivalence:Ā any impedance is equivalent to a series combination of a pure resistance and a pure reactance, as illustrated below (the ≔ sign stands for equivalence):

[Diagram: an impedance Z ≔ a pure resistance R in series with a pure reactance X]

Of course, because this post risks becoming too short šŸ™‚ I need to insert some more formulas now. IfĀ Z = R + iĀ·X is the impedance of the whole circuit, then the whole circuit can be summarized in the following equation:

Ɛ = Iāˆ—ZĀ = Iāˆ—(R + iĀ·X)

Now, if we bring the analysis back to the realĀ parts of this equation, then we may write our current as I = I0Ā·cos(ωt). This implies we chose a t = 0 point soĀ ĪøIĀ = 0. [Note that this is somewhat different than what we usually do: we usually chose our t = 0 point such thatĀ ĪøVĀ = 0, but it doesn’t matter.] The real emf is then going to be the real part of Ɛ = Iāˆ—ZĀ = Iāˆ—(R + iĀ·X), so we’ll write it as ʐ (no bold-face), and it’s going to be the real part of that expression above, which we can also write as:

Ɛ = Iāˆ—ZĀ = I0Ā·ei(ωt)Ā āˆ—(R + iĀ·X)

So ʐ is the real part of this Ɛ and, you should check, it’s going to be equal to:

ʐ =Ā I0Ā·RĀ·cos(ωt) āˆ’ I0Ā·XĀ·sin(ωt)

The two terms in this equation represent the voltage drops across the resistance R and the reactance X in that illustration above. […] Now that I think of it, in line with the -or and -ance convention for circuit elements and their properties, should we, perhaps, say resistor and reactor in this case? šŸ™‚ […] OK. That’s a bad joke. [I don’t seem to have good ones, do I?] šŸ™‚

Jokes aside, we see thatĀ the voltage drop across the resistance is in phase with the current (because it’s a simple cosine function of ωt as well), while the voltage drop across the purely reactive part is out of phase with the current (as you know, the sine and cosine are the same function, but with a phase difference of π/2 indeed).
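You can verify this identity numerically. The sketch below – with arbitrary test values for I0, R, X and ω, not numbers from the text – checks that the real part of I0·eiωt·(R + i·X) matches I0·R·cos(ωt) − I0·X·sin(ωt):

```python
import cmath, math

# Check that Re[ I0*e^{i*omega*t} * (R + iX) ] equals
# I0*R*cos(omega*t) - I0*X*sin(omega*t). All numbers are arbitrary.
I0, R, X, omega = 2.0, 3.0, 4.0, 5.0
max_err = max(
    abs((I0 * cmath.exp(1j * omega * t) * (R + 1j * X)).real
        - (I0 * R * math.cos(omega * t) - I0 * X * math.sin(omega * t)))
    for t in (0.0, 0.1, 0.7, 2.3)
)
print(max_err)   # effectively zero
```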

You’ll wonder where we are going with this, so let me wrap it all up. You know the power is the emf times the current, so let’s integrate this thing over one cycle to get the average rate (i.e. the time rate of change) of the energy that gets lost in the circuit. So we need to solve the following integral:

integral 2

This may look like a monster, but if you look back at your notes from your math classes, you should be able to figure it out:

  1. The first integral is (1/2)I02·R.
  2. The second integral is zero.

So what? Well… Look at it! It means that the (average) energy loss in a circuit with impedance Z = R + i·X depends only on the real part of Z: it is equal to I02·R/2. That’s, of course, how we want it to be: ideal inductances and capacitors store energy when being powered, and give back whatever they stored when the current reverses direction.

So it’s a nice result, because it’s consistent with everything. Hmm… Let’s double-check though… Is it also consistent with the power equation for a resistor which, remember, is written as P = V·I = I·R·I = I2·R? […] Well… What about the 1/2 factor?

Well… Think about it. I isĀ a sine or a cosine here, and so we want the average value of its square, so that’s 〈cos2(ωt)〉 = 1/2.
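If you don’t trust the integral, you can check it numerically. The Python sketch below – with arbitrary test values – averages the product of the emf and the current over one full cycle and compares it with I02·R/2:

```python
import math

# Average emf(t)*I(t) over one full cycle numerically, to check that only
# the resistive term survives: <P> = I0^2 * R / 2. Arbitrary test values.
I0, R, X, omega = 2.0, 3.0, 4.0, 5.0
T = 2 * math.pi / omega
N = 10_000
dt = T / N

def emf(t):
    return I0 * R * math.cos(omega * t) - I0 * X * math.sin(omega * t)

def current(t):
    return I0 * math.cos(omega * t)

avg = sum(emf(n * dt) * current(n * dt) for n in range(N)) * dt / T
print(avg, I0**2 * R / 2)   # both approximately 6.0
```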

Done!Ā šŸ™‚

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/
Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/
Some content on this page was disabled on June 20, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Electric circuits (1): the circuit elements

OK. No escape. It’s part of physics. I am not going to go into the nitty-gritty of it all (because this is a blog about physics, not about engineering) but it’s good to review the basics, which are, essentially, Kirchhoff’s rules. Just for the record, Gustav Kirchhoff was a German genius who formulated these circuit laws while he was still a student, when he was about 20 years old. He did it as a seminar exercise 170 years ago, and then turned it into his doctoral dissertation. Makes me think of that Dire Straits song—That’s the way you do it—Them guys ain’t dumb. šŸ™‚

So this post is, in essence, just an ā€˜explanation’ of Feynman’s presentation of Kirchhoff’s rules, and I am writing it basically for myself, so as to ensure I am not missing anything. To be frank, Feynman’s use of notation when working with complex numbers is confusing at times and so, yes, I’ll do some ā€˜re-writing’ here. The nice thing about Feynman’s presentation of electrical circuits is that he sticks to Maxwell’s Laws when describing all ideal circuit elements: he keeps using line integrals of the electric field E around closed paths (that’s what a circuit is, indeed) to describe the so-called passive circuit elements, and he also recapitulates the idea of the electromotive force when discussing the so-called active circuit element, i.e. the generator. That’s nice, because it links it all with what we’ve learned so far, i.e. the fundamentals as expressed in Maxwell’s set of equations. Having said that, I won’t make that link here, because I feel it makes the whole approach rather heavy.

OK. Let’s go for it. Let’s first recall the concept ofĀ impedance.

The impedance concept

There are three idealĀ (passive) circuit elements: the resistor, the capacitor and the inductor. RealĀ circuit elements usually combine characteristics of all of them, even if they are designed to work like ideal circuit elements. Collectively, these ideal (passive) circuit elements are referred to as impedances, because… Well… Because they have someĀ impedance. In fact, you should note that, if we reserve the terms ending with -ance for theĀ propertyĀ of the circuit elements, and those ending on -or for the objects themselves, then we should call them impedors. However, that term does not seem to have caught on.

You already know what impedance is. I explained it before, notably in my post on the intricacies related to self- and mutual inductance. Impedance basically extends the concept of resistance, as we know it from direct current (DC) circuits, to alternating current (AC) circuits. To put it simply, when AC currents are involved – so when the flow of charge periodically reverses direction – then it’s likely that, because of the properties of the circuit, the current signal will lag the voltage signal, and some phase difference will tell us by how much. So resistance is just a simple real number R – it’s the ratio between (1) the voltage that is being applied across the resistor and (2) the current through it, so we write R = V/I – and it’s got a magnitude only, but impedance is a ā€˜number’ that has both a magnitude as well as a phase, so it’s a complex number, or a vector.

In engineering, such ā€˜numbers’ with a magnitude as well as a phase are referred to as phasors. A phasor represents voltages, currents and impedances as a phase vector (note the bold italics: they explain how we got the pha-sor term). It’s just a rotating vector, really. So a phasor has a varying magnitude (A) and phase (φ), which are determined by (1) some maximum magnitude A0, (2) some angular frequency ω and (3) some initial phase (θ). So we can write the amplitude A as:

A = A(φ) = A0Ā·cos(φ) =Ā A0Ā·cos(ωt + Īø)

As usual, Wikipedia has a nice animation for it:

Unfasor

In case you wonder why I am using aĀ cosineĀ rather than aĀ sineĀ function, the answer is that it doesn’t matter: the sine and the cosine are the same function except for a Ļ€/2Ā phase difference: just rotate the animation above by 90 degrees, or think about the formula: sinφ = cos(Ļ†āˆ’Ļ€/2). šŸ™‚

So A = A0Ā·cos(ωt + Īø) is the amplitude. It could be the voltage, or the current, or whatever realĀ variable.Ā The phase vectorĀ itself is represented by a complex number, i.e. a two-dimensional number, so to speak, which we can write as all of the following:

A = A0Ā·eiφ = A0Ā·cosφ + iĀ·A0Ā·sinφ = A0Ā·cos(ωt+Īø) + iĀ·A0Ā·sin(ωt+Īø)

=Ā A0Ā·ei(ωt+Īø)Ā = A0Ā·eiĪøĀ·eiωtĀ = A0Ā·eiωtĀ with A0Ā = A0Ā·eiĪø

That’s just Euler’s formula, and I am afraid I have to refer you to my page on theĀ essentialsĀ if you don’t get this. I know what you are thinking: why do we need the vector notation? Why can’t we just be happy with the A = A0Ā·cos(ωt+Īø) formula? The truthful answer is: it’s just to simplify calculations: it’s easier to work with exponentials than with cosines or sines. For example, writingĀ ei(ωt + Īø)Ā = eiĪøĀ·eiωtĀ is easier than writing cos(ωt + Īø) = … […] Well? […]Ā Hmm… šŸ™‚

See! You’re stuck already. You’d have to use the cos(α+β) = cosα·cosβ āˆ’ sinα·sinβ formula: you’d get the same results (just do it for the simple calculation of the impedance below) but it takes a lot more time, and it’s easier to make mistakes. Having said why complex-number notation is great, I also need to warn you. There are a few things you have to watch out for. One of these things is notation. The other is the kind of mathematical operations we can do: it’s usually alright, but we need to watch out with the i2 = –1 thing when multiplying complex numbers. However, I won’t talk about that here because it would only confuse you even more. šŸ™‚
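If you want to see the point made concrete, the little Python check below – with arbitrary values for ω and θ – confirms that the exponential factorization and the cosine addition formula give the same numbers:

```python
import cmath, math

# Why the exponential notation is convenient: e^{i(omega*t + theta)} equals
# e^{i*theta} * e^{i*omega*t}, and its real part reproduces
# cos(a+b) = cos(a)cos(b) - sin(a)sin(b). Arbitrary test values.
omega, theta = 2.0, 0.6
errs = []
for t in (0.0, 0.3, 1.1):
    lhs = cmath.exp(1j * (omega * t + theta))
    errs.append(abs(lhs - cmath.exp(1j * theta) * cmath.exp(1j * omega * t)))
    errs.append(abs(lhs.real - (math.cos(omega * t) * math.cos(theta)
                                - math.sin(omega * t) * math.sin(theta))))
max_err = max(errs)
print(max_err)   # effectively zero
```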

Just for the notation, let me note that Feynman would write A0 with the little hat or caret symbol (∧) on top of it, so as to indicate the complex coefficient is not a variable: he writes A0 as Ā0 = A0·eiθ. However, I find that confusing and, hence, I prefer using bold type for any complex number, variable or not. The disadvantage is that we need to remember that the coefficient in front of the exponential is not a variable: it’s a complex number alright, but not a variable. Indeed, do look at that A0 = A0·eiθ equality carefully: A0 is a specific complex number that captures the initial phase θ. So it’s not the magnitude of the phasor itself, i.e. |A| = A0. In fact, magnitude, amplitude, phase… We’re using a lot of confusing terminology here, and so that’s why you need to ā€˜get’ the math.

The impedance is not a variable either. It’s some constant. Having said that, this constant will depend on theĀ angular frequency ω. So… Well… Just think about this as you continue to read. šŸ™‚ So the impedance is someĀ number, just like resistance, but it’s a complexĀ number. We’llĀ denote it by ZĀ and, using Euler’s formula once again, we’ll write it as:

Z =Ā |Z|eiĪøĀ = V/I = |V|ei(ωt +Ā ĪøV)/|I|ei(ωt +Ā ĪøI)Ā = [|V|/|I|]Ā·ei(ĪøVĀ āˆ’Ā ĪøI)

So, as you can see, it is,Ā literally, some complexĀ ratio, just like R = V/I was some realĀ ratio: it is a complex ratio because itĀ has a magnitude and a direction, obviously. Also please do note that, as I mentioned already, the impedance is, in general, some function of the frequency ω, as evidenced by the ωt term in the exponential, but so we’re not looking at ω as a variable: V and I are variables and, as such, they depend on ω, but so you should look at ω as some parameter. I know I should, perhaps, notĀ be so explicit on what’s going on, but I want to make sure you understand.

So what’s going on? The illustration below (credit goes to Wikipedia, once again) explains. It’s a pretty generic view of a very simple AC circuit. So we don’t care what the impedance is: it might be an inductor or a capacitor, or a combination of both, but we don’t care: we just call it an impedance, or an impedorĀ if you want. šŸ™‚ The point is: if we apply an alternating current, then the current and the voltage will both go up and down, but the current signal will lag the voltage signal, and some phase factor Īø tells us by how much, so Īø will be the phase difference.

General_AC_circuit

Now, we’re dividing one complex number by another in that Z = V/I formula above, and dividing one complex number by another is not allĀ thatĀ straightforward, so let me re-write that formula for Z above as:

V = Iāˆ—ZĀ = Iāˆ—|Z|eiĪø

Now, while that V = Iāˆ—Z formulaĀ resembles the V = IĀ·R formula, you should note the bold-face type for V and I, and theĀ āˆ— symbol I am using here for multiplication. TheĀ bold-face for V and I implies they’re vectors, or complex numbers. As for theĀ āˆ— symbol, that’s to make it clear we’re not talking a vector cross product AƗB here, but a product of two complex numbers. [It’s obviously notĀ a vector dot product either, because a vector dot product yields a real number, not some other vector.]

Now we write V and IĀ as you’d expect us to write them:

  • VĀ =Ā |V|ei(ωt +Ā ĪøV) = V0Ā·ei(ωt +Ā ĪøV)
  • IĀ =Ā |I|ei(ωt +Ā ĪøI)Ā = I0Ā·ei(ωt +Ā ĪøI)

θV and θI are, obviously, the so-called initial phases of the voltage and the current respectively. These ā€˜initial’ phases are not independent: we’re really talking about a phase difference between the voltage and the current signal, and it’s determined by the properties of the circuit. In fact, that’s the whole point here: the impedance is a property of the circuit and determines how the current signal varies as a function of the voltage signal. In fact, we’ll often choose the t = 0 point such that θV = 0, and so then we need to find θI. […] OK. Let’s get on with it. Writing out all of the factors in the V = I∗Z = I∗|Z|eiθ equation yields:

VĀ =Ā |V|ei(ωt +Ā ĪøV)Ā =Ā Iāˆ—ZĀ = |I|ei(ωt +Ā ĪøI)āˆ—|Z|eiĪøĀ = |I||Z|ei(ωt +Ā ĪøIĀ +Ā Īø)Ā 

Now, this equation must hold for all values of t, so we can equate the magnitudes and phases and, hence, the following equalities must hold:

  1. |V| =Ā |I||Z| ⇔ |Z| =Ā |V|/|I|
  2. ωt + ĪøVĀ =  ωt + ĪøIĀ + θ ⇔ θ = ĪøVĀ āˆ’ ĪøI

Done!Ā 
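A quick numerical sanity check, with made-up amplitudes and phases: dividing the complex V by the complex I indeed gives |Z| = |V|/|I| and θ = θV − θI, with the common eiωt factor cancelling out:

```python
import cmath

# Given phasors V and I with arbitrary amplitudes and initial phases,
# Z = V/I has magnitude |V|/|I| and phase theta_V - theta_I; the common
# e^{i*omega*t} factor cancels, so any t gives the same Z.
V0, theta_V = 10.0, 0.5
I0, theta_I = 2.0, -0.3
omega, t = 100.0, 0.0123

V = V0 * cmath.exp(1j * (omega * t + theta_V))
I = I0 * cmath.exp(1j * (omega * t + theta_I))
Z = V / I
print(abs(Z), cmath.phase(Z))   # approximately 5.0 and 0.8 (= theta_V - theta_I)
```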

Of course, you’ll complain once again about those complex numbers: voltage and current are something real, aren’t they? So what’s real about these complex numbers? Well… I can just say what I said already. You’re right: I’ve used the complex notation only to simplify the calculus, and so it’s only the real part of those complex-valued functions that counts.

OK. We’re done with impedance. We can now discuss the impedors, including resistors (for which we won’t haveĀ such lag or phase difference, but the concept of impedance applies nevertheless).

Before I start, however, you should think about what I’ve done above: IĀ explainedĀ the concept of impedance, but I didn’t do much with it. The real-life problem will usually be that you get the voltage as a function of time, and then you’ll have to calculate the impedance of a circuit and, then, the current as a function of time. So I just showed the fundamental relations but, in real life, you won’t know what Īø andĀ ĪøIĀ could possibly be. Well… Let me correct that statement: we’ll give you formulas forĀ Īø as we discuss the various circuit elements and their impedance below, and so then you can use these formulas to calculateĀ ĪøI. šŸ™‚

Resistors

Let’s start with what seems to be the easiest thing: a resistor. A real resistor is actually not easy to understand, because it requires us to understand the properties ofĀ realĀ materials. Indeed, it may or may not surprise you, but the linear relation between the voltage and the current for realĀ materials is only approximate.Ā Also, the way resistors dissipateĀ energy is not easy to understand. Indeed, unlike inductors and capacitors, i.e. the other twoĀ passiveĀ components of an electrical circuit, a resistor does not storeĀ butĀ dissipatesĀ energy, as shown below.

Electric_load_animation_2

It’s a nice animation (credit for it has to go to Wikipedia once more), as it shows how energy is being used in an electric circuit. Note that the little moving pluses are in line with the convention that a current is defined as the movement of positive charges, so we write I = dQ/dt instead of I = āˆ’dQ/dt. That also explains the direction of the field line E, which has been added to show that the charges move with the field that is being generated by the power source (which is not shown here). So, on one side of the circuit, some generator or voltage source creates an emf pushing the charges, and the animation shows how some load – the resistor, in this case – consumes their energy, so they lose their push (as shown by the change in color from yellow to black). So power, i.e. energy per unit time, is supplied, and is then consumed.

To increase the current in the circuit above, you need to increase the voltage, but increasing both amounts to increasing the power that’s being consumed in the circuit. Electric power is voltage times current, so P = V·I (or v·i, if I use the small letters that are used in the two animations below). Now, Ohm’s Law (I = V/R) says that, if we’d want to double the current, we’d need to double the voltage, and so we’re quadrupling the power then: P2 = V2·I2 = (2·V1)·(2·I1) = 4·V1·I1 = 22·P1. So we have a square law for the power, which we get by substituting V for R·I or by substituting I for V/R, so we can write the power P as P = V2/R = I2·R. This square law says exactly the same: if you double the voltage, the current doubles as well and, hence, you quadruple the power.
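A trivial numerical illustration of that square law, with made-up values:

```python
# Ohm's law plus P = V*I: doubling the voltage doubles the current and
# therefore quadruples the power. Arbitrary test values.
R = 50.0
V1 = 10.0
I1 = V1 / R          # Ohm's law: I = V/R
P1 = V1 * I1         # = V1^2 / R
V2 = 2 * V1
I2 = V2 / R          # the current doubles too
P2 = V2 * I2         # = 4 * P1
print(P1, P2, P2 / P1)   # 2.0 8.0 4.0
```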

But back to the impedance: Ohm’s Law is theĀ Z = V/I law for resistors, but we can simplify it because we know the voltage acrossĀ the resistorĀ and the current that’s going throughĀ areĀ in phase. Hence, ĪøVĀ andĀ ĪøIĀ are identical and, therefore, theĀ Īø = ĪøVĀ āˆ’ ĪøIĀ in Z =Ā |Z|eiĪøĀ is equal to zero and, hence, Z =Ā |Z|. Now,Ā |Z| =Ā |V|/|I| =Ā V0/I0. So the impedanceĀ ZĀ is just some real number R = V0/I0, which we can also write as:

R = V0/I0Ā = (V0Ā·ei(ωt + α))/(I0Ā·ei(ωt + α)) = V(t)/I(t), with α = ĪøVĀ = ĪøI

The equation above goes from R = V0/I0 to R = V(t)/I(t) = V/I. It’s not quite the same thing: the second equation says that, at any point in time, the voltage and the current will be proportional to each other, with R or its reciprocal as the proportionality constant. In any case, we have our formula for Z here:

Z = R = V/I =Ā V0/I0

So that’s simple. Before we move on to the next element, let me note that the resistance of a real resistor may depend on its temperature, so in real-life applications one will want to keep its temperature as stable as possible. That’s why real-life resistors have power ratings and recommended operating temperatures. The image below illustrates how so-called heat-sink resistors can be mounted on a heat sink with a simple spring clip so as to ensure the dissipated heat is transported away. These heat-sink resistors are rather small (10 by 15 mm only) but are rated for 35 watts – quite a lot for such a small thing – if correctly mounted.

spring-clips-mounting-technigques

As mentioned, the linear relation between the voltage and the current is onlyĀ approximate, and the observed relation is also there only for frequencies that are not ‘too high’ because, if the frequency becomes very high, the free electrons will start radiating energy away, as they produce electromagnetic radiation. So one always needs to look at the tolerances of real-lifeĀ resistors, which may be ± 5%, ± 10%, or whatever.Ā In any case… On to the next.

Capacitors (condensers)

We talked at length about capacitors (aka condensers) in our post explaining capacitanceĀ or, the more widely used term, capacity: theĀ capacity of a capacitor is the observedĀ proportionality between (1) the voltage (V) across and (2) the charge (Q) on the capacitor, so we wrote it as:

C = Q/V

Now, it’s easy to confuse the C here with the C for coulomb, which I’ll also use in a moment, and so… Well… Just don’t! šŸ™‚ The meaning of the symbol is usually obvious from the context.

As for the explanation of this relation, it’s quite simple: a capacitor consists of two separateĀ conductors in space, with positive charge on one, and an equal and opposite (i.e. negative) charge on the other. Now, the logic of the superposition of fields implies that, if we double the charges, we will also double the fields, and so the work one needs to do to carry a unit charge from one conductor to the other is also doubled! So that’s why the potential difference between the conductors is proportional to the charge.

The C = Q/VĀ formula actually measures theĀ ability of the capacitor to store electric charge and, therefore, to store energy, so that’s why the term capacityĀ is really quite appropriate.Ā I’ll let youĀ googleĀ a few illustrations like the one below, that shows how a capacitor is actually being charged in a circuit. Usually, some resistance will be there in the circuit, so as to limit the current when it’s connected to the voltage source and, therefore, as you can see, the R times C factor (RĀ·C) determines how fast or how slow the capacitor charges and/or discharges. Also note that the current is equal to the time rate of change of the charge: I = dQ/dt.

images

In the above-mentioned post, we also gave a few formulas for the capacity of specific types of condensers. For example, for a parallel-plate condenser, the formula was C = ε0A/d. We also mentioned its unit, which is coulomb/volt, obviously, but – in honor of Michael Faraday, who gave us Faraday’s Law, and many other interesting formulas – it’s referred to as the farad: 1 F = 1 C/V. The C here is coulomb, of course. Sorry we have to use C to denote two different things but, as I mentioned, the meaning of the symbol is usually clear from the context.

We also talked about how dielectrics actually work in that post, but we did notĀ talk about the impedance of a capacitor, so let’s do that now. The calculation is pretty straightforward. Its interpretation somewhat less so. But… Well… Let’s go for it.

It’s the current that’s charging the condenser (sorry I keep using both terms interchangeably), and we know that the current is the time rate of change of the charge (I = dQ/dt).Ā Now, you’ll remember that, in general, we’d write a phasor A as A = A0Ā·eiωtĀ with A0Ā = A0Ā·eiĪø, soĀ A0Ā is a complex coefficient incorporating the initial phase, which we wrote as ĪøVĀ andĀ ĪøIĀ for the voltage and for the current respectively. So we’ll represent the voltage and the current now using that notation, so we write: V = V0Ā·eiωtĀ and I = I0Ā·eiωt. So let’s now use that C = Q/V by re-writing it as Q = CĀ·V and, because C is some constant, we can write:

I = dQ/dt = d(CĀ·V)/dt = CĀ·dV/dt

Now, what’s dV/dt? Oh… You’ll say: V is the magnitude of V, so it’s equal to |V| = |V0·eiωt| = |V0|·|eiωt| = |V0| = |V0·eiθ| = |V0|·|eiθ| = V0. So… Well… What? V0 is some constant here! It’s the maximum amplitude of V, so… Well… Its time derivative is zero: dV0/dt = 0.

Yes. Indeed. We did something very wrong here!Ā You really need to watch out with this complex-number notation, and you need to think about what you’re doing. V is not the magnitude of V but its (varying) amplitude. So it’s theĀ realĀ voltage V that varies with time: it’s equal to V0Ā·cos(ωt + ĪøV), which is the realĀ part of our phasorĀ V. Huh?Ā Yes.Ā Just hang in for a while. I know it’s difficult and, frankly, Feynman doesn’t help us very much here.Ā Let’s take one step back and so – you will see why I am doing this in a moment – let’s calculate the time derivative of our phasor V, instead of the time derivative of our realĀ voltageĀ V. So we calculate dV/dt, which is equal to:

dV/dt = d(V0·eiωt)/dt = V0·d(eiωt)/dt = V0·(iω)·eiωt = iω·V0·eiωt = iω·V

Remarkable result, isn’t it? We take the time derivative of our phasor, and the result is the phasor itself multiplied with iω. Well… Yes. It’s a general property of exponentials, but still… Remarkable indeed! We’d get the same with I, but we don’t need that for the moment. What we doĀ need to do is go from ourĀ I =Ā CĀ·dV/dtĀ relation, which connects theĀ realĀ parts of I and V one to another, to theĀ I =Ā CĀ·dV/dtĀ relation, which relates theĀ (complex) phasors. So we write:

Ā I =Ā CĀ·dV/dt ⇔ I =Ā CĀ·dV/dt

Can we do that? Just like that? We just replace I and V by I and V? Yes, we can.Ā Why? Well… We know that I is the real part ofĀ I and so we can write I = Re(I)+ Im(I)Ā·i = I + Im(I)Ā·i, and then we can write the right-hand side of the equation as CĀ·dV/dt = Re(CĀ·dV/dt)+ Im(CĀ·dV/dt)Ā·i. Now, two complex numbers are equal if, and only if, their real and imaginary parts are the same, so… Well… Write it all out, if you want, using Euler’s formula, and you’ll see it all makes sense indeed.
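If you want to convince yourself numerically, the sketch below – with an arbitrary complex coefficient V0 and an arbitrary ω – compares a central finite difference of V(t) = V0·eiωt with iω·V(t):

```python
import cmath

# Central-difference check that d/dt [ V0 * e^{i*omega*t} ] = i*omega*V.
# The complex coefficient V0 = |V0|*e^{i*theta} and omega are arbitrary.
V0 = 3.0 * cmath.exp(1j * 0.4)
omega = 100.0
t, h = 0.02, 1e-6

def V(t):
    return V0 * cmath.exp(1j * omega * t)

numeric = (V(t + h) - V(t - h)) / (2 * h)
analytic = 1j * omega * V(t)
print(abs(numeric - analytic))   # tiny (finite-difference error only)
```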

So what do we get? TheĀ I =Ā CĀ·dV/dt gives us:

I =Ā CĀ·dV/dt = CĀ·(iω)Ā·V

That implies that I/V = CĀ·(iω) and, hence, we get – finally! – what we need to get:

Z = V/I = 1/(iωC)

This is a grand result and, while I am sorry I made you suffer for it, I think I did a good job here because, if you’d check Feynman on it, you’ll see he – or, more probably, his assistants – just skates over this without bothering too much about mathematical rigor. OK. All that’s left now is to interpret this ā€˜number’ Z = 1/(iωC). It is a purely imaginary number, and it’s a constant indeed, albeit a complex constant. It can be re-written as:

Z = 1/(iωC) = i-1/(ωC) = –i/(ωC) = (1/ωC)Ā·eāˆ’iĀ·Ļ€/2

[Sorry. I can’t be more explicit here. It’s just one of the wonders of complex numbers: i-1 = –i. Just check one of my posts on complex numbers for more detail.] Now, a –i factor corresponds to a rotation of minus 90 degrees, and so that gives you the true meaning of what’s usually said about a circuit with a capacitor: the voltage across the capacitor will lag the current with a phase difference equal to π/2, as shown below. Of course, as it’s the voltage driving the current, we should say it’s the current that is lagging with a phase difference of 3π/2, rather than stating it the other way around! Indeed, i-1 = –i = –1·i = i2·i = i3, so that amounts to three ā€˜turns’ of the phase in the counter-clockwise direction, which is the direction in which our ωt angle is ā€˜turning’.

800px-VI_phase

It is a remarkable result, though. The illustration above assumes the maximum amplitude of the voltage and the current are the same, soĀ |Z|Ā =Ā |V|/|I| = 1, but what if they are notĀ the same? What are the real bits then? I can hear you, indeed: “To hell with the bold-face letters: what’s V and I? What’s the realĀ thing?”

Well… V and I are the real bits ofĀ VĀ =Ā |V|ei(ωt+ĪøV) = V0Ā·ei(ωt+ĪøV)Ā and of IĀ =Ā |I|ei(ωt+ĪøI)Ā = I0Ā·ei(ωt+ĪøVāˆ’Īø)Ā =Ā I0Ā·ei(ωtāˆ’Īø)Ā = I0Ā·ei(ωt+Ļ€/2)Ā respectively so, assuming ĪøVĀ = 0 (as mentioned above, that’s just a matter of choosing a convenient t = 0 point), we get:

  • V = V0Ā·cos(ωt)
  • I = I0Ā·cos(ωt + Ļ€/2)

So the Ļ€/2 phase difference is there (you need to watch out with the signs, of course: Īø = āˆ’Ļ€/2, but so it’s the current that seems to leadĀ here)Ā but the V0/I0Ā ratio doesn’t have to be one, so the realĀ voltage and current could look like something below, where the maximumĀ amplitude of the current is only half of the maximumĀ amplitude of the voltage.

Capture

So let’s analyze this quickly: the V0/I0Ā ratio is equal to |Z| =Ā |V|/|I| = V0/I0Ā = 1/ωC = (1/ω)(1/C) (note that it’sĀ notĀ equal to V/I = V(t)/I(t), which is a ratio that doesn’t make sense because I(t) goes through zero as the current switches direction). So what? Well… It means the ratio is inversely proportional to both the frequency ω as well as the capacity C, as shown below. Think about this: if ω goes to zero, V0/I0Ā goes to āˆž, which means that, for a given voltage, the current must go to zero. That makes sense, because we’re talking DC current when ω → 0, and the capacitor charges itself and then that’s it: no more currents. Now, if C goes to zero, so we’re talking capacitors with hardly any capacity, we’ll also get tiny currents. Conversely, for large C, we’ll get huge currents, as the capacitor can take pretty much any charge you throw at it, so that makes for smallĀ V0/I0Ā ratios. The most interesting thing to consider is ω going to infinity, as the V0/I0Ā ratio is also quite small then. What happens? The capacitor doesn’t get the time to charge, and so it’s always in this state where it has large currents flowing in and out of it, as it can’t build the voltageĀ that would counter the electromotive force that’s being supplied by the voltage source.
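The sketch below – with an arbitrary 10 µF capacity – just confirms the two facts numerically: the magnitude of Z = 1/(iωC) falls off as 1/ω, and the phase is −π/2 at every frequency:

```python
import cmath, math

# Z = 1/(i*omega*C) = -i/(omega*C): the magnitude V0/I0 falls off as
# 1/omega (and as 1/C), and the phase is -pi/2 at every frequency.
# C is an arbitrary 10 microfarad.
C = 10e-6
mags = []
for omega in (10.0, 100.0, 1000.0):
    Z = 1 / (1j * omega * C)
    mags.append(abs(Z))
    print(omega, abs(Z), cmath.phase(Z))   # phase is about -1.5708 throughout
```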

graph 6

OK. Let’s discuss the last (passive) element.

Inductors

We’ve spoiled the party a bit with that illustration above, as it gives the phase difference for an inductor already:

Z = iωL = ωLĀ·eiĀ·Ļ€/2, with L the inductance of the coil

So, againĀ assuming thatĀ ĪøVĀ = 0, we can calculate I as:

IĀ =Ā |I|ei(ωt+ĪøI)Ā = I0Ā·ei(ωt+ĪøVāˆ’Īø)Ā =Ā I0Ā·ei(ωtāˆ’Īø)Ā = I0Ā·ei(ωtāˆ’Ļ€/2)Ā 

Of course, you’ll want to relate this, once again, to theĀ realĀ voltage and theĀ realĀ current, so let’s write the real parts of ourĀ phasors:

  • V = V0Ā·cos(ωt)
  • I = I0Ā·cos(ωt āˆ’ Ļ€/2)

Just to make sure you’re not falling asleep as you’re reading, I’ve made another graph of how things could look. So now it’s the current signal that’s lagging the voltage signal, with a phase difference equal to θ = π/2.

Capture

Also, to be fully complete, I should show you how the V0/I0Ā ratio now varies with L and ω. Indeed, here also we can write that |Z| =Ā |V|/|I| = V0/I0, but so here we find that V0/I0Ā =  ωL, so we have a simple linear proportionality here! For example, for a given voltage V0, we’ll have smallerĀ currents as ω increases, so that’s the opposite of what happens with our ideal capacitors. I’ll let you think about that… šŸ™‚

Capture

Now how do we get that Z = iωL formula? In my post on inductance, I explained what an inductor is: a coil of wire, basically. Its defining characteristic is that a changing current will cause a changing magnetic field in it and, hence, some change in the flux of the magnetic field. Now, Faraday’s Law tells us that that will cause some circulation of the electric field in the coil, which amounts to an induced potential difference which is referred to as the electromotive force (emf). Now, it turns out that the induced emf is proportional to the change in current. So we’ve got another constant of proportionality here, so it’s like how we defined resistance, or capacitance. So, in many ways, the inductance is just another proportionality coefficient. If we denote it by L – the symbol is said to honor the Russian physicist Heinrich Lenz, whom you know from Lenz’ Law – then we define it as:

L = āˆ’Ę/(dI/dt)

The dI/dt factor is, obviously, the time rate of change of the current, and the negative sign indicates that the emf opposes the change in current, so it will tend to cause an opposing current. However, the power of our voltage source will ensure the current does effectively change, so it will counter the ‘back emf’ that’s being generated by the inductor. To be precise, the voltage across the terminals of our inductor, which we denote by V, will be equal and opposite to Ɛ, so we write:

V = āˆ’Ę = LĀ·(dI/dt)

Now, this very much resembles the I = C·dV/dt relation we had for capacitors, and it’s completely analogous indeed: we just need to swap the I and V, and the C and L symbols. So we write:

I = C·dV/dt ⇔ V = L·dI/dt

Now, dI/dt is a time derivative just like dV/dt. We calculate it as:

dI/dt = d(I0·e^(iωt))/dt = I0·d(e^(iωt))/dt = I0·(iω)·e^(iωt) = iω·I0·e^(iωt) = iω·I

So we get what we want and have to get:

V = LĀ·dI/dt = iωLĀ·I

Now, Z = V/I, so Z =Ā iωL indeed!
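If you want to double-check that dI/dt = iω·I for the phasor I = I0·e^(iωt), you can compare a numerical derivative against the analytical result. A minimal sketch, with made-up values for ω, L and I0:

```python
import cmath

omega = 2 * cmath.pi * 50      # hypothetical 50 Hz angular frequency
L_coil = 0.5                   # hypothetical inductance (H)
I0 = 2.0                       # hypothetical current amplitude (A)

def I(t):
    """Phasor current I = I0·e^(iωt)."""
    return I0 * cmath.exp(1j * omega * t)

t, h = 0.003, 1e-9
dI_dt = (I(t + h) - I(t - h)) / (2 * h)  # central-difference derivative
V_numerical = L_coil * dI_dt             # V = L·dI/dt
V_phasor = 1j * omega * L_coil * I(t)    # V = iωL·I
print(abs(V_numerical - V_phasor))       # should be tiny
```

The two ways of computing V agree to many digits, which is just the Z = iωL result again.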

Summary of conclusions

Let’s summarize what we found:

  1. For a resistor, we have Z(resistor) = ZR = R = V/I = V0/I0
  2. For a capacitor, we have Z(capacitor) = ZC = 1/(iωC) = āˆ’i/(ωC)
  3. For an inductor, we have Z(inductor) = ZL = iωL

Note that the impedance of capacitors decreases as frequency increases, while for inductors, it’s the other way around. We explained that by making you think of the currents: for a given voltage, we’ll have large currents for high frequencies, and, hence, a smallĀ V0/I0Ā ratio. Can you think of what happens with an inductor? It’sĀ notĀ so easy, so I’ll refer you to the addendum below for some more explanation.
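To make the frequency dependence concrete, here’s a minimal sketch – with arbitrary values C = 1 µF and L = 1 mH of my own choosing – showing |ZC| = 1/(ωC) falling and |ZL| = ωL rising as the frequency increases:

```python
import math

C = 1e-6       # hypothetical 1 µF capacitor
L_coil = 1e-3  # hypothetical 1 mH inductor

for f in (100.0, 1000.0, 10000.0):   # Hz
    omega = 2 * math.pi * f
    Z_C = 1 / (omega * C)            # capacitor: impedance falls with ω
    Z_L = omega * L_coil             # inductor: impedance rises with ω
    print(f"{f:>8.0f} Hz  |Z_C| = {Z_C:10.2f} Ī©   |Z_L| = {Z_L:8.3f} Ī©")
```

At low frequencies the capacitor dominates the opposition to the current; at high frequencies it’s the inductor – the mirror image described above.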

Let me also note that, as you can see, the impedance of (ideal) inductors and capacitors is a pure imaginary number, so that’s a complex number which has no real part. In engineering, the imaginary part of the impedance is referred to as theĀ reactance, so engineers will say that ideal capacitors and inductors have aĀ purely imaginary reactive impedance.Ā 

However, in real life, the impedance will usually have both a real as well as an imaginary part, so it will be some kind of mix, so to speak. The real part is referred to as the ‘resistance’ R, and the ‘imaginary’ part is referred to as the ‘reactance’ X. The formula for both is given below:

Z = R + iX, with R = Re(Z) the resistance and X = Im(Z) the reactance

But here I have to end my post on circuit elements. It’s become quite long, so I’ll discuss Kirchhoff’s rules in my next post.

Addendum: Why is V = āˆ’Ę?

Inductors are not easy to understand—intuitively, that is. That’s why I spent so much time writing on them in my other post, to which I should be referring you here. But let me recapitulate the key points. The key idea is that we’re pumping energy into an inductor when applying a current. As you know, power is the time rate of change of energy, and it’s equal to voltage times current: P = dW/dt = V·I. The illustration below shows what happens when an alternating current is applied to the circuit with the inductor. So the assumption is that the current goes in one and then in the other direction, so I > 0, and then I < 0, etcetera. We’re also assuming some nice sinusoidal curve for the current (i.e. the blue curve), and so we get what we get for U (i.e. the red curve), which is the energy that’s stored in the inductor, really, as it tries to resist the changing current: the energy goes up and down between zero and some maximum that’s determined by the maximum current.

[Graph: a sinusoidal current I (blue) and the energy U stored in the inductor (red)]

So, yes, building up current requires energy from some external source, which is used to overcome the ‘back emf’ in the inductor, and that energy is stored in the inductor itself. [If you still wonder why it’s stored in the inductor, think about the other question: where else would it be stored?] How is it stored? Look at the graph and think: it’s stored as kinetic energy of the charges, obviously. That explains why the energy is zero when the current is zero, and why the energy maxes out when the current maxes out. So, yes, it all makes sense! šŸ™‚

Let me give another example. The graph below assumes the current builds up to some maximum. As it reaches its maximum, the stored energy will also max out. This example assumes direct current, so it’s a DC circuit: the current builds up, but then stabilizes at some maximum that we can find by applying Ohm’s Law to the resistance of the circuit: I = V/R. Resistance? But weren’t we talking about an ideal inductor? We are. If there’s no other resistance in the circuit, we’ll have a short-circuit, so the assumption is that we do have some resistance in the circuit and, therefore, we should also think of some energy loss to heat from the current in the resistance. If not, well… Your power source will obviously soon reach its limits. šŸ™‚

[Graph: current building up to its maximum in a DC circuit, with the stored energy U leveling off]
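The build-up just described follows the familiar exponential I(t) = (V/R)·(1 āˆ’ e^(āˆ’RĀ·t/L)). A minimal sketch, with made-up values for the source voltage, the resistance and the inductance:

```python
import math

V_source = 12.0   # hypothetical DC source (V)
R = 6.0           # hypothetical series resistance (Ī©)
L_coil = 0.3      # hypothetical inductance (H)

def current(t):
    """Current build-up in a series RL circuit driven by a DC source."""
    return (V_source / R) * (1 - math.exp(-R * t / L_coil))

tau = L_coil / R  # time constant: L/R = 0.05 s here
for n in range(6):
    print(n, current(n * tau))  # approaches the maximum I = V/R = 2 A
```

After a few time constants the exponential has died out and Ohm’s Law takes over, exactly as the graph suggests.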

So what’s going on then?Ā We have someĀ changingĀ current in the coil but, obviously, some kind of inertia also: the coil itself opposes the change in current through the ‘back emf’. Now, it requires energy, or power, to overcome the inertia, so that’s the power that comes from our voltage source: it will offset the ‘back emf’, so we may effectively think of a little circuit with an inductorĀ and a voltage source, as shown below.

[Illustration: a little circuit with an inductor and a voltage source]

But why do we write V = āˆ’Ę? Our voltage source can have any voltage, can’t it? Yes. Sure. But the coil will always provide an emf that’s exactly the opposite of that voltage. Think of it: we have some voltage that’s being applied across the terminals of the inductor, and so we’ll have some current. A current that’s changing. And it’s that changing current that will generate an emf equal to Ɛ = āˆ’LĀ·(dI/dt). So don’t think of Ɛ as some constant: it’s the self-inductance coefficient L that’s constant, but I (and, hence, dI/dt) and V are variable.

The point is: we cannot have any potential difference in a perfect conductor, which is what the terminals are: any potential difference, i.e. any electric field really, would cause huge currents. In other words, the voltage V and the emf Ɛ have to cancel each other out, all of the time. If not, we’d have huge currents in the wires re-establishing the V = āˆ’Ę equality.

Let me use Feynman’s argument here. Perhaps that will work better. šŸ™‚ Our ideal inductor is shown below: it’s shielded by some metal box so as to ensure it does not interact with the rest of the circuit. So we have some current I, which we assume to be an AC current, and we know someĀ voltageĀ is needed toĀ causeĀ that current, so that’s the potential difference V between the terminals.

[Illustration: an ideal inductor, shielded by a metal box, with current I and voltage V across the terminals a and b]

The total circulation of E – around the whole circuit – can be written as the sum of two parts:

∮ E·ds = ∫(a→b, through the coil) E·ds + ∫(b→a, between the terminals) E·ds

Now, we know circulation of E can only be caused by someĀ changingĀ magnetic field, which is what’s going on in the inductor:

∮ E·ds = āˆ’(d/dt)∫ B·n da

So this change in the magnetic flux is what is causing the ‘back emf’, and so the integral on the left is, effectively, equal to Ɛ—not minus Ɛ but +Ɛ. Now, the second integral is equal to V, because that’s the voltage V between the two terminals a and b. So the whole circulation amounts to 0 = Ɛ + V and, therefore, we have that:

V = āˆ’Ę = LĀ·dI/dt

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Re-visiting the speed of light, Planck’s constant, and the fine-structure constant

Note: I have published a paper that is very coherent and fully explains what the fine-structure constant actually is. There is nothing magical about it. It’s not some God-given number. It’s a scaling constant – and then some more. But not God-given. Check it out: The Meaning of the Fine-Structure Constant. No ambiguity. No hocus-pocus.

Jean Louis Van Belle, 23 December 2018

Original post:

A brother of mine sent me a link to an article he liked. Now, because we share some interest in physics and math and other stuff, I looked at it and…

Well… I was disappointed. Despite the impressive credentials of its author – a retired physics professor – it was very poorly written. It made me realize how much badly written stuff is around, and I am glad I am no longer wasting my time on it. However, I do owe my brother some explanation of (a) why I think it was bad, and of (b) what, in my humble opinion, he should be wasting his time on. šŸ™‚ So what is it all about?

The article talks about physicists deriving the speed of light from “theĀ electromagnetic properties of the quantum vacuum.” Now, it’s theĀ term ‘quantum‘, in ‘quantum vacuum’, that made me read the article.

Indeed, derivingĀ the theoretical speed of light in empty space from the properties of theĀ classicalĀ vacuum – aka empty space – is a piece of cake: it was done by Maxwell himself as he was figuring out his equations back in the 1850s (see my post on Maxwell’s equations and the speed of light). And then he compared it to the measuredĀ value, and he saw it was right on the mark. Therefore, saying that the speed of light is a property of the vacuum, or of empty space, is like a tautology: we may just as well put it the other way around, and say that it’s the speed of light thatĀ definesĀ the (properties of the) vacuum!

Indeed, as I’ll explain in a moment: the speed of light determines both the electric and the magnetic constant, μ0 and ε0, which are the (magnetic) permeability and the (electric) permittivity of the vacuum respectively. Both constants depend on the units we are working with (i.e. the units for electric charge, for distance, for time and for force – or for inertia, if you want, because force is defined in terms of overcoming inertia), but so they are just proportionality coefficients in Maxwell’s equations. So once we decide what units to use in Maxwell’s equations, then μ0 and ε0 are just proportionality coefficients which we get from c. So they are not separate constants really – I mean, they are not separate from c – and all of the ‘properties’ of the vacuum, including these constants, are in Maxwell’s equations.

In fact, when Maxwell compared the theoretical value of c with its presumed actual value, he didn’t compare c‘s theoretical value with the speed of light as measured by astronomers (like the 17th-century astronomer Ole Roemer, to whom our professor refers: he had a first go at it by suggesting some specific value based on his observations of the timing of the eclipses of one of Jupiter’s moons), but with c‘s value as calculated from the experimental values of μ0 and ε0! So he knew very well what he was looking at. In fact, to drive home the point, it may also be useful to note that the Michelson-Morley experiment – which accurately measured the speed of light – was done some thirty years later. So Maxwell had already left this world by then—very much in peace, because he had solved the mystery all 19th century physicists wanted to solve through his great unification: his set of equations covers it all, indeed: electricity, magnetism, light, and even relativity!

I think the article my brother likedĀ so much does a very lousy job in pointing all of that out, but that’s not why I wouldn’tĀ recommend it.Ā It got my attention because I wondered why one would try to derive the speed of light from the properties of the quantum vacuum. In fact, to be precise, I hoped the article would tell me what the quantumĀ vacuum actually is. Indeed, as far as I know, there’s only one vacuum—one ’empty space’: empty is empty, isn’t it? šŸ™‚ So I wondered: do we have a ‘quantum’ vacuum? And, if so, what is it, really?

Now, thatĀ is where the article is reallyĀ disappointing, I think. The professor dropsĀ a few names (like the Max Planck Institute, the University of Paris-Sud, etcetera), and then, promisingly, mentions ‘fleeting excitations of the quantum vacuum’ and ‘virtual pairs of particles’, but then he basically stops talking about quantum physics. Instead, he wanders off to share some philosophical thoughts on the fundamental physical constants. What makes it all worse is that even thoseĀ thoughts on the ‘essential’ constants are quite off the mark.

So… This post is just a ‘quick and dirty’ thing for my brother which, I hope, will be somewhat more thought-provoking than that article. More importantly, I hope that my thoughts will encourage him to try to grind through better stuff.

On Maxwell’s equations and the properties of empty space

Let me first say something about the speed of light indeed. Maxwell’s four equations mayĀ look fairlyĀ simple, but that’s only until one starts unpacking all those differential vector equations, and it’s only when going through all of their consequences that one starts appreciating their deep mathematical structure. Let me quickly copy how another bloggerĀ jotted them down: šŸ™‚

[Image: Maxwell’s equations – ā€œAnd God said …, and there was lightā€]

As I showed in my above-mentioned post, the speed of lightĀ (i.e. the speed with which an electromagnetic pulse or wave travels through space)Ā is just one of the many consequences of the mathematical structure of Maxwell’s set of equations. As such, the speed of light is a direct consequence of the ‘condition’, or the properties, of the vacuum indeed, as Maxwell suggested when he wrote thatĀ ā€œwe can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomenaā€.

Of course, whileĀ Maxwell still suggests light needs some ‘medium’ here – so that’s a reference to the infamous aetherĀ theory – we now know that’s because he was a 19th century scientist, and so we’ve done away with the aether concept (because it’s a redundant hypothesis), and so now we also know there’s absolutely no reason whatsoever to try to “avoid the inference.” šŸ™‚ It’s all OK, indeed: light is some kind of “transverse undulation” of… Well… Of what?

We analyze light as traveling fields, represented by two vectors, E and B, whose direction and magnitude vary in both space and time. E and B are field vectors, and represent the electric and magnetic field respectively. An equivalent formulation – more or less, that is (see my post on the LiĆ©nard-Wiechert potentials) – for Maxwell’s equations when only one (moving) charge is involved is:

E = āˆ’(q/4πε0)·[er′/r′² + (r′/c)·d(er′/r′²)/dt + (1/c²)·d²er′/dt²]

B = āˆ’er′×E/c

This re-formulation, which is Feynman’s preferred formula for electromagnetic radiation, is interesting in a number of ways. It clearly shows that, while we analyze the electric and magnetic field as separate mathematical entities, they’re one and the same phenomenon really, as evidenced by the B = āˆ’er′×E/c equation, which tells us that the magnetic field from a single moving charge is always normal (i.e. perpendicular) to the electric field vector, and also that B‘s magnitude is 1/c times the magnitude of E, so |B| = B = |E|/c = E/c. In short, B is fully determined by E, or vice versa: if we have one of the two fields, we have the other, so they’re ā€˜one and the same thing’ really—not in a mathematical sense, but in a real sense.

Also note that E and B‘s magnitude is just the same if we’re using natural units, so if we equate c with 1. Finally, as I pointed out in my post on the relativity of electromagnetic fields, if we would switch from one reference frame to another, we’ll have a different mix of E and B, but that different mix obviously describes the same physical reality. More in particular, if we’d be moving with the charges, the magnetic field sort of disappears to re-appear as an electric field. So the Lorentz force F = Felectric + Fmagnetic = qE + qvƗB is one force really, and its ā€˜electric’ and ā€˜magnetic’ component appear the way they appear in our reference frame only. In some other reference frame, we’d have the same force, but its components would look different, even if they, obviously, would and should add up to the same. [Well… Yes and no… You know there are relativistic corrections to be made to the forces too, but that’s a minor point, really. The force surely doesn’t disappear!]

All of this reinforces what you know already: electricity and magnetism are part and parcel of one and the same phenomenon,Ā the electromagnetic force field, and Maxwell’s equations are the mostĀ elegant way of ‘cutting it up’. Why elegant? Well… Click the Occam tab. šŸ™‚

Now, after having praised Maxwell once more, I must say that Feynman’s equations above have another advantage. In Maxwell’s equations, we see two constants, the electric and magnetic constant (denoted by μ0 and ε0 respectively), and Maxwell’s equations imply that the product of the electric and magnetic constant is the reciprocal of c²: μ0·ε0 = 1/c². So here we see ε0 and c only, so no μ0, so that makes it even more obvious that the magnetic and electric constant are related one to another through c.

[…] Let me digress briefly: why do we have c² in μ0·ε0 = 1/c², instead of just c? That’s related to the relativistic nature of the magnetic force: think about that B = E/c relation. Or, better still, think about the Lorentz equation F = Felectric + Fmagnetic = qE + qvƗB = q[E + (v/c)Ɨ(EƗer′)]: the 1/c factor is there because the magnetic force involves some velocity, and any velocity is always relative—and here I don’t mean relative to the frame of reference but relative to the (absolute) speed of light! Indeed, it’s the v/c ratio (usually denoted by β = v/c) that enters all relativistic formulas. So the left-hand side of the μ0·ε0 = 1/c² equation is best written as (1/c)Ā·(1/c), with one of the two 1/c factors accounting for the fact that the ā€˜magnetic’ force is a relativistic effect of the ā€˜electric’ force, really, and the other 1/c factor giving us the proper relationship between the magnetic and the electric constant. To drive home the point, I invite you to think about the following:

  • μ0 is expressed in (VĀ·s)/(AĀ·m), while ε0 is expressed in (AĀ·s)/(VĀ·m), so the dimension in which the μ0·ε0 product is expressed is [(VĀ·s)/(AĀ·m)]Ā·[(AĀ·s)/(VĀ·m)] = s²/m², so that’s the dimension of 1/c².
  • Now, this dimensional analysis makes it clear that we can sort of distribute 1/c² over the two constants. All it takes is re-defining the fundamental units we use to calculate stuff, i.e. the units for electric charge, for distance, for time and for force – or for inertia, as explained above. But so we could, if we wanted, equate both μ0 as well as ε0 with 1/c.
  • Now, if we would then equate c with 1, we’d have μ0 = ε0 = c = 1. We’d have to define our units for electric charge, for distance, for time and for force accordingly, but it could be done, and then we could re-write Maxwell’s set of equations using these ā€˜natural’ units.
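It’s easy to check the μ0·ε0 = 1/c² relation numerically with the SI values of the constants. A quick sketch, taking μ0 at its classical defined value of 4Ļ€Ć—10⁻⁷:

```python
import math

mu0 = 4 * math.pi * 1e-7   # magnetic constant, (V·s)/(A·m)
eps0 = 8.8541878128e-12    # electric constant, (A·s)/(V·m)
c = 2.99792458e8           # speed of light, m/s

print(mu0 * eps0)          # ā‰ˆ 1.11265Ɨ10⁻¹⁷ s²/m²
print(1 / c**2)            # the same number: μ0·ε0 = 1/c²
```

The two printed values agree to about nine significant figures, which is as good as the rounded value of ε0 allows.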

In any case, the nitty-gritty here is less important: the point is that μ0Ā and ε0Ā are also related through the speed of light and, hence, they are ‘properties’ of the vacuum as well. [I may add that this is quite obvious if you look at their definition, but we’re approaching the matter from another angle here.]

In any case, we’re done with this. On to the next!

On quantum oscillations, Planck’s constant, and Planck unitsĀ 

The second thought I want to develop is about the mentioned quantum oscillation. What is it? Or what could it be? An electromagnetic wave is caused by a moving electric charge. What kind of movement? Whatever: the charge could move up or down, or it could just spin around some axis—whatever, really. For example, if it spins around some axis, it will have a magnetic moment and, hence, the field is essentially magnetic, but then, again, E and B are related and so it doesn’t really matter if the first cause is magnetic or electric: that’s just our way of looking at the world, because in another reference frame, one that’s moving with the charges, the field would essentially be electric. So the motion can be anything: linear, rotational, or non-linear in some irregular way. It doesn’t matter: any motion can always be analyzed as the sum of a number of ā€˜ideal’ motions. So let’s assume we have some elementary charge in space, and it moves and so it emits some electromagnetic radiation.

So nowĀ we need to think about that oscillation. The key question is: how small can it be? Indeed, in one of my previous posts, I tried to explain some of the thinking behind the idea of the ‘Great Desert’, as physicists call it. The whole idea is based on our thinking about the limit: what is the smallest wavelength that still makes sense? So let’s pick up that conversation once again.

The Great Desert lies between the 10³² and 10⁓³ Hz scales. 10³² Hz corresponds to a photon energy of Eγ = h·f = (4Ɨ10⁻¹⁵ eV·s)·(10³² Hz) = 4Ɨ10¹⁷ eV = 400,000 tera-electronvolt (1 TeV = 10¹² eV). I use the γ (gamma) subscript in my Eγ symbol for two reasons: (1) to make it clear that I am not talking about the electric field E here but energy, and (2) to make it clear we are talking ultra-high-energy gamma-rays here.
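The arithmetic is easy to reproduce. A quick sketch, using the more precise value h ā‰ˆ 4.136Ɨ10⁻¹⁵ eV·s:

```python
h_eV = 4.135667662e-15   # Planck's constant in eV·s
f = 1e32                 # Hz: the lower edge of the 'Great Desert'

E_gamma = h_eV * f       # photon energy in eV
print(E_gamma)           # ā‰ˆ 4.14Ɨ10¹⁷ eV
print(E_gamma / 1e12)    # ā‰ˆ 414,000 TeV, i.e. the ~400,000 TeV mentioned above
```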

In fact, γ-rays of this frequency and energy are theoretical only. Ultra-high-energy gamma-rays are defined as rays with photon energies higher than 100 TeV, which is the upper limit for very-high-energy gamma-rays, which have been observed as part of the radiation emitted by so-called gamma-ray bursts (GRBs): flashes associated with extremely energetic explosions in distant galaxies. Wikipedia refers to them as the ā€˜brightest’ electromagnetic events known to occur in the Universe. These rays are not to be confused with cosmic rays, which consist of high-energy protons and atomic nuclei stripped of their electron shells. Cosmic rays aren’t rays really and, because they consist of particles with a considerable rest mass, their energy is even higher. The so-called Oh-My-God particle, for example, which is the most energetic particle ever detected, had an energy of 3Ɨ10²⁰ eV, i.e. 300 million TeV. But it’s not a photon: its energy is largely kinetic energy, with the rest mass m0 counting for a lot in the m in the E = mĀ·c² formula. To be precise: the mentioned particle was thought to be an iron nucleus, and it packed the equivalent energy of a baseball traveling at 100 km/h!
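That baseball comparison is easy to verify as an order-of-magnitude check. A quick sketch, assuming a standard 0.145 kg baseball (that mass is my assumption, not a figure from the text):

```python
eV = 1.602176634e-19    # joule per electronvolt

E_particle = 3e20 * eV  # Oh-My-God particle: ā‰ˆ 48 J

m_baseball = 0.145      # kg (assumed standard baseball mass)
v = 100 / 3.6           # 100 km/h in m/s
E_baseball = 0.5 * m_baseball * v**2  # kinetic energy: ā‰ˆ 56 J

print(E_particle, E_baseball)  # same order of magnitude: a few tens of joules
```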

But let me refer you to another source for a good discussion on these high-energy particles, so I can get back to the energy of electromagnetic radiation. When I talked about the Great Desert in that post, I did so using the Planck-Einstein relation (E = h·f), which embodies the idea of the photon being valid always and everywhere and, importantly, at every scale. I also discussed the Great Desert using real-life light being emitted by real-life atomic oscillators. Hence, I may have given the (wrong) impression that the idea of a photon as a ā€˜wave train’ is inextricably linked with these real-life atomic oscillators, i.e. to electrons going from one energy level to the next in some atom. Let’s explore these assumptions somewhat more.

Let’s start with the second point. Electromagnetic radiation is emitted by any accelerating electric charge, so the atomic oscillator model is an assumption that should not be essential. And it isn’t. For example, whatever is left of the nucleus after alpha or beta decay (i.e. a nuclear decay process resulting in the emission of an α- or β-particle) is likely to be in an excited state, and likely to emit a gamma-ray for about 10⁻¹² seconds, so that’s a burst that’s about 10,000 times shorter than the 10⁻⁸ seconds it takes for the energy of a radiating atom to die out. [As for the calculation of that 10⁻⁸ sec decay time – so that’s like 10 nanoseconds – I’ve talked about this before but it’s probably better to refer you to the source, i.e. one of Feynman’s Lectures.]

However, what we’re interested in is not the energy of theĀ photon, but the energy of one cycle. In other words, we’re not thinking of the photon as some wave train here, but what we’re thinking about is the energy that’s packed into a space corresponding toĀ one wavelength. What can we say about that?

As you know, that energy will depend both on the amplitude of the electromagnetic wave as well as on its frequency. To be precise, the energy is (1) proportional to the square of the amplitude, and (2) proportional to the frequency. Let’s look at the first proportionality relation. It can be written in a number of ways, but one way of doing it is stating the following: if we know the electric field, then the amount of energy that passes per square meter per second through a surface that is normal to the direction in which the radiation is going (which we’ll denote by S – the s from surface – in the formula below), must be proportional to the average of the square of the field. So we write S āˆ 怈E²怉, and so we should think about the constant of proportionality now. Now, let’s not get into the nitty-gritty, and so I’ll just refer to Feynman for the derivation of the formula below:

S = ε0·c·〈E²〉

So the constant of proportionality is ε0·c. [Note that, in light of what we wrote above, we can also write this as S = (1/(μ0Ā·c))·〈(cĀ·B)²〉 = (c/μ0)·〈B²〉, so that underlines once again that we’re talking one electromagnetic phenomenon only really.] So that’s a nice and rather intuitive result in light of all of the other formulas we’ve been jotting down. However, it is a ā€˜wave’ perspective. The ā€˜photon’ perspective assumes that, somehow, the amplitude is given and, therefore, the Planck-Einstein relation only captures the frequency variable: Eγ = h·f.
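To get a feel for the magnitudes in the S = ε0·c·〈E²〉 formula, here’s a quick sketch for a sinusoidal wave with an (arbitrary, made-up) amplitude of 100 V/m, for which 〈E²〉 = E0²/2:

```python
eps0 = 8.8541878128e-12   # electric constant
c = 2.99792458e8          # speed of light, m/s

E0 = 100.0                # hypothetical field amplitude, V/m
E_sq_avg = E0**2 / 2      # 〈E²〉 for a sinusoidal field

S = eps0 * c * E_sq_avg   # energy flux, W/m²
print(S)                  # ā‰ˆ 13.3 W/m²
```

So a 100 V/m wave carries an energy flux of roughly 13 watt per square meter, which gives some intuition for the scale of the ε0·c proportionality constant.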

Indeed, ‘more energy’ in the ‘wave’ perspective basically means ‘more photons’, but photons are photons: they have a definiteĀ frequencyĀ and a definiteĀ energy, and both are given by that Planck-Einstein relation.Ā So let’s look at that relation by doing a bit of dimensionalĀ analysis:

  • Energy is measured in electronvolt or, using SI units, joule: 1 eV ā‰ˆ 1.6Ɨ10⁻¹⁹ J. Energy is force times distance: 1 joule = 1 newtonĀ·meter, which means that a larger force over a shorter distance yields the same energy as a smaller force over a longer distance. The oscillations we’re talking about here involve very tiny distances obviously. But the principle is the same: we’re talking some moving charge q, and the power – which is the time rate of change of the energy – that goes in or out at any point of time is equal to dW/dt = FĀ·v, with W the work that’s being done by the charge as it emits radiation.
  • I would also like to add that, as you know, forces are related to theĀ inertiaĀ of things. Newton’s Law basically defines a force as that what causes a mass to accelerate: F = mĀ·a =Ā mĀ·(dv/dt) = d(mĀ·v)/dt = dp/dt, with p theĀ momentumĀ of the object that’s involved. When charges are involved, we’ve got the same thing: aĀ potential differenceĀ will cause some currentĀ to change, and one of the equivalents of Newton’s LawĀ F = mĀ·a = mĀ·(dv/dt) in electromagnetism is V = LĀ·(dI/dt). [I am just saying this so you get a better ‘feel’ for what’s going on.]
  • Planck’s constant is measured inĀ electronvoltĀ·secondsĀ (eVĀ·s) or in, using SI units, inĀ jouleĀ·secondsĀ (JĀ·s), so its dimension is that of (physical) action, which is energy times time: [energy]Ā·[time]. Again, a lot of energy during a short time yields the same energy as less energy over a longer time. [Again, I am just saying this so you get a better ‘feel’ for these dimensions.]
  • The frequency f is the number of cycles per time unit, so that’s expressed per second, i.e. in hertz (Hz) = 1/second = s⁻¹.

So… Well… It all makes sense: [x joule] = [6.626Ɨ10⁻³⁓ joule]Ā·[1 second]Ɨ[f cycles]/[1 second]. But let’s try to deepen our understanding even more: what’s the Planck-Einstein relation really about?

To answer that question, let’s think some more about the wave function. As you know, it’s customary to express the frequency as an angular frequency ω, as used in the wave function A(x, t) = A0Ā·sin(kx āˆ’ ωt).Ā The angular frequency is the frequency expressed inĀ radiansĀ per second. That’s because we need an angleĀ in our wave function, and so we need to relate x and t to some angle.Ā The way to think about this is as follows: one cycle takes a time T (i.e. the periodĀ of the wave)Ā which is equal to T = 1/f. Yes: one second divided by the number of cycles per second gives you the time that’s needed for one cycle. One cycle is also equivalent to our argument ωt going around the full circle (i.e. 2Ļ€), so we write:Ā  ω·T = 2Ļ€ and, therefore:

ω = 2Ļ€/T = 2π·f
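These relations between f, T and ω are trivial to check numerically (a quick sketch for an arbitrary 50 Hz signal):

```python
import math

f = 50.0                  # Hz
T = 1 / f                 # period: 0.02 s
omega = 2 * math.pi / T   # angular frequency, rad/s

print(T, omega, 2 * math.pi * f)  # ω = 2Ļ€/T = 2π·f ā‰ˆ 314.16 rad/s
```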

Now we’re ready to play with the Planck-Einstein relation. WeĀ knowĀ it gives us the energy ofĀ oneĀ photon really, but what ifĀ we re-write our equation Eγ = hĀ·fĀ as Eγ/fĀ = h? The dimensions in this equation are:

[x joule]·[1 second]/[f cycles] = [6.626Ɨ10⁻³⁓ joule]·[1 second]

⇔ x = 6.626Ɨ10⁻³⁓ joule per cycle

So that means that the energy per cycle is equal to 6.626Ɨ10⁻³⁓ joule, i.e. the value of Planck’s constant.

Let me rephrase this truly amazing result, so you appreciate it—perhaps: regardless of the frequency of the light (or our electromagnetic wave, in general) involved, the energy per cycle, i.e. per wavelength or per period, is always equal to 6.626Ɨ10⁻³⁓ joule or, using the electronvolt as the unit, 4.135667662Ɨ10⁻¹⁵ eV. So, in case you wondered, that is the true meaning of Planck’s constant!
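The point is easy to check numerically: whatever the frequency, dividing the photon energy E = h·f by the number of cycles per second gives back h. A quick sketch:

```python
h = 6.62607015e-34   # Planck's constant, J·s

# radio, visible light, and the 'Great Desert' scale
for f in (1e9, 5e14, 1e32):
    E_photon = h * f            # photon energy in joule
    print(E_photon / f)         # energy per cycle: always 6.626Ɨ10⁻³⁓ J
```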

Now, if we have the frequency f, we also have the wavelength Ī», because the velocity of the wave is the frequency times the wavelength: c = λ·f and, therefore, Ī» = c/f. So if we increase the frequency, the wavelength becomes smaller and smaller, and so we’re packing the same amount of energy – admittedly, 4.135667662Ɨ10⁻¹⁵ eV is a very tiny amount of energy – into a space that becomes smaller and smaller. Well… What’s tiny, and what’s small? All is relative, of course. šŸ™‚ So that’s where the Planck scale comes in. If we pack that amount of energy into some tiny little space of the Planck dimension, i.e. a ā€˜length’ of 1.6162Ɨ10⁻³⁵ m, then it becomes a tiny black hole, and it’s hard to think about how that would work.
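And the frequency-to-wavelength conversion at the upper edge of the Great Desert is a one-liner (a quick sketch):

```python
c = 2.99792458e8   # speed of light, m/s
f = 1e43           # Hz: the upper edge of the 'Great Desert'

lam = c / f        # wavelength in m
print(lam)         # ā‰ˆ 3Ɨ10⁻³⁵ m: the Planck length scale
```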

[…] Let me make a small digression here. I said it’s hard to think about black holes but, of course, it’s not because it’s ā€˜hard’ that we shouldn’t try it. So let me just mention a few basic facts. For starters, black holes do emit radiation! So they swallow stuff, but they also spit stuff out. More in particular, there is the so-called Hawking radiation, which Stephen Hawking predicted back in 1974.

Let me quickly make a few remarks on that: Hawking radiation is basically a form of blackbody radiation, so all frequencies are there, as shown below: theĀ distributionĀ of the various frequencies depends on the temperature of the black body, i.e. the black hole in this case. [The black curve is the curve that Lord Rayleigh and Sir James Jeans derived in the late 19th century, usingĀ classicalĀ theory only, so that’s the one that does notĀ correspond to experimental fact, and which led Max Planck to become the ‘reluctant’ father of quantum mechanics. In any case, that’s history and so I shouldn’t dwell on this.]

[Graph: blackbody radiation spectra at various temperatures, with the classical Rayleigh–Jeans curve shown in black]

The interesting thing about blackbody radiation, including Hawking radiation, is that it reduces energy and, hence, the equivalent mass of our blackbody. So Hawking radiation reduces the mass and energy of black holes and is therefore also known as black hole evaporation. So black holes that lose more mass than they gain through other means are expected to shrink and ultimately vanish. Therefore, there are all kinds of theories that say why micro black holes, like that Planck-scale black hole we’re thinking of right now, should be much larger net emitters of radiation than large black holes and, hence, why they should shrink and dissipate faster.

Hmm… Interesting… What do we do with all of this information? Well… Let’s think about it as we continue our trek on this long journey to reality over the next year or, more probably, years (plural). šŸ™‚

The key lesson here is that space and time are intimately related because of the idea of movement, i.e. the idea of something having some velocity, and that it’s not so easy to separate the dimensions of time and distance in any hard and fast way. As energy scales become larger and, therefore, our natural time and distance units become smaller and smaller, it’s the energy concept that comes to the fore.Ā It sort of ‘swallows’ all other dimensions, and it does lead to limiting situations which are hard to imagine. Of course, that just underscores the underlying unity of Nature, and the mysteries involved.

So… To relate all of this back to the story that our professor is trying to tell, it’s a simple story really. He’s talking aboutĀ two fundamental constants basically, c and h, pointing out thatĀ cĀ is a property of empty space, andĀ h is related to something doing something. Well… OK. That’s really nothing new, and surely notĀ ground-breaking research. šŸ™‚

Now, let me finish my thoughts on all of the above by making one more remark. If you’ve read a thing or two about this – which you surely have – you’ll probably say: this is not how people usually explain it. That’s true, they don’t. Anything I’ve seen about this just associates the 10⁓³ Hz scale with the 10²⁸ eV energy scale, using the same Planck-Einstein relation. For example, the Wikipedia article on micro black holes writes that ā€œthe minimum energy of a microscopic black hole is 10¹⁹ GeV [i.e. 10²⁸ eV], which would have to be condensed into a region on the order of the Planck length.ā€ So that’s wrong. I want to emphasize this point because I’ve been led astray by it for years. It’s not the total photon energy, but the energy per cycle that counts. Having said that, it is correct, however, and easy to verify, that the 10⁓³ Hz scale corresponds to a wavelength of the Planck scale: Ī» = c/f = (3Ɨ10⁸ m/s)/(10⁓³ s⁻¹) = 3Ɨ10⁻³⁵ m. The confusion between the photon energy and the energy per wavelength arises because of the idea of a photon: it travels at the speed of light and, hence, because of the relativistic length contraction effect, it is said to be point-like, to have no dimension whatsoever. So that’s why we think of packing all of its energy in some infinitesimally small place. But you shouldn’t think like that. The photon is dimensionless in our reference frame only: in its own ā€˜world’, it is spread out, so it is a wave train. And it’s in its ā€˜own world’ that the contradictions start… šŸ™‚

OK. Done!

My third and final point is about what our professor writes on the fundamental physical constants, and more in particular on what he writes on the fine-structure constant. In fact, I could just refer you toĀ my own post on it, but that’s probably a bit too easy for me and a bit difficult for you šŸ™‚ so let me summarize that post and tell you what you need to know about it.

The fine-structure constant

The fine-structure constant α is a dimensionless constant which also illustrates the underlying unity of Nature, but in a way that’s much more fascinating than the two or three things the professor mentions. Indeed, it’s quite incredible how this number (α = 0.00729735…, but you’ll usually see it written as its reciprocal, which is a number that’s close to 137.036…) links charge with the relative speeds, radii, and the mass of fundamental particles and, therefore, how it also links these concepts with each other. And, yes, the fact that it is, effectively, dimensionless, unlike h or c, makes it even more special. Let me quickly sum up what the very same number α all stands for:

(1) α is the square of the electron charge expressed in Planck units: α = eP².

(2) α is the square root of the ratio of (a) the classical electron radius and (b) the Bohr radius: α = √(re/r). You’ll see this more often written as re = α²r. Also note that this is an equation that does not depend on the units, in contrast to equations 1 (above), and 4 and 5 (below), which require you to switch to Planck units. It’s the square of a ratio and, hence, the units don’t matter: they fall away.

(3) α is the (relative) speed of an electron: α = v/c. [The relative speed is the speed as measured against the speed of light. Note that the ā€˜natural’ unit of speed in the Planck system of units is equal to c. Indeed, if you divide one Planck length by one Planck time unit, you get (1.616Ɨ10⁻³⁵ m)/(5.391Ɨ10⁻⁓⁓ s) = c m/s. However, this is another equation, just like (2), that does not depend on the units: we can express v and c in whatever unit we want, as long as we’re consistent and express both in the same units.]

(4) α is also equal to the product of (a) the electron mass (which I’ll simply write as meĀ here) and (b) the classical electron radius reĀ (if both are expressed in Planck units): α =Ā meĀ·re. Now IĀ thinkĀ that’s, perhaps, theĀ mostĀ amazing of all of the expressions for α. [If you don’t think that’s amazing, I’d really suggest you stop trying to study physics. :-)]

Also note that, from (2) and (4), we find that:

(5) The electron mass (in Planck units) is equal to me = α/re = α/(α²r) = 1/(αr). So that gives us an expression, using α once again, for the electron mass as a function of the Bohr radius r expressed in Planck units.

Finally, we can also substitute (1) in (5) to get:

(6) The electron mass (in Planck units) is equal to me = α/re = eP²/re. Using the Bohr radius, we get me = 1/(αr) = 1/(eP²·r).

So… As you can see, this fine-structure constant really links allĀ of the fundamental properties of the electron: its charge, its radius, its distance to the nucleus (i.e. the Bohr radius), its velocity, its mass (and, hence, its energy),…
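If you don’t quite believe all of this, you can check it numerically. The sketch below uses CODATA values in SI units, converts to Planck units explicitly, and verifies relations (1), (2) and (4) to within floating-point precision:

```python
import math

# Sketch: checking the fine-structure constant identities numerically,
# using CODATA values in SI units.
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 299792458.0        # speed of light, m/s
m_e  = 9.1093837015e-31   # electron mass, kg
G    = 6.67430e-11        # Newton's constant, m^3/(kg*s^2)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)   # ~ 0.0072973..., i.e. ~ 1/137.036

# (1) alpha is the square of the electron charge in Planck units:
q_P = math.sqrt(4 * math.pi * eps0 * hbar * c)   # Planck charge, ~ 1.876e-18 C
assert abs((e / q_P)**2 - alpha) < 1e-15

# (2) classical electron radius and Bohr radius: r_e = alpha^2 * r
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)   # ~ 2.818e-15 m
r_bohr = hbar / (m_e * c * alpha)                # ~ 5.292e-11 m
assert abs(r_e / r_bohr - alpha**2) < 1e-15

# (4) the product m_e*r_e, with both in Planck units, is alpha again:
m_P = math.sqrt(hbar * c / G)                    # Planck mass,   ~ 2.176e-8 kg
l_P = math.sqrt(hbar * G / c**3)                 # Planck length, ~ 1.616e-35 m
assert abs((m_e / m_P) * (r_e / l_P) - alpha) < 1e-12

print(alpha, 1 / alpha)   # ~ 0.0072973525693 and ~ 137.036
```

The last identity works out exactly because mPĀ·lP = ħ/c and meĀ·re = αĀ·Ä§/c, so the ħ/c factors cancel.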

So… Why is it what it is?

Well… We all marvel at this, but what can we say about it, really? I struggle with how to interpret this, just as much as – or, probably, much more than šŸ™‚ – the professor who wrote the article I don’t like (because it’s so imprecise, and that’s what made me write all of what I am writing here).

Having said that, it’s obvious that it points to a unity beyond these numbers and constants that I am only beginning to appreciate for what it is: deep, mysterious, and very beautiful. But I don’t think that professor does a good job of showing how deep, mysterious and beautiful it all is. Then again, that’s up to you, my brother, and you, my imaginary reader, to judge, of course. šŸ™‚

[…] I forgot to mention what I mean by ā€˜Planck units’. Well… Once again, I should refer you to one of my other posts. But, yes, that’s too easy for me and a bit difficult for you. šŸ™‚ So let me just note we get those Planck units by equating no less than five fundamental physical constants to 1, notably (1) the speed of light, (2) Planck’s (reduced) constant, (3) Boltzmann’s constant, (4) Coulomb’s constant and (5) Newton’s constant (i.e. the gravitational constant). Hence, we have a set of five equations here (c = ħ = kB = ke = G = 1), and so we can solve that to get the Planck units, i.e. the Planck length unit, the Planck time unit, the Planck mass unit (and, with it, the Planck energy unit), the Planck charge unit and, finally (oft forgotten), the Planck temperature unit. Of course, you should note that the mass and energy units are directly related because of the mass-energy equivalence relation E = mc², which simplifies to E = m if c is equated to 1. [I could also say something about the relation between temperature and (kinetic) energy, but I won’t, as it would only further confuse you.]
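For what it’s worth, here is how that solving actually looks numerically. The sketch below starts from the SI values of the five constants and spits out the Planck units (the numbers in the comments are approximate):

```python
import math

# Sketch: solving c = hbar = k_B = k_e = G = 1 for the Planck units,
# starting from the SI values of the five constants.
c    = 299792458.0        # speed of light, m/s
hbar = 1.054571817e-34    # reduced Planck constant, J*s
G    = 6.67430e-11        # Newton's constant, m^3/(kg*s^2)
k_B  = 1.380649e-23       # Boltzmann's constant, J/K
k_e  = 8.9875517923e9     # Coulomb's constant, N*m^2/C^2

l_P = math.sqrt(hbar * G / c**3)             # Planck length:      ~ 1.616e-35 m
t_P = math.sqrt(hbar * G / c**5)             # Planck time:        ~ 5.391e-44 s
m_P = math.sqrt(hbar * c / G)                # Planck mass:        ~ 2.176e-8 kg
E_P = m_P * c**2                             # Planck energy:      ~ 1.956e9 J (E = m for c = 1)
q_P = math.sqrt(hbar * c / k_e)              # Planck charge:      ~ 1.876e-18 C
T_P = math.sqrt(hbar * c**5 / (G * k_B**2))  # Planck temperature: ~ 1.417e32 K

# sanity check: one Planck length per Planck time is the speed of light
assert abs(l_P / t_P - c) < 1e-6
```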

OK. Done! šŸ™‚

Addendum:Ā How to think about space and time?

If you read the argument on the Planck scale and constant carefully, then you’ll note that it does notĀ depend on the idea of an indivisible photon. However, it does depend on that Planck-Einstein relation being valid always and everywhere. Now, the Planck-Einstein relation is, in its essence, a fairly basic result from classical electromagnetic theory: itĀ incorporatesĀ quantum theory – remember: it’s the equation that allowed Planck to solve the black-body radiation problem, and so it’s why they call Planck the (reluctant) ‘Father of Quantum Theory’ – but it’sĀ notĀ quantum theory.

So the obvious question is: can we make this reflection somewhat more general, so that we can think of the electromagnetic force as an example only? In other words: can we apply the thoughts above to any force and any movement really?

The truth is: I haven’t advanced enough in my little study to give the equations for the other forces. Of course, we could think of gravity, and I developed some thoughts on what gravity waves might look like, but nothing specific really. And then we have the shorter-range nuclear forces, of course: the strong force, and the weak force. The laws involved are very different. The strong force involves color charges, and the way distances work is entirely different. So it would surely be some different analysis. However, the results should be the same. Let me offer some thoughts though:

  • We know that the relative strength of the nuclear force is much larger, because it pulls like charges (protons) together, despite the strong electromagnetic force that wants to push them apart! So the mentioned problem of trying to ‘pack’ some oscillation in some tiny little space should be worse with the strong force. And the strong force is there, obviously, at tiny little distances!
  • Even gravity should become important, because if we’ve got a lot of energy packed into some tiny space, its equivalent mass will ensure the gravitational forces also become important. In fact, that’s what the whole argument was all about!
  • There’s also all this talk about the fundamental forces becoming one at the Planck scale. I must, again, admit my knowledge is not advanced enough to explain how that would be possible, but I must assume that, if physicists are making such statements, the argument must be fairly robust.

So…Ā Whatever charge or whatever force we are talking about, we’ll be thinking of waves or oscillations—or simply movement, but it’s always a movement in a force field, and so there’s power and energy involved (energy is force times distance, and power is the time rate of change of energy). So, yes, we should expect the same issues in regard to scale. And so that’s what’s captured byĀ h.

As we’re talking the smallest things possible, I should also mention that there are also other inconsistencies in the electromagnetic theory, which should (also) have their parallel for other forces. For example, the idea of aĀ point chargeĀ is mathematically inconsistent, as I show in my post on fields and charges. Charge, any charge really, must occupyĀ someĀ space. It cannotĀ all be squeezed into one dimensionless point. So the reasoningĀ behind the Planck time and distance scale is surely valid.

In short, the whole argument about the Planck scale and those limits is very valid. However, does it imply our thinking about the Planck scale is actually relevant? I mean: it’s not because we can imagine what things might look like – they may look like those tiny little black holes, for example – that these things actually exist. GUT or string theorists obviously think they are thinking about something real. But, frankly, Feynman had a point when he said what he said about string theory, shortly before his untimely death in 1988: ā€œI don’t like that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation—a fix-up to say, ā€˜Well, it still might be true.ā€™ā€

It’s true that the so-called Standard Model does not look very nice. It’s not like Maxwell’s equations. It’s complicated. It’s got various ā€˜sectors’: the electroweak sector, the QCD sector, the Higgs sector,… So ā€˜it looks like it’s got too much going on’, as a friend of mine said when he looked at a new design for mountainbike suspension. šŸ™‚ But, unlike mountainbike designs, there’s no real alternative to the Standard Model. So perhaps we should just accept it is what it is and, hence, in a way, accept Nature as we can see it. So perhaps we should just continue to focus on what’s here, before we reach the Great Desert, rather than wasting time on trying to figure out what things might look like on the other side, especially because we’ll never be able to test our theories about ā€˜the other side.’

On the other hand, we can see where the Great Desert sort of starts (somewhere near the 10³² Hz scale), and so it’s only natural to think it should also stop somewhere. In fact, we know where it stops: it stops at the 10⁓³ Hz scale, because everything beyond that doesn’t make sense. The question is: is there actually something there? Like fundamental strings or whatever you want to call them. Perhaps we should just stop where the Great Desert begins. And what’s the Great Desert anyway? Perhaps it’s a desert indeed, and so then there is absolutely nothing there. šŸ™‚

Hmm… There’s not all that much one can say about it. However, when looking at the history of physics, there’s one thing that’s really striking. Most of what physicists could think of, in the sense that it made physical sense, turned out to exist. Think of anti-matter, for instance. Paul Dirac thought it might exist, that it made sense for it to exist, and so everyone started looking for it, and Carl Anderson found it a few years later (in 1932). In fact, it had been observed before, but people just didn’t pay attention, so they didn’t want to see it, in a way. […] OK. I am exaggerating a bit, but you know what I mean. The 1930s are full of examples like that. There was a burst of scientific creativity, as the formalism of quantum physics was being developed, and the experimental confirmations of the theory just followed suit.

In the field of astronomy, or astrophysics I should say, it was the same with black holes. No one could really imagine the existence of black holes until the 1960s or so: they were thought of as a mathematical curiosity only, a logical possibility. However, the circumstantial evidence is now quite strong and so… Well… It seems a lot of what we can think of actually has some existence somewhere. šŸ™‚

So… Who knows? […] I surely don’t. And so I need to get back to the grind and work my way through the rest of Feynman’s Lectures and the related math. However, this was a nice digression, and so I am grateful to my brother for initiating it. šŸ™‚

Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Self-inductance, mutual inductance, and the power and energy of inductors

As Feynman puts it, studying physics is not always about ā€˜the great and esoteric heights’. In fact, you usually have to come down from them fairly quickly – studying physics is similar to mountain climbing in that regard šŸ™‚ – and study ā€˜relatively low-level subjects’, such as electrical circuits, which is what we’ll do in this and the next post.

As I’ve introduced some key concepts in a previous post already, let me recapitulate the basics, which include the concept of the electromotive force, which is basically the voltage, i.e. the potential difference, that’s produced in a loop or coil of wire as the magnetic flux changes. I also talked about the impedance in an AC circuit. Finally, we also discussed the power and energies involved. Important results from this previous discussion include (but are not limited to):

  1. A constant speed AC generator will create an alternating current with the emf, i.e. the voltage, varying as V0Ā·sin(ωt).
  2. If we only have resistors as circuit elements, and the resistance in the circuit adds up to R, then the electric current in the circuit will be equal to I = ʐ/R = V/R = (V0/R)Ā·sin(ωt). So that’s Ohm’s Law, basically.
  3. The power that’s produced and consumed in an AC circuit is the product of the voltage and the current, so P = Ɛ·I = VĀ·I. We also showed this electrical power is equal to theĀ mechanicalĀ power dW/dt that makes the generator run.
  4. Finally, we explained the concept of impedance (denoted by Z) using Euler’s formula: Z = |Z|·e^(iĪø), mentioning that, if other circuit elements than resistors are involved, such as inductors, then it’s quite likely that the current signal will lag the voltage signal, with the phase factor Īø telling us by how much.
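That phase factor is easy to play with using Python’s complex numbers. The sketch below assumes a simple series circuit of a resistor and an inductor, with purely illustrative values, and uses the standard result that an inductor contributes a reactance ωL to the impedance (we’ll get to inductors below):

```python
import cmath
import math

# Sketch: impedance as Z = |Z|*e^(i*theta) for a series R-L circuit.
# Values are illustrative, not from the text.
R = 100.0                  # resistance, ohm
L = 0.5                    # inductance, henry
omega = 2 * math.pi * 50   # angular frequency for 50 Hz mains

Z = complex(R, omega * L)  # series R-L impedance: Z = R + i*omega*L
magnitude = abs(Z)         # |Z|, in ohm
theta = cmath.phase(Z)     # the current lags the voltage by this angle

V0 = 230.0                 # peak voltage, volt
I0 = V0 / magnitude        # peak current, ampere
print(magnitude, math.degrees(theta), I0)
```

With these numbers, the current lags the voltage by almost 60 degrees: the inductive reactance ωL ā‰ˆ 157 Ī© dominates the 100 Ī© resistance.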

It’s now time to introduce those ‘other’ circuit elements. So let’s start with inductors here, and the concept ofĀ inductanceĀ itself. There’s a lot of stuff to them, and so let’s go over it point by point.

The concept of self-inductance

In its simplest form, an inductor is just a coil, but they come in all sizes and shapes. If you want to see what they might look like, just google some images of micro-electronic inductors, and then, just to see the range of applications, some images of inductors used for large-scale industrial applications. If you do so, you’re likely to see images of transformers too, because transformers work on the principle of mutual inductance, and so they involve two coils, i.e. two inductors.

Contrary to what you might expect, the concept of self-inductance (or inductance tout court) is quite simple: a changing current will cause a changing magnetic field and, hence, some emf. Now, it turns out that the induced emf is proportional to the change in current. So we’ve got another constant of proportionality here, just like how we defined resistance, or capacitance. So, in many ways, the inductance is just another proportionality coefficient. If we denote it by L – the symbol is said to honor the Russian physicist Heinrich Lenz, whom you know from Lenz’s Law – then we define it as:

L = āˆ’Ę/(dI/dt)

The dI/dt factor is, obviously, the time rate of change of the current, and the negative sign indicates that the emf opposes the change in current, so it will tend to cause an opposing current. That’s why the emf involved is often referred to as a ā€˜back emf’. So that’s Lenz’s Law, basically. As you might expect, physicists came up with yet another derived unit, the henry, to honor yet another physicist, Joseph Henry, an American scientist who was a contemporary of Michael Faraday and independently discovered pretty much the same things as Faraday: one henry (H) equals one voltĀ·second per ampere: 1 H = 1 VĀ·s/A.
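In other words, the henry tells you how many volts you get per ampere-per-second of current change. A one-function sketch, with a purely illustrative 10 mH inductance:

```python
# Sketch: the defining relation emf = -L*(dI/dt), for an illustrative
# 10 mH inductor; dI/dt is approximated by a finite current change.
L = 0.01   # inductance in henry (1 H = 1 V*s/A)

def back_emf(dI, dt):
    """Induced emf (V) opposing a current change of dI ampere over dt seconds."""
    return -L * dI / dt

# ramping the current up by 2 A in one millisecond induces about -20 V:
emf = back_emf(2.0, 1e-3)
print(emf)
```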

The concept of mutual inductance

Feynman introduces the topic of inductance with a two-coil set-up, as shown below, noting that a changing current in coil 1 will induce some emf in coil 2, with a proportionality coefficient that we’ll denote by M21. Conversely, a changing current in coil 2 will induce some emf in coil 1, with coefficient M12. M12 and M21 are constants: they depend on the geometry of the coils, including the length of the solenoid (l), its surface area (S) and the number of turns of the coils (N1 and N2).

mutual inductance

The next step in the analysis is then to acknowledge that each coil should also produce a ‘back emf’ in itself, which we can denote by M11Ā and M22Ā respectively, but then these constants are, of course, equal to the self-inductanceĀ of the coils so, taking into account the convention in regard to the sign of the self-inductance, we write:

L1 = āˆ’ M11 and L2 = āˆ’ M22

You will now wonder: what’s theĀ totalĀ emf in each coil, taking into account that we do not only have mutual inductance but also self-inductance? Frankly, when I was a kid, and my father tried to tell me one or two things about this, it confused me very much. I could not imagine what happened in one coil, let alone in twoĀ coils. I had this vision of a current producing some ‘back-current’, and then the ‘back-current’ producing ‘back-current’ again, and so I could not imagine how one could solve this problem. So the image in my head was very much like that baking powder box which Feynman evokes when talking about the method of imagesĀ to find the electric fields in situations with an easy geometry, so that’s the picture of a baking powder box which has on its label a picture of a baking powder box which has… Well… Etcetera. Unfortunately, my father didn’t push us to study math and, therefore, I knewĀ that one could solve such problems mathematically – we’re talking a converging series here – but I did notĀ know how, and that’s why I found it all veryĀ confusing.

Now I understand there’s one current only, and one potential difference only, and that the formulas do not involve some infinite series of terms. But… Well… I am not ashamed to say these problems are still testing the (limited) agility of my mind. The first thing to ā€˜get’ is that we’re talking a back emf, and so that’s not a current but a potential difference. In fact, as I explained in my post on the electromotive force, the term ā€˜force’ in emf is actually misleading, and may lead to that same erroneous vision that I had as a kid: forces generating counter-forces, that generate counter-forces, that generate counter-forces, etcetera. It’s not like that: we have some current – one current – in a coil and we’ll have some voltage – one voltage – across the coil. If the coil were a resistor instead of a coil, we’d find that the ratio of this voltage and the current would be some constant R = V/I. Now here we’re talking a coil indeed, so that’s a different circuit element, and we find some other ratio, L = āˆ’V/(dI/dt) = āˆ’Ę/(dI/dt). Why the minus sign? Well… As said, the induced emf will be such that it will tend to counter the change in the current, and current flows from positive to negative as per our convention.

But… Yes? So how does it work when we put this coil in some circuit, and how does the resistance of the inductor come into play? Relax. We’ve just been talking ideal circuit elements so far, and we’ve discussed only two: the resistor and the inductor. We’ll talk about voltage sources (or generators) and capacitors too, and then we’ll link all of these ideal circuit elements. In short, we’ll analyze some real-life electrical circuit soon, but first you need to understand the basics. Let me just note that an ideal inductor appears as a zero-resistance conductor in a direct-current (DC) circuit, so it’s a short-circuit really! Please try to mentally separate out those ā€˜ideal’ circuit components. Otherwise you’ll never be able to make sense of it all!

In fact, there’s a good reason why Feynman starts with explainingĀ mutual inductance before discussing a little circuit like the one below, which has an inductorĀ andĀ a voltage source. The two-coil situation above is effectively easier to understand, although it may not look like that at first. So let’s analyze that two-coil situation in more detail first. In other words, let me try to understand the situation that I didn’t understand as a kid. šŸ™‚

circuit with coil

Because of the law of superposition, we should add fluxes and changes in fluxes and, hence, we should also add the electromotive forces, i.e. the induced voltages. So, what we have here is that the total emf in coil 2 should be written as:

ʐ2Ā = M21Ā·(dI1/dt) +Ā M22Ā·(dI2/dt) =Ā M21Ā·(dI1/dt) – L2Ā·(dI2/dt)

What we’re saying here is that the emf, i.e. the voltage across the coil, will indeed depend on the change in current in the other coil, but also on the change in current of the coil itself. Likewise, the total emf in coil 1 should be written as:

ʐ1Ā = M12Ā·(dI2/dt) +Ā M11Ā·(dI1/dt) =Ā M12Ā·(dI2/dt) – L1Ā·(dI1/dt)

Of course, this does reduce to the simple L = āˆ’Ę/(dI/dt)Ā if there’s one coil only. But so you see where it comes from and, while we doĀ notĀ have some infinite series šŸ™‚ we do have a system of two equations here, and so let me say one or two things about it.

The first thing to note is that it is not so difficult to show thatĀ M21Ā is equal toĀ M12, so we can simplify and write that M21Ā = M12Ā = M. Now, while I said ‘not so difficult’, I didn’t mean it’s easy and, because I don’t want this post to become too long, I’ll refer you to Feynman for the proof of this M21Ā = M12Ā = M equation. It’s a general proof for any two coils or ‘circuits’ of arbitrary shape and it’s really worth the read. However, I have to move on.

The second thing to note is that this coefficient M, which is now referred to as the mutual inductance (one coefficient, singular, instead of two), depends on the ā€˜circuit geometry’ indeed. For a simple solenoid, Feynman calculates it as

M = āˆ’(1/ε0c²)Ā·(N1Ā·N2)Ā·S/l,

with l the length of the solenoid, S its surface area, and N1 and N2 the respective number of turns of the two coils. So, yes, only ā€˜geometry’ comes into play. [That’s quite obvious from the formula, because a switch of the subscripts of N1 and N2 makes no difference, of course!] Now, it’s interesting to note that M is the same for, let’s say, N1 = 100 and N2 = 10 and for N1 = 20 and N2 = 50. In fact, because you’re familiar with what transformers do, i.e. transforming voltages, you may think that’s counter-intuitive. It’s not. The fact that M stays the same does not imply that ʐ1 and ʐ2 remain the same: our set of equations is ʐ1 = MĀ·(dI2/dt) – L1Ā·(dI1/dt) and ʐ2 = MĀ·(dI1/dt) – L2Ā·(dI2/dt), and L1 and L2 clearly do vary as N1 and N2 vary! So… Well… Yes. We’ve got a set of two equations with two independent variables (I1 and I2) and two dependent variables (ʐ1 and ʐ2). Of course, we could also phrase the problem the other way around: given two voltages, what are the currents? šŸ™‚
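To make this less abstract, here is that set of two equations as a little function. The inductance values are illustrative only (chosen so that M ≤ √(L1Ā·L2), a constraint we’ll meet when we discuss the coupling constant):

```python
# Sketch: the two-coil equations as a pair of formulas. The inductance
# values are illustrative only; note that M = 0.06 <= sqrt(L1*L2) = 0.1.
L1, L2, M = 0.20, 0.05, 0.06   # all in henry

def emfs(dI1_dt, dI2_dt):
    """Return (emf1, emf2) for the given rates of change of the two currents."""
    emf1 = M * dI2_dt - L1 * dI1_dt
    emf2 = M * dI1_dt - L2 * dI2_dt
    return emf1, emf2

e1, e2 = emfs(10.0, -5.0)   # illustrative dI/dt values, in A/s
print(e1, e2)
```

Given the two rates of change of the currents, the two emfs follow directly; you could equally well invert the linear system to go from voltages to currents.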

Of course, that makes us think of the power that goes in and out of a transformer. Indeed, you’ll remember that power is voltage times current. So what’s going on here in regard to that?

Well… There’s a thing with transformers, or with two-coil systems like this in general, that is referred to as coupling. The geometry of the situation will determineĀ how much flux from one coil is linked with the flux of the other coil. If most, or all of it, is linked, we say the two coils are ‘tightly coupled’ or, in the limit, that they are fully coupled. There’s a measure for that, and it’s called theĀ coefficient of coupling. Let’s first explore that concept of power once more.

Inductance, energy and electric power

It’s easy to see that we needĀ electric powerĀ to get some current going. Now, as we pointed out in our previous post, the power is equal to the voltage times the current. It’s also equal, of course, to the amount of work done per second, i.e. the time rate of change of the energy W, so we write:

dW/dt = Ɛ·I

Now, we defined the self-inductance as L = āˆ’Ę/(dI/dt) and, therefore, we know that Ɛ = āˆ’LĀ·(dI/dt), so we have:

dW/dt = āˆ’LĀ·IĀ·(dI/dt)

What is this? A differential equation? Yes and no. We’ve got not one but two functions of time here, W and I, and, while their derivatives with respect to time do appear in the equation, what we need to do is just integrate the two sides over time. We get: W = āˆ’(1/2)Ā·LĀ·I². Just check it by taking the time derivative of both sides. Of course, we can add any constant, to both sides in fact, but that’s just a matter of choosing some reference point. We’ll choose our constant to be zero, and also think about the energy that’s stored in the coil, i.e. U, which we define as:

U = āˆ’W = (1/2)Ā·LĀ·I²

Huh?Ā What’s going on here? Well… It’s not an easy discussion, but let’s try to make sense of it. We have someĀ changingĀ current in the coil here but, obviously, some kind of inertia also: the coil itself opposes the change in current through the ‘back emf’. It requires energy, or power, to overcome the inertia. We may think of applying some voltage to offset the ‘back emf’, so we may effectively think of that little circuit with an inductorĀ and a voltage source. The voltage V we’d need to apply to offset the inertia would, obviously, be equal to the ‘back emf’, but with its sign reversed, so we have:

V = āˆ’Ā ĘĀ = LĀ·(dI/dt)

Now, it helps to think of what a current really is: it’s about electric charges that are moving at some velocity v because some force is being applied to them. As in any system, the power that’s being delivered is the dot product of the force and the velocity vectors (that ensures we only take the tangential component of the force into account). So, if we have n moving charges per unit length of the wire, the power that is being delivered to some infinitesimal element ds of the coil is (FĀ·v)Ā·nĀ·ds, which can be written as (FĀ·ds)Ā·nĀ·v. And what is F? It’s, obviously, qE, as the electric field is the force per unit charge: E = F/q. We then integrate over the whole coil to find:

power

Now, you may or may not remember that the emf (Ɛ) is actually defined as the line integral ∫E·ds, taken around the entire coil. Hence, noting that E = F/q, and that the current I is equal to I = q·n·v, we get our power equation. Indeed, the integrand or kernel of our integral becomes F·n·v·ds = q·E·n·v·ds = I·E·ds. Hence, we get our power formula indeed: P = V·I, with V the potential difference, i.e. the voltage across the coil.

I am getting too much into the weeds here. The point is: we’ve got a full and complete analog to the concept of inertia in mechanics here: instead of some forceĀ F causing someĀ massĀ m to change its velocity according to Newton’s Law, i.e. F = mĀ·a = mĀ·(dv/dt), we here have a potential difference V causing some current I to change according to the V = LĀ·(dI/dt) law.

This is very confusing but, remember, the same equations must have the same solutions! So, in an electric circuit, the inductance is really like what the mass is in mechanics. Now, in mechanics, we’ll say that our mass has some momentum p = mĀ·v, and we’ll also say that its kinetic energy is equal to (1/2)Ā·mĀ·v². We can do the same for our circuit: potential energy is continuously being converted into kinetic energy which, for our inductor, we write as U = (1/2)Ā·LĀ·I².

Just think about it by playing with one of the many online graphing tools. The graph below, for example, assumes the current builds up to some maximum. As it reaches its maximum, the stored energy will also max out. Now, you should not worry about the units here, or the scale of the graphs. The assumption is that I builds up from 0 to 1, and that L = 1, so that makes U what it is. Using a different constant for L, and/or different units for I, will change the scale of U too, but not its general shape, and that shape gives you the general idea.

power

The example above obviously assumes some direct current, so it’s a DC circuit: the current builds up, but then stabilizes at some maximum that we can find by applying Ohm’s Law to the resistance of the circuit: I = V/R. Resistance? But we were talking an ideal inductor? We are. If there’s no other resistance in the circuit, we’ll have a short-circuit, so the assumption is that we do have some resistance in the circuit and, therefore, we should also think of some energy loss to heat from the current in the resistance, but that’s not our worry here.
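You can see that build-up happen by integrating the circuit equation numerically. The sketch below assumes a DC voltage source V in series with a resistance R and an ideal inductor L (illustrative values), and checks the result against the analytical solution I(t) = (V/R)Ā·(1 āˆ’ e^(āˆ’RĀ·t/L)):

```python
import math

# Sketch: a DC voltage source V in series with a resistance R and an
# ideal inductor L (illustrative values). Euler-integrating the circuit
# equation L*(dI/dt) = V - R*I reproduces the exponential build-up of
# the current, and the stored energy U = (1/2)*L*I^2 maxes out with it.
V, R, L = 10.0, 5.0, 0.1   # volt, ohm, henry; time constant L/R = 20 ms
dt, I = 1e-6, 0.0

for _ in range(200000):          # simulate 0.2 s, i.e. ten time constants
    I += dt * (V - R * I) / L    # Euler step for L*(dI/dt) = V - R*I

I_exact = (V / R) * (1 - math.exp(-R * 0.2 / L))   # analytical solution
U = 0.5 * L * I**2               # energy stored in the inductor
print(I, I_exact, U)
```

After ten time constants the current has, for all practical purposes, reached its Ohm’s-Law maximum I = V/R = 2 A, and the stored energy its maximum U = (1/2)Ā·LĀ·I² ā‰ˆ 0.2 J.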

The illustration below is, perhaps, more interesting. Here we are, obviously, applying anĀ alternating current, and so the current goes in one and then in the other direction, so I > 0, and then I < 0, etcetera. We’re assuming some nice sinusoidal curve for the current here (i.e. the blue curve), and so we get what we get for U (i.e. the red curve): the energy goes up and down between zero and some maximum amplitude that’s determined by the maximum current.

power 2

So, yes, it is, after all, quite intuitive: building up a current does require energy from some external source, which is used to overcome the ā€˜back emf’ in the inductor, and that energy is stored in the inductor itself. [If you still wonder why it’s stored in the inductor, think about the other question: where else would it be stored?] How is it stored? Look at the graph and think: it’s stored as kinetic energy of the charges, obviously. That explains why the energy is zero when the current is zero, and why the energy maxes out when the current maxes out. So, yes, it all makes sense! šŸ™‚
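The same check works for the AC case — again a sketch with hypothetical values (L = 1 H, peak current 1 A, frequency 1 Hz): U(t) = (1/2)Ā·LĀ·I(t)² is zero whenever the current crosses zero and maxes out at the current peaks, exactly as the red curve shows.

```python
import math

# Hypothetical values: L = 1 H, peak current I0 = 1 A, frequency 1 Hz
L, I0, omega = 1.0, 1.0, 2.0 * math.pi

def I(t):
    """Sinusoidal (AC) current."""
    return I0 * math.sin(omega * t)

def U(t):
    """Stored energy U = (1/2)*L*I^2: oscillates between 0 and (1/2)*L*I0^2."""
    return 0.5 * L * I(t)**2

print(U(0.0))   # 0.0 : no current, no 'kinetic' energy
print(U(0.25))  # 0.5 : current at its peak, energy maxed out
print(U(0.5))   # ~0  : current crosses zero again
```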

Let’s now get back to that coupling constant.

The coupling constant

We can apply our reasoning to two coils. Indeed, we know that Ɛ1Ā = āˆ’MĀ·(dI2/dt) āˆ’ L1Ā·(dI1/dt) and Ɛ2Ā = āˆ’MĀ·(dI1/dt) āˆ’ L2Ā·(dI2/dt). So the power in the two-coil system is dW/dt = Ɛ1Ā·I1Ā + ʐ2Ā·I2, so we have:

dW/dt = āˆ’L1Ā·I1Ā·(dI1/dt) āˆ’ L2Ā·I2Ā·(dI2/dt) āˆ’ MĀ·I1Ā·(dI2/dt) āˆ’ MĀ·I2Ā·(dI1/dt)

= āˆ’L1Ā·I1Ā·(dI1/dt) āˆ’ L2Ā·I2Ā·(dI2/dt) āˆ’ MĀ·d(I1Ā·I2)/dt

Integrating both sides, and equating U with āˆ’W once more, yields:

U = (1/2)Ā·L1Ā·I1²Ā + (1/2)Ā·L2Ā·I2²Ā + MĀ·I1Ā·I2
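A quick numerical cross-check of this result — a sketch only, with hypothetical current ramps I1(t) = t and I2(t) = t² and illustrative constants: integrating dU/dt (obtained via the product rule on the MĀ·I1Ā·I2 term) reproduces the closed-form U.

```python
# Hypothetical current ramps and constants (illustrative only)
L1, L2, M = 2.0, 3.0, 1.5

def I1(t): return t      # I1(t) = t,   so dI1/dt = 1
def I2(t): return t**2   # I2(t) = t^2, so dI2/dt = 2t

def U(t):
    """Closed-form energy: U = (1/2)*L1*I1^2 + (1/2)*L2*I2^2 + M*I1*I2."""
    return 0.5 * L1 * I1(t)**2 + 0.5 * L2 * I2(t)**2 + M * I1(t) * I2(t)

def dU_dt(t):
    """Time derivative -- note the product rule on the M*I1*I2 term."""
    dI1, dI2 = 1.0, 2.0 * t
    return L1 * I1(t) * dI1 + L2 * I2(t) * dI2 + M * (dI1 * I2(t) + I1(t) * dI2)

# Trapezoidal integration of dU/dt from 0 to T gives back U(T) - U(0)
T, n = 1.0, 10000
h = T / n
integral = sum(0.5 * (dU_dt(i * h) + dU_dt((i + 1) * h)) * h for i in range(n))
print(abs(integral - (U(T) - U(0))) < 1e-6)  # True
```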

[Again, you should just take the time derivative to verify this. If you don’t forget to apply the product rule to the MĀ·I1Ā·I2Ā term, you’ll see I am not writing too much nonsense here.] Now, there’s an interestingĀ algebraic transformation of this expression, and an equally interesting explanation of whyĀ we’d rewrite the expression as we do. Let me copy it fromĀ Feynman, so I’ll be using his fancier L and M symbols now. šŸ™‚

explanation coupling constant

So what? Well… Taking into account that inequality above, we can write the relation between M and the self-inductancesĀ L1Ā andĀ L2Ā using some constant k, which varies between 0 and 1 and which we’ll refer to as theĀ coupling constant:

formula coupling constant 2

The name is rather obvious: if k is near zero, the mutual inductance will be very small, and if it’s near one, the coils are said to be ā€˜tightly coupled’, and the ā€˜mutual flux linkage’ is maximized. As you can imagine, there’s a whole body of literature out there relating this coupling constant to the behavior of transformers or other circuits where mutual inductance plays a role.
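In symbols, k = M/√(L1Ā·L2). A trivial helper, with illustrative values only:

```python
import math

def coupling_constant(M, L1, L2):
    """k = M / sqrt(L1 * L2); 0 <= k <= 1, with k near 1 for tightly coupled coils."""
    return M / math.sqrt(L1 * L2)

# Illustrative values (not from the text)
print(coupling_constant(0.1, 1.0, 1.0))   # 0.1  : weak coupling
print(coupling_constant(0.95, 1.0, 1.0))  # 0.95 : 'tightly coupled'
```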

The formula for self-inductance

We gave the formula for theĀ mutualĀ inductance of two coils that are arranged as one solenoid on top of the other (cf. the illustration I started with):

M = āˆ’(1/ε0c²)Ā·(N1Ā·N2)Ā·S/l

It’s a very easy calculation, so let me quickly copy it from Feynman:

calculation solenoid

You’ll say: where is the M here? This is a formula for the emf! It is, but M is the constant of proportionality in front, remember? So there you go. šŸ™‚
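Plugging in some hypothetical numbers — not from the text — shows the order of magnitude involved, using the magnitude of the formula above and noting that 1/(ε0Ā·c²) is just μ0:

```python
import math

eps0 = 8.854187817e-12  # permittivity of free space, F/m
c = 2.99792458e8        # speed of light, m/s

def mutual_inductance(N1, N2, S, length):
    """|M| = (1/(eps0*c^2)) * N1*N2*S / length for one solenoid wound on
    top of another (N1, N2 turns, cross-section S, common length)."""
    return (1.0 / (eps0 * c**2)) * N1 * N2 * S / length

# Hypothetical example: 100 and 200 turns, 1 cm^2 cross-section, 10 cm long
M = mutual_inductance(100, 200, 1e-4, 0.10)
print(M)  # about 2.5e-5 H, i.e. some 25 microhenry

# 1/(eps0*c^2) is just mu0 = 4*pi*1e-7:
print(abs(1.0 / (eps0 * c**2) - 4e-7 * math.pi) < 1e-12)  # True
```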

Now, you would think that getting a formula for the self-inductance L of some solenoid would be equally straightforward. It turns out that that isĀ notĀ the case. Feynman needs two full pages and… Well… By now, you should know how ā€˜dense’ his writing really is: if it weren’t so dense, you’d be reading Feynman yourself, rather than my ā€˜explanations’ of him. šŸ™‚ So… Well… If you want to see how it works, just click on the link hereĀ and scroll down to the last two pages of hisĀ exposĆ© on self-inductance. I’ll limit myself to just jotting down the formula he does obtain when he’s through the whole argument:

solenoid formula

See why he uses a fancier L than ā€˜my’ L? ā€˜His’ L is the length of the solenoid. šŸ™‚ And, yes, r is the radius of the coil and n the number of turnsĀ per unit lengthĀ in the winding. Also note this formula is valid only if the length L is much larger than the radius r (L >> r), so the effects at theĀ endĀ of the solenoid can be neglected. OK. Done. šŸ™‚
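For a long solenoid, the result amounts to L = μ0Ā·n²·π·r²·(length), with μ0 = 1/(ε0c²) and n the turns per unit length, as stated. A sketch with hypothetical numbers, chosen so the length-much-larger-than-radius condition holds:

```python
import math

mu0 = 4e-7 * math.pi  # = 1/(eps0*c^2)

def solenoid_self_inductance(n, r, length):
    """L = mu0 * n^2 * pi * r^2 * length, with n = turns per unit length.
    Valid only when length >> r, so end effects can be neglected."""
    return mu0 * n**2 * math.pi * r**2 * length

# Hypothetical solenoid: 1000 turns/m, radius 1 cm, 20 cm long (length >> r)
L = solenoid_self_inductance(1000, 0.01, 0.20)
print(L)  # roughly 7.9e-5 H, i.e. some 80 microhenry
```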

Well… That’s it for today! I am sorry to say but the next post promises to be as boring as this one because… Well… It’s on electric circuits again. 😦

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Reconciling the wave-particle duality in electromagnetism

As I talked about Feynman’s equation for electromagnetic radiation in my previous post, I thought I should add a few remarks on wave-particle duality, but I didn’t do it there, because my post would have become way too long. So let me add those remarks here. In fact, I’ve written about this before, and so I’ll just mention the basic ideas without going into too much detail.Ā Let me first jot down the formula once again, as well as illustrate the geometry of the situation:

formula

geometry

The gist of the matter is that light, in classical theory, is a traveling electromagnetic field caused by an accelerating electric charge and that, because light travels at speed c, it’s the acceleration at the retarded time t – r/c, i.e. a‘ = a(t – r/c), that enters the formula. You’ve also seen the diagrams that accompany this formula:

EM 1 EM 2

The two diagrams above show that the curve of the electric fieldĀ in spaceĀ is a ā€œreversedā€ plot of the acceleration as a function of time. As I mentioned before, that’s quite obvious from the mathematical behavior of a function with an argument like the one above, i.e. a function F(t – r/c). When we write t – r/c, we basically measureĀ distance in seconds, instead of in meters. So we basically useĀ cĀ as theĀ scale for both time and distance. I explained that in a previous post, so please have a look there if you want to see how that works.

So it’s pretty straightforward, really. However, having said that, when I see a diagram like the one above – or any of these diagrams plotting anĀ E or BĀ waveĀ in space – I can’t help thinking it’s somewhat misleading: after all, we’re talking about something traveling at the speed of light here and, therefore, its length – inĀ ourĀ frame of reference – should be zero. And it is, obviously. Electromagnetic radiation comes packed in point-like, dimensionless photons: the length of something that travels at the speed of lightĀ mustĀ be zero.

Now, I don’t claim to know what’s going onĀ exactly, but my thinking on it may not be far off the mark. We know that light is emitted and absorbed by atoms, as electrons go from one energy level to another, and the energy of theĀ photonsĀ of lightĀ corresponds to the differenceĀ between those energy levels (i.e. a few electron-volt only, typically: it’s given by the E = h·ν relation). Therefore, we can look at a photon as aĀ transient electromagnetic wave. It’s a very short pulse: the decay time for one such pulse ofĀ sodium light, i.e. one photon of sodium light,Ā isĀ 3.2Ɨ10–8Ā seconds. However, taking into account the frequency of sodium light (500 THz), that still makes for some 16 million oscillations, and a wave-train with a length of almost 10 meters. [Yes. Quite incredible, isn’t it?]Ā SoĀ the photon could look like the transient wave I depicted below, except… Well… This wave-train is traveling at the speed of light and, hence, we will not see it as a ten-meter long wave-train. Why not? Well…Ā Because of the relativistic length contraction, it will effectively appear as a point-like particle to us.

Photon wave

So relativistic length contraction is whyĀ the wave and particle duality can be easily reconciled in electromagnetism: we can think of light as an irregular beam of point-like photons indeed, as one atomic oscillator after the other releases a photon, in no particularly organized way. So we can think of photons asĀ transient wave-trains, but we should remind ourselves that they are traveling at the speed of light, so they’ll look point-like to us.
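The numbers quoted above for the sodium wave-train are easy to check:

```python
decay_time = 3.2e-8  # s  -- decay time for one pulse of sodium light
frequency = 5.0e14   # Hz -- ~500 THz
c = 3.0e8            # m/s

oscillations = decay_time * frequency  # number of oscillations in the wave-train
length = decay_time * c                # spatial length of the wave-train

print(oscillations / 1e6)  # 16 (million oscillations)
print(length)              # 9.6 (meters: 'almost 10 meters' indeed)
```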

Is such a view consistent with theĀ results of the famous – or should I say infamous? – double-slit experiment? Well… Maybe. As I mentioned in one of my posts, it is rather remarkable that it is actually hard to find double-slitĀ experiments that use actualĀ detectors near the slits, and even harder to find such experiments involving photons! Indeed, experiments involving detectors near the slits are usually experiments with ā€˜real’ particles, such as electrons, for example. Now, a lot of advances have been made in the set-up of these experiments over the past five years, and one of these experiments isĀ a 2010 experiment of an Italian teamĀ which suggests that it’s the interaction between the detector and the electron waveĀ that may cause the interference pattern to disappear. Now thatĀ throws some doubt on the traditional explanation of the results of the double-slit experiment.

The idea is shown below. The electron is depicted as an incoming plane wave which effectively breaks up as it goes through the slits. The slit on the left has no ā€˜filter’ (which you may think of as a detector) and, hence, the plane wave goes through as a cylindrical wave. The slit on the right-hand side isĀ covered by a ā€˜filter’ made of several layers of ā€˜low atomic number material’, so the electron goes through but, at the same time, the barrier creates a spherical wave as it goes through. The researchers note that ā€œthe spherical and cylindrical wave do not have any phase correlation, and so even if an electron passed through both slits, the two different waves that come out cannot create an interference pattern on the wall behind them.ā€ [I hope I don’t have to remind you that, while being represented as ā€˜real’ waves here, the ā€˜waves’ are, obviously, complex-valued psi functions.]

double-slit experiment

In fact, to be precise, the experimenters note that there still was an interference effect if the filter was thin enough. Let me quote the reason for that: ā€œThe thicker the filter, the greater the probability for inelastic scattering. When the electron suffers inelastic scattering, it is localized. This means that its wavefunction collapses and, after the measurement act, it propagates roughly as a spherical wave from the region of interaction, with no phase relation at all with other elastically or inelastically scattered electrons. If the filter is made thick enough, the interference effects cancels out almost completely.ā€

This does notĀ solve the ‘mystery’ of the double-slit experiment, but it throws doubt on how it’s usually being explained. The mystery in such experiments is that, when we put detectors, it isĀ eitherĀ theĀ detector at AĀ or the detector at B that goes off. They should never go off togetherā€”ā€at half strength, perhapsā€, as Feynman puts it. But so there areĀ doubts here now. Perhaps the electron doesĀ go through both slits at the same time! And soĀ that’s why I used italics when writing ā€œeven if an electron passed through both slitsā€: the electron, or the photon in a similar set-up, is not supposed to do that according to the traditional explanation of the results of the double-slit experiment! It’s one or the other, and the wavefunction collapsesĀ or reduces as it goes through.Ā 

However,Ā that’s where these so-called ā€˜weak measurement’ experiments now come in, like this 2010 experiment: it does not prove but indicates that interaction does not have to be that way. They strongly suggest that it is notĀ all or nothing, that our observations should not necessarily destroy the wavefunction. So, who knows, perhaps we will be able, one day, to show that the wavefunction does go through both slits, as it should (otherwise the interference pattern cannot be explained), and then we will have resolved the paradox.

I am pretty sure that, when that’s done, physicists will also be able to relate the image of a photon as a transient electromagnetic wave (cf. the diagram above), being emitted by an atomic oscillator for a few nanoseconds only (we gave the example for sodium light, for which the decay time was 3.2Ɨ10–8Ā seconds) with the image of a photon as a particle that can be represented by a complex-valued probability amplitude functionĀ (cf. the diagram below). I look forward to that day. I think it will come soon.

Photon wave

Here I should add two remarks. First, a lot hasĀ been said about the so-called indivisibility of a photon, but inelastic scattering implies that photons are notĀ monolithic: the photon loses energy to the electron and, hence, its wavelength changes. Now, you’ll say: the scattered photon is not the same photon as the incident photon, and you’re right. But… Well. Think about it. It does say something about the presumed oneness of a photon.

I15-72-Compton1

The other remark is on the mathematics of interference. Photons are bosons and, therefore, we have to add their amplitudes to get the interference effect. So you may try to think of an amplitude function, like ĪØ = (1/√(2Ļ€))Ā·e^(iĪø)Ā or whatever, and think it’s just a matter of ā€˜splitting’ this function before it enters the two slits and then ā€˜putting it back together’, so to say, after our photon has gone through the slits. [For the detailed math of interference in quantum mechanics, see my page on essentials.] Well… No. It’s not that simple. The illustration with that plane wave entering the slits, and the cylindrical and/or spherical wave coming out, makes it obvious that something happens to our wave as it goes through the slit. As I said a couple of times already, the two-slit experiment is interesting, but the interference phenomenon – or diffraction, as it’s called – involving one slit only is at least as interesting. So… Well… The analysis isĀ notĀ that simple. Not at all, really. šŸ™‚


The LiĆ©nard–Wiechert potentials and the solution for Maxwell’s equations

In my post on gauges and gauge transformations in electromagnetics, I mentioned the full and complete solution for Maxwell’s equations, using the electric and magnetic (vector) potential Φ and A. Feynman frames it nicely, so I should print it and put it on the kitchen door, so I can look at it everyday. šŸ™‚

frame

I should print the wave equation we derived in our previous post too. Hmm… Stupid question, perhaps, but why is there no wave equation above? I mean: in the previous post, we said the wave equation followed from Maxwell’s equations, didn’t we?Ā The answer is simple, of course: the wave equation describes waves originating from some source and traveling through free space, so that’s a special case. Here we have everything. Those integrals ā€˜sweep’ all over space, and that’sĀ realĀ space, which is full of moving charges, so there are waves everywhere. So the solution above is far more general and captures it all: it’s the potential at every point in space, and at every point in time, taking into account whatever else is there, moving or not moving. In fact, it isĀ the general solutionĀ of Maxwell’s equations.

How do we find it? Well… I could copy Feynman’s 21stĀ LectureĀ but I won’t do that. The solution is based on the formula for Φ and A for a small blob of charge, and then the formulas above just integrate over all of space. That solution for a small blob of charge, i.e. a point charge really, was first deduced in 1898, by a French engineer: Alfred-Marie LiĆ©nard. However, his equations did not get much attention, apparently, because a German physicist, Emil Johann Wiechert, worked on the same thing and found the very same equations just two years later. That’s why they are referred to as theĀ LiĆ©nard-Wiechert potentials, so they both get credit for it, even if both of them worked it out independently. These are the equations:

electric potential

magnetic potential

Now, you may wonder why I am mentioning them, and you may also wonder how we get those integrals above, i.e. our general solution for Maxwell’s equations, from them. You can find the answer to your second question in Feynman’s 21st Lecture. šŸ™‚ As for the first question, I mention them because one can derive two other formulas for E and B from them. It’s the formulas that Feynman uses in his firstĀ Volume, when studying light:

E

B

Now you’ll probably wonder how we can get these twoĀ equations from theĀ LiĆ©nard-Wiechert potentials. They don’t look very similar, do they? No, they don’t. Frankly, I would like to give you the same answer as above, i.e. check it in Feynman’s 21st Lecture, but the truth is that the derivation is so long and tedious that even Feynman says one needs “a lot of paper and a lot of time” for that. So… Well… I’d suggest we just use all of those formulas and not worry too much about where they come from. If we can agree on that, we’re actually sort of finished with electromagnetism. All the chapters that follow Feynman’s 21stĀ LectureĀ are applicationsĀ indeed, so they do not add all that much to the coreĀ of the classicalĀ theory of electromagnetism.

So why did I write this post? Well… I am not sure. I guess I just wanted to sum things up for myself, so I can print it all out and put it on the kitchen door indeed. šŸ™‚ Oh, and now that I think of it, I should add one more formula, and that’s the formula for sphericalĀ waves (as opposed to theĀ planeĀ waves we discussed in my previous post). It’s a very simple formula, and entirely what you’d expect to see:

spherical wave

The S function is the source function, and you can see that the formula is a Coulomb-like potential, but with the retarded argument. You’ll wonder: what is ψ? Is it E or B or what? Well… You can just substitute: ψ can be anything. Indeed, Feynman gives a very general solution for any type ofĀ spherical wave here. šŸ™‚

So… That’s it, folks. That’s all there is to it. I hope you enjoyed it. šŸ™‚

Addendum: Feynman’s equation for electromagnetic radiation

I talked about Feynman’s formula for electromagnetic radiation before, but it’s probably good to quickly re-explain it here. Note that it talks about the electric field only, as the magnetic field is so tiny and, in any case, if we have E then we can find B. So the formula is:

E

The geometry of the situation is depicted below. We have some charge q that, we assume, is moving through space, and so it creates some field E at point P. TheĀ er‘Ā vector is the unit vector from P towards the charge. Well… It points to where the charge wasĀ a little while ago, i.e. at the timeĀ t – r‘/c. Why? Well… Because the field needs some time to travel, we don’t know where q is right now, i.e. at time t. It might be anywhere. Perhaps it followed some weird trajectory during the time r‘/c, like the trajectory below.

radiation formula

So our er‘Ā vector moves as the charge moves, and so it will also have some velocity and, likely, some acceleration, but what we measure for its velocity and acceleration, i.e. the d(er‘)/dt and d²(er‘)/dt²Ā in that Feynman equation, is also theĀ retardedĀ velocity and theĀ retardedĀ acceleration. But look at the terms in the equation. The first two terms have a 1/r′²Ā in them, so these two effects diminish with the square of the distance. The first term is just Coulomb’s Law (note that the minus sign in front takes care of the fact that like chargesĀ repel, so the E vector will point the other way). Well… It is and it isn’t, because of the retarded time argument, of course. And so we have the second term, which sort of compensates for that. Indeed, d(er‘)/dt is the time rate of change of er‘Ā and, hence, if r‘/c = Ī”t, then (r‘/c)Ā·d(er‘)/dt is a first-order approximation of Ī”er‘.

As Feynman puts it: “The second term is as though nature were trying to allow for the fact that the Coulomb effect is retarded, if we might put it very crudely. It suggests that we should calculate the delayed Coulomb field but add a correction to it, which is its rate of change times the time delay that we use. Nature seems to be attempting to guess what the field at the present time is going to be, by taking the rate of change and multiplying by the time that is delayed.”Ā In short, the first two terms can be written as E = āˆ’(q/4πε0)/r2Ā·[erĀ +Ā Ī”er] and, hence, it’s a sort of modified Coulomb Law that sort of tries to guess what the electrostatic field at P should be based on (a) what it is right now, and (b) howĀ q’s direction and velocity, as measured now, would change it.
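That ā€œguessā€ is just a first-order Taylor extrapolation: the retarded value plus its rate of change times the delay. A minimal numerical sketch, with a hypothetical smooth function standing in for a component of er′:

```python
import math

def f(t):
    """Hypothetical smooth stand-in for a component of the unit vector er'."""
    return math.sin(t)

def df(t):
    return math.cos(t)

t, delay = 1.0, 0.01  # 'delay' plays the role of r'/c

retarded = f(t - delay)                       # the plain retarded value
corrected = retarded + delay * df(t - delay)  # + rate of change times the delay

# The corrected 'guess' is far closer to the present value than the raw retarded one
print(abs(f(t) - retarded))   # ~5.4e-3
print(abs(f(t) - corrected))  # ~4.2e-5, two orders of magnitude better
```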

Now, the third term has a 1/c2Ā factor in front but, unlike the other two terms, this effect doesĀ notĀ fall off with distance. So the formula belowĀ fully describes electromagnetic radiation, indeed, because it’sĀ the only important term when we get ‘far enough away’, with ‘far enough’ meaning that the parts that go as the square of the distance have fallen off so much that they’re no longer significant.
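The first two terms carry a 1/r′² and so die off fast, while the radiation term ends up going as 1/r only. A minimal numerical comparison — with arbitrary, hypothetical unit amplitudes — shows why only the 1/r piece survives far away:

```python
# Arbitrary, hypothetical unit amplitudes: only the falloff with r matters here
def near_field(r):
    """Coulomb-like pieces: fall off as 1/r^2."""
    return 1.0 / r**2

def radiation(r):
    """Tangential (radiation) piece: falls off as 1/r only."""
    return 1.0 / r

for r in (1.0, 10.0, 1000.0):
    # The ratio radiation/near_field grows like r: far away, only radiation matters
    print(r, radiation(r) / near_field(r))
```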

radiation formula 2

Of course, you’re smart, and so you’ll immediately note that, as r increases, that unit vector keeps wiggling but that effect will also diminish. You’re right. It does, but in a fairly complicated way.Ā The acceleration ofĀ er‘Ā has two components indeed. One is the transverse or tangential piece, because the end of er‘Ā goes up and down, and the other is a radial piece, because it stays on a sphere and so it changes direction. The radial piece is the smallest bit, and actually varies as the inverse square ofĀ rĀ when rĀ is fairly large. The tangential piece, however, varies only inversely as the distance, so as 1/r. So, yes, the wigglings of er‘Ā look smaller and smaller, inversely as the distance, but the tangential piece is and remains significant, because it does not vary as 1/r² but as 1/r only. That’s why you’ll usually see the law of radiation written in an even simpler way:

final law of radiation

This law reduces the whole effect to the component of the acceleration that is perpendicular to the line of sightĀ only. It assumes the distance is huge as compared to the distance over whichĀ the charge is moving and, therefore, that r‘ and r can be equated for all practical purposes. It also notes that the tangential piece is all that matters, and so it equatesĀ d²(er‘)/dt²Ā with ax/r. The whole thing is probably best illustrated as below: we have aĀ generatorĀ driving charges up and down in G – so it’s an antennaĀ really – and so we’ll measure a strong signal when putting the radiation detector D in position 1, but we’ll measure nothing in position 3. [The detector is, of course, another antenna, but with an amplifier for the signal.] But here I am starting to talk about electromagnetic radiation once more, which wasĀ notĀ what I wanted to do here, if only because Feynman does a much better job at that than I could ever do. šŸ™‚

radiator


Traveling fields: the wave equation and its solutions

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. In addition, I note the dark force has amused himself by removing some material. So no use to read this. Read my recent papers instead. šŸ™‚

Original post:

We’ve climbed a big mountain over the past few weeks, post by post, šŸ™‚ slowly gaining height, and carefully checking out the various routes to the top. But we are there now: we finally fully understand how Maxwell’s equations actuallyĀ work. Let me jot them down once more:

Maxwell's equations

As for how real or unreal the E and B fields are, I gave you Feynman’s answer to it, so… Well… I can’t add to that. I should just note, or remind you, that we have a fully equivalent description of it all in terms of the electric and magnetic (vector) potential Φ and A, and so we can ask the same question about Φ and A. They explain realĀ stuff, so they’reĀ real in that sense. That’s what Feynman’s answer amounts to, and I am happy with it. šŸ™‚

What I want to do here is show how we can get from those equations to some kind of wave equation: an equation that describes how a field actuallyĀ travelsĀ through space. So… Well…Ā Let’s first look at that very particular wave function we used in the previous post to prove that electromagnetic waves propagate with speedĀ c, i.e. the speed of light. The fields were very simple: the electric field hadĀ a y-component only, and the magnetic field a z-component only. Their magnitudes – in the region the fields had already reached as they travel outwards – were given in terms of J, i.e. the surface current density going in the positive y-direction, and the geometry of the situation is illustrated below.

equation

sheet of charge

The fields were, obviously, zero where the fields had not reached as they were traveling outwards. And, yes, I know that sounds stupid. But… Well… It’s just to make clear what we’re looking at here. šŸ™‚

We also showed what the wave would look like if we turned off itsĀ First CauseĀ after some time T, i.e. if the moving sheet of charge would no longer move after time T. We’d have the followingĀ pulse traveling through space, a rectangular shapeĀ really:

wavefront

We can imagine more complicated shapes for the pulse, like the shape shown below. J goes from one unit to two units at time t =Ā t1Ā and then to zero at t =Ā t2. Now, the illustration on the right shows the electric field as a function of x at the time t shown by the arrow. We’ve seen this before when discussing waves: if the speed of travel of the wave is equal toĀ c, then x is equal to x = cĀ·t, and the pattern is as shown below indeed: it mirrors what happenedĀ at the source x/c secondsĀ ago. So we write:

equation 2

12

This idea of using theĀ retardedĀ time t’ = t āˆ’ x/c in the argument of a wave function f – or, what amounts to the same, using x āˆ’ cĀ·t – is key to understanding wave functions. I’ve explained this in veryĀ simple language in a post for my kids and, if you don’t get this, I recommend you check it out. What we’re doing, basically, isĀ converting something expressed in time units intoĀ something expressed in distance units, or vice versa, using theĀ velocityĀ of the wave as the scale factor, so time and distance are bothĀ expressed in the same unit, which may be seconds, or meters.

To see how it works, suppose we addĀ some timeĀ Ī”t to the argument of our wave function f, so we’re looking at f[xāˆ’c(t+Ī”t)] now, instead of f(xāˆ’ct). Now, f[xāˆ’c(t+Ī”t)]Ā = f(xāˆ’ctāˆ’cĪ”t), so we’ll get a different valueĀ for our function—obviously! But it’s easy to see that we canĀ restore our wave function fĀ to its former value by alsoĀ adding some distanceĀ Ī”x =Ā cĪ”t to the argument. Indeed, if we do so, we get f[x+Ī”xāˆ’c(t+Ī”t)] = f(x+cĪ”t–ctāˆ’cĪ”t) = f(x–ct). You’ll say: t āˆ’ x/cĀ is not the same asĀ x–ct. It is and it isn’t: any function of x–ct is also a function of t āˆ’ x/c, because we can write:

capture

Here, I need to add something about the directionĀ of travel. The pulse above travels in the positive x-direction, so that’s why we haveĀ x minusĀ ct in the argument. For a wave traveling in the negative x-direction, we’d have a wave function y = F(x+ct).Ā In any case, I can’t dwell on this, so let me move on.
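The shift-invariance described above is easy to verify numerically — a sketch with a hypothetical Gaussian pulse, and units chosen so c = 1:

```python
import math

c = 1.0  # units chosen so c = 1

def f(u):
    """Hypothetical pulse shape: any function of the single argument u = x - c*t."""
    return math.exp(-u**2)

def wave(x, t):
    return f(x - c * t)

x, t, dt = 2.0, 1.0, 0.5
# Shifting time by dt AND position by dx = c*dt restores the wave to its former value
print(wave(x + c * dt, t + dt) == wave(x, t))  # True
# Shifting time alone changes it: the pulse has moved on
print(wave(x, t + dt) == wave(x, t))           # False
```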

Now, Maxwell’s equations in free or empty space, where there areĀ no charges or currentsĀ to interact with, reduce to:

Maxwell in free space

Now, how can we relate this set of complicated equations to a simple wave function? Let’s do the exercise for our simple EyĀ and BzĀ wave. Let’s start by writing out the first equation, i.e.Ā āˆ‡Ā·E = 0, so we get:

f1

Now, our wave does notĀ vary in the y- and z-directions, so none of the components, including EyĀ and Ez, depends on y or z. It only varies in theĀ x-direction, soĀ āˆ‚Ey/āˆ‚y and āˆ‚Ez/āˆ‚z are zero. Note that the cross-derivatives āˆ‚Ey/āˆ‚z and āˆ‚Ez/āˆ‚y are also zero: we’re talking a plane wave here, and the field varies only with x. However, because āˆ‡Ā·E = 0,Ā āˆ‚Ex/āˆ‚x must be zero and, hence, ExĀ must be zero.

Huh?Ā What?Ā How is that possible? You just said that our field doesĀ vary in the x-direction! And now you’re saying it doesn’t? Read carefully. I know it’s complicated business, but it all makes sense. Look at the function: we’re talking Ey, notĀ Ex. EyĀ does vary as a function of x, but our field does not have an x-component, so ExĀ = 0. And there is no cross-derivative āˆ‚Ey/āˆ‚x in the divergence of E (i.e. inĀ āˆ‡Ā·E = 0).

Huh?Ā What?Ā Let me put it differently.Ā E has three components: Ex,Ā EyĀ andĀ Ez, and we have three space coordinates: x, y and z, so we have nine derivatives. What I am saying is that allĀ derivatives with respect to y and z are zero. That still leaves us with threeĀ derivatives:Ā āˆ‚Ex/āˆ‚x, āˆ‚Ey/āˆ‚x, andĀ āˆ‚Ez/āˆ‚x. So… Because all derivatives with respect to y and z are zero, and because of the āˆ‡Ā·E = 0 equation, we knowĀ thatĀ āˆ‚Ex/āˆ‚x must be zero.Ā So, to make a long story short, I did notĀ say anything aboutĀ āˆ‚Ey/āˆ‚x or āˆ‚Ez/āˆ‚x. These may still be whatever they want to be, and they may vary in more or in less complicated ways. I’ll give an example of that in a moment.

Having said that, I do agree that I was a bit quick in writing that, because āˆ‚Ex/āˆ‚x = 0, ExĀ must be zero too. Looking at the math only,Ā ExĀ is not necessarilyĀ zero: it might be some non-zero constant. So… Yes. That’s a mathematical possibility. The static field from some charged condenser plate would be an example of a constantĀ ExĀ field. However, the point is that we’re notĀ looking at such static fields here: we’re talking dynamicsĀ here, and we’re looking at a particular type of wave: we’re talking a so-called planeĀ wave. Now, the wave front of a plane wave is… Well… A plane. šŸ™‚ SoĀ ExĀ is zero indeed. It’s a general result for planeĀ waves: the electric field of a planeĀ wave will always be at right angles to the direction of propagation.

Hmm… I can feel your skepticism here. You’ll say I am arbitrarily restricting the field of analysis… Well… Yes. For the moment. It’s a reasonable restriction, though. As I mentioned above, the field of a plane wave may still vary in both the y- and z-directions, as shown in the illustration below (for which the credit goes to Wikipedia), which visualizes the electric field of circularly polarized light. In any case, don’t worry too much about it. Let’s get back to the analysis. Just note we’re talking plane waves here. We’ll talk about non-plane waves, i.e.Ā incoherentĀ light waves, later. šŸ™‚

circular polarization

So we have plane waves and, therefore, a so-called transverse E field, which we can resolve in two components: EyĀ and Ez. However, we wanted to study a very simple EyĀ field only. Why? Remember the objective of this lesson: it’s just to show how we go from Maxwell’s equations to the wave function, so let’s keep the analysis as simple as we can for now: we can make it more general later.Ā In fact, ifĀ we do the analysis now for non-zero EyĀ and zero Ez, we can do a similar analysis forĀ non-zero EzĀ and zero Ey, and the general solution is going to be some superposition of two such fields, so we’ll have a non-zeroĀ EyĀ andĀ Ez. Capito? šŸ™‚ So let me write out Maxwell’s second equation, and use the results we got above, so I’llĀ incorporate the zero values for the derivatives with respect to y and z, and also the assumption that EzĀ is zero. So we get:

f3

[By the way: note that, out of the nine derivatives, the curl involves only the (six) cross-derivatives. That’s linked to the neat separation between the curl and the divergence operator. Math is great! :-)]

Now, because of the flux rule (āˆ‡Ć—EĀ = ā€“āˆ‚B/āˆ‚t), we can (and should) equate the three components of āˆ‡Ć—E above with the three components of ā€“āˆ‚B/āˆ‚t, so we get:

f4

[In case you wonder what it is that I am trying to do: patience, please! We’ll get where we want to get. Just hang in there and read on.] Now, āˆ‚Bx/āˆ‚t = 0 and āˆ‚By/āˆ‚t = 0 do not necessarily imply that BxĀ andĀ ByĀ are zero: there might be some magnets and, hence, some constant static field. However, that’s a matter of choosing a reference point or, more simply, of assuming that empty space is effectively empty: we don’t have magnets lying around, and so we assume that BxĀ andĀ ByĀ are effectively zero. [Again, we can always throw more stuff in when our analysis is finished, but let’s keep it simple and stupid right now, especially because the BxĀ = ByĀ = 0 assumption is entirely in line with the ExĀ = EzĀ = 0 assumption.]

The equations above tell us what we know already: the E and B fields are at right angles to each other. However, note, once again, that this is a more general result for all planeĀ electromagneticĀ waves, so it’s not only that very special caterpillar or butterfly field that we’re looking at. [If you didn’t read my previous post, you won’t get the pun, but don’t worry about it. You need to understand the equations, not the silly jokes.]

OK. We’re almost there. Now we need Maxwell’s last equation. When we write it out, we get the following monstrous-lookingĀ set of equations:

f5

However, because of all of the equations involving zeroes above šŸ™‚ only āˆ‚Bz/āˆ‚x is notĀ equal to zero, so the whole set reduces to one simple equation:

f6

Simplifying assumptions are great, aren’t they? šŸ™‚ Having said that, it’s easy to be confused. You should watch out for the denominators: a āˆ‚x and a āˆ‚t are two veryĀ different things. So we have two equations now involving first-orderĀ derivatives:

  1. āˆ‚Bz/āˆ‚t = āˆ’āˆ‚Ey/āˆ‚x
  2. c²·āˆ‚Bz/āˆ‚x = āˆ’āˆ‚Ey/āˆ‚t
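As a quick sanity check (a numerical sketch, not part of the derivation), we can verify with finite differences that a travelling sinusoid EyĀ = E0Ā·sin(k(x āˆ’ ct)), with BzĀ = Ey/c, satisfies both āˆ‚Bz/āˆ‚t = āˆ’āˆ‚Ey/āˆ‚x and c²·āˆ‚Bz/āˆ‚x = āˆ’āˆ‚Ey/āˆ‚t. The values of k, E0 and the sample point are arbitrary choices:

```python
import math

c = 1.0    # wave speed in natural units
k = 2.0    # wave number (arbitrary)
E0 = 1.0   # amplitude (arbitrary)
h = 1e-5   # finite-difference step

Ey = lambda x, t: E0 * math.sin(k * (x - c * t))  # wave travelling in +x
Bz = lambda x, t: Ey(x, t) / c                    # magnetic field, magnitude E/c

def d_dx(f, x, t): return (f(x + h, t) - f(x - h, t)) / (2 * h)
def d_dt(f, x, t): return (f(x, t + h) - f(x, t - h)) / (2 * h)

x, t = 0.4, 1.3
r1 = d_dt(Bz, x, t) + d_dx(Ey, x, t)         # āˆ‚Bz/āˆ‚t + āˆ‚Ey/āˆ‚x
r2 = c**2 * d_dx(Bz, x, t) + d_dt(Ey, x, t)  # c²·āˆ‚Bz/āˆ‚x + āˆ‚Ey/āˆ‚t
print(abs(r1), abs(r2))  # both ~0: the plane wave satisfies both equations
```

Both residuals vanish up to discretization error, for any x and t you pick.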

So what? Patience,Ā please!Ā šŸ™‚ Let’s differentiate the first equation with respect to x and the second with respect to t. Why? Because… Well… You’ll see. Don’t complain. It’s simple. Just do it. We get:

  1. āˆ‚[āˆ‚Bz/āˆ‚t]/āˆ‚x = āˆ’āˆ‚Ā²Ey/āˆ‚x²
  2. āˆ‚[c²·āˆ‚Bz/āˆ‚x]/āˆ‚t = āˆ’āˆ‚Ā²Ey/āˆ‚t²

So we can equate the left-hand sides of our two equations now (they are the same mixed derivative of Bz, up to the factor c²), and what we get is a differential equation of the second order that we’ve encountered already, when we were studying wave equations. In fact, itĀ isĀ the wave equation for one-dimensional waves:

f7

In case you want to double-check, I did a few posts on this but, if you don’t get this, well… I am sorry. You’ll need to do some homework. In particular, you’ll need to do some homework on differential equations. The equation above is basically some constraintĀ on the functional form of Ey. More generally, if we see an equation like:

f8

then the function ψ(x, t) must be of the form

solution

So any function ψ like that will work. You can check it out by doing the necessary derivatives and plugging them into the wave equation. [In case you wonder how you should go about this, Feynman actually does it for you in his Lecture on this topic, so you may want to check it there.]
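If you don’t feel like doing the derivatives by hand, a few lines of Python will do them numerically. This is only a sketch, with an arbitrary Gaussian ā€˜shape’ and natural units c = 1:

```python
import math

c = 1.0   # wave speed in natural units
h = 1e-3  # step for the central second differences

def f(u):
    return math.exp(-u**2)  # any smooth 'shape' will do; a Gaussian here

def psi(x, t):
    return f(x - c * t)  # the shape, travelling in the +x direction

x, t = 0.3, 0.7
d2x = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h**2  # āˆ‚Ā²Ļˆ/āˆ‚x²
d2t = (psi(x, t + h) - 2 * psi(x, t) + psi(x, t - h)) / h**2  # āˆ‚Ā²Ļˆ/āˆ‚t²

# the wave equation: āˆ‚Ā²Ļˆ/āˆ‚x² = (1/c²)Ā·āˆ‚Ā²Ļˆ/āˆ‚t²
print(abs(d2x - d2t / c**2))  # ~0, up to discretization error
```

Swap in any other smooth shape for f and the residual stays (numerically) zero, which is the point: the equation constrains the functional form, not the shape itself.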

In fact, the functionsĀ f(x āˆ’ ct) andĀ g(x + ct) themselves will also work as possible solutions. So we can drop one or the other, which amounts to saying that our ā€˜shape’ has to travel in some direction, rather than in both at the same time.Ā šŸ™‚ Indeed, from all of my explanations above, you know whatĀ f(x āˆ’ ct) represents: it’s a wave that travels in the positive x-direction. Now, it may be periodic, but it doesn’tĀ haveĀ to be periodic. TheĀ f(x āˆ’ ct) function could representĀ any constant ā€˜shape’Ā that’s traveling in the positive x-direction at speed c. Likewise, theĀ g(x + ct) function could representĀ any constant ā€˜shape’Ā that’s traveling in the negativeĀ x-direction at speed c. As for super-imposing both…

Well… I suggest you check that post I wrote for my son, Vincent. It’s on the math of waves, but it doesn’t have derivatives and/or differential equations. It just explains how superposition and all that works. It’s not very abstract, as it revolves around a vibrating guitar string. So, if you have trouble with all of the above, you may want to read that first. šŸ™‚ The bottom line is that we can get anyĀ wavefunction we want by superimposing simple sinusoidals traveling in one or the other direction, and that’s what the more general solution really says. Full stop. So that’s what we’re doing really: we add very simple waves to get more complicated waveforms. šŸ™‚
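To make the superposition point concrete, here’s a small sketch. The particular sinusoids are the first few Fourier terms of a square wave, an arbitrary choice; what matters is that a sum of simple sines, all of the form f(x āˆ’ ct), is itself just a shape that translates at speed c:

```python
import math

c = 1.0  # wave speed in natural units

def wave(x, t):
    # superposition of simple sinusoids, all of the form f(x - ct):
    # the first few Fourier terms of a square wave (arbitrary choice)
    return sum(math.sin((2 * n + 1) * (x - c * t)) / (2 * n + 1)
               for n in range(4))

# the composite shape is not a simple sinusoid anymore, but it still
# just translates in the +x direction at speed c:
x, t, dt = 0.2, 0.0, 0.5
diff = wave(x + c * dt, t + dt) - wave(x, t)
print(abs(diff))  # ~0, up to floating-point rounding
```

Because the wave equation is linear, any such sum of solutions is again a solution, and with enough terms you can build essentially any waveform.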

Now, I could leave it at this, but then it’s veryĀ easy to just go one step further, and that is to assume that EzĀ and, therefore, ByĀ are not zero. It’s just a matter of super-imposing solutions. Let me just give you the general solution. Just look at it for a while. If you understood all that I’ve said above, 20 seconds or so should be sufficient to say: “Yes, that makes sense. That’s the solution in two dimensions.” At least, I hope so! šŸ™‚

General solution two dimensions

OK. I should really stop now. But… Well… Now that we’ve got a general solution for all plane waves, why not be even bolder and think about what we could possibly say aboutĀ three-dimensionalĀ waves? So then ExĀ and, therefore, BxĀ would not necessarily be zero either. After all, light can behave that way. In fact, light is likely to beĀ non-polarizedĀ and, hence, ExĀ and, therefore, BxĀ are most probably not equal toĀ zero!

Now, you may think the analysis is going to be terribly complicated. And you’re right. It would beĀ if we’d stick to our analysis in terms of x, y and z coordinates. However, it turns out that the analysis in terms of vector equations is actually quite straightforward. I’ll just copy the Master here, so you can see His Greatness. šŸ™‚

waves in three dimensions

But what solution does an equation like (20.27) have? We can appreciate it’s actually three equations, i.e. one for each component, and so… Well… Hmm… What can we say about that? I’ll quote the Master on this too:

“How shall we find the general wave solution? The answer is that all the solutions of the three-dimensional wave equation can be represented as a superposition of the one-dimensional solutions we have already found. We obtained the equation for waves which move in the x-direction by supposing that the field did not depend on yĀ andĀ z. Obviously, there are other solutions in which the fields do not depend on xĀ andĀ z, representing waves going in the y-direction. Then there are solutions which do not depend on xĀ andĀ y, representing waves travelling in the z-direction. Or in general, since we have written our equations in vector form, the three-dimensional wave equation can have solutions which are plane waves moving in any direction at all. Again, since the equations are linear, we may have simultaneously as many plane waves as we wish, travelling in as many different directions. Thus the most general solution of the three-dimensional wave equation is a superposition of all sorts of plane waves moving in all sorts of directions.”

It’s the same thing once more:Ā we add very simple waves to get more complicated waveforms. šŸ™‚

You must have fallen asleep by now or, else, be watching something else. Feynman must have felt the same.Ā After explaining all of the nitty-gritty above, Feynman wakes up his students. He does so by appealing to their imagination:

“Try to imagine what the electric and magnetic fields look like at present in the space in this lecture room. First of all, there is a steady magnetic field; it comes from the currents in the interior of the earth—that is, the earth’s steady magnetic field. Then there are some irregular, nearly static electric fields produced perhaps by electric charges generated by friction as various people move about in their chairs and rub their coat sleeves against the chair arms. Then there are other magnetic fields produced by oscillating currents in the electrical wiring—fields which vary at a frequency of 6060Ā cycles per second, in synchronism with the generator at Boulder Dam. But more interesting are the electric and magnetic fields varying at much higher frequencies. For instance, as light travels from window to floor and wall to wall, there are little wiggles of the electric and magnetic fields moving along at 186,000Ā miles per second. Then there are also infrared waves travelling from the warm foreheads to the cold blackboard. And we have forgotten the ultraviolet light, the x-rays, and the radiowaves travelling through the room.

Flying across the room are electromagnetic waves which carry music of a jazz band. There are waves modulated by a series of impulses representing pictures of events going on in other parts of the world, or of imaginary aspirins dissolving in imaginary stomachs. To demonstrate the reality of these waves it is only necessary to turn on electronic equipment that converts these waves into pictures and sounds.

If we go into further detail to analyze even the smallest wiggles, there are tiny electromagnetic waves that have come into the room from enormous distances. There are now tiny oscillations of the electric field, whose crests are separated by a distance of one foot, that have come from millions of miles away, transmitted to the earth from the Mariner II space craft which has just passed Venus. Its signals carry summaries of information it has picked up about the planets (information obtained from electromagnetic waves that travelled from the planet to the space craft).

There are very tiny wiggles of the electric and magnetic fields that are waves which originated billions of light years away—from galaxies in the remotest corners of the universe. That this is true has been found by ā€œfilling the room with wiresā€ā€”by building antennas as large as this room. Such radiowaves have been detected from places in space beyond the range of the greatest optical telescopes. Even they, the optical telescopes, are simply gatherers of electromagnetic waves. What we call the stars are only inferences, inferences drawn from the only physical reality we have yet gotten from them—from a careful study of the unendingly complex undulations of the electric and magnetic fields reaching us on earth.

There is, of course, more: the fields produced by lightning miles away, the fields of the charged cosmic ray particles as they zip through the room, and more, and more. What a complicated thing is the electric field in the space around you! Yet it always satisfies the three-dimensional wave equation.”

So… Well… That’s it for today, folks. šŸ™‚ We have some more gymnastics to do, still… But we’re really there. Or here, I should say: on top of the peak. What a view we have here! Isn’t it beautiful? It took us quite some effort to get on top of this thing, and we’re still trying to catch our breath as we struggle with what we’ve learned so far, but it’s really worthwhile, isn’t it? šŸ™‚

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/
Some content on this page was disabled on June 20, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Maxwell’s equations and the speed of light

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. In addition, I note the dark force has amused himself by removing some material, which messed up the lay-out of this post as well. So no use to read this. Read my recent papers instead. šŸ™‚

Original post:

We know how electromagnetic waves travel through space: they do so because of the mechanism described in Maxwell’s equations: a changing electric field causes a (changing) magnetic field, and a changing magnetic field causes a (changing) electric field, as illustrated below.

Maxwell interaction

So we needĀ someĀ First Cause to get it all startedĀ šŸ™‚ i.e. some current, i.e. some moving charge, but then the electromagnetic wave travels, all by itself, through empty space, completely detached from the cause. You know that by now – indeed, you’ve heard this a thousand times before – but, if you’re reading this, you want to know how it worksĀ exactly. šŸ™‚

In my post on the Lorentz gauge, I included a few links to Feynman’sĀ LecturesĀ that explain the nitty-gritty of this mechanism from various angles. However, they’re pretty horrendous to read, and so I just want to summarize them a bit—if only for myself, so as to remind myself what’s important and not. In this post, I’ll focus on the speed of light: why do electromagnetic waves – light – travel at the speed of light?

You’ll immediately say: that’s a nonsensical question. It’s light, so it travels at the speed of light. Sure, smart-arse!Ā Let me be more precise: how can we relate the speed of light to Maxwell’s equations? That is the question here. Let’s go for it.

Feynman deals with the matter of the speed of an electromagnetic wave, and the speed of light, in a rather complicated exposé on the fieldsĀ from someĀ infinite sheet of charge that is suddenly set into motion, parallel to itself, as shown below. The situation looks – and actually is – very simple, but the math is rather messy because of the rather exotic assumptions: infinite sheets and infinite acceleration are not easy to deal with. šŸ™‚ But so the whole point of theĀ exposé is just to proveĀ that the speed of propagation (v) of the electric and magnetic fields is equal to the speed of light (c), and it does a marvelous job at that. So let’s focus on that hereĀ only. So what I am saying is that I am going to leave out most of the nitty-gritty and just try to get to that v = cĀ result as fast as I possibly can. So, fasten your seat belt, please.

sheet of charge

Most of the nitty-gritty in Feynman’sĀ exposé is about how to determine the direction and magnitude of the electric and magnetic fields, i.e. E and B. Now, when the nitty-gritty business is finished, the grand conclusion is that both E and B travel out in both the positive as well as the negative x-direction at some speed vĀ and sort of ‘fill’ the entire space as they do. Now, the regionĀ they are filling extends infinitely far in both the y- and z-direction but, because they travel along the x-axis, there are no fields (yet) in the region beyond x = ± vĀ·t (t = 0 is the moment when the sheet started moving, and it moves in the positive y-direction). As you can see, the sheet of charge fills the yz-plane, and the assumption is that its speed goes from zero to u instantaneously, or very very quickly at least. So the E and B fields move out like a tidal wave, as illustrated below, and thereby ‘fill’ the space indeed, as they move out.

tidal wave

The magnitude of E and B is constant, but it’s not the same constant, and part of the exercise here is to determine the relationship between the two constants. As for their direction, you can see it in the first illustration: B points in the negative z-direction for x > 0 and in the positive z-direction for x < 0, while E‘s directionĀ is opposite to u‘s directionĀ everywhere, so EĀ points in the negative y-direction. As said, you should just take my word for it, because the nitty-gritty on this – which we doĀ notĀ want to deal with here – is all in Feynman and so I don’t want to copy that.

The crux of the argument revolves around what happens at the wavefront itself, as it travels out. Feynman relates flux and circulation there. It’s the typical thing to do: it’sĀ at the wavefront itselfĀ that the fieldsĀ change: before they were zero, and now they are equal to that constant. The fields do notĀ change anywhere else, so there’s no changing flux or circulation business to be analyzed anywhere else.Ā So we define two loops at the wavefront itself: Ī“1Ā and Ī“2. They are normal to each other (cf. the top and side view of the situation below), because the E and B fields are normal to each other. And so then we use Maxwell’s equations to check out what happens with the flux and circulation there and conclude what needs to be concluded. šŸ™‚

top view side view

We start with rectangle Ī“2. So one side isĀ in the region where there are fields, and one side is in the region where the fields haven’t reached yet. There is some magnetic flux through this loop, and it is changing, so there is an emf around it, i.e. some circulation of E. The flux changes because the area in which B exists increases at speed v.Ā Now, the time rate of change of the flux is, obviously, B times the rate of change of the area, so that’s (BĀ·LĀ·vĀ·Ī”t)/Ī”t = BĀ·LĀ·v, with L the width of the rectangle and Ī”t some differential time interval. Now, according to Faraday’s Law (see my previous post),Ā this will be equal to minus the line integral ofĀ EĀ around Ī“2, which is EĀ·L. So EĀ·L = BĀ·LĀ·v and, hence, we find:Ā E = vĀ·B.

Interesting! To satisfy Faraday’s equation (which is just one of Maxwell’s equations in integral rather than in differential form), E must equal B times v, with v the speed of propagation of our ‘tidal’ wave. Now let’s look at Ī“1. There we should apply:

Integral

Now the line integral is just BĀ·L, and the right-hand side is EĀ·LĀ·v, so, not forgetting that c²ā€‚in front—i.e. the squareĀ of the speed of light, as you know!—we get:Ā c²·B =Ā EĀ·v, or E = (c²/v)Ā·B.

Now, the E = vĀ·B and E = (c²/v)Ā·B equations mustĀ bothĀ apply (we’re talking one wave and one and the same phenomenon) and, obviously, that’sĀ onlyĀ possible if v =Ā c²/v, i.e. if v = c. So the wavefront mustĀ travel at the speed of light! Waw!Ā That’s fast. šŸ™‚ Yes. […] Jokes aside, that’s the result we wanted here: we justĀ provedĀ that the speed of travel of an electromagnetic wave must beĀ equal to the speed of light.
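In code, the argument fits in a few lines (the field amplitude is an arbitrary, illustrative number): the two wavefront conditions can only hold simultaneously for one positive speed.

```python
c = 299_792_458.0  # speed of light (m/s)
B = 1.0e-6         # arbitrary non-zero field amplitude (T)

# The two wavefront conditions: E = v*B (from the Faraday loop Ī“2) and
# E = (c**2/v)*B (from the loop Ī“1). They agree only if v = c**2/v,
# i.e. only for v = c:
v = (c**2) ** 0.5  # the only positive solution of v = c**2/v
residual = abs(v * B - (c**2 / v) * B)
print(v, residual)  # v comes out as the speed of light, residual ~0
```

Pick any other v and the residual is non-zero: the two Maxwell equations pin the propagation speed down completely.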

As an added bonus, we also showed the mechanism of travel. It’s obvious from the equations we used to prove the result: it works through the derivatives of the fields with respect to time, i.e.Ā āˆ‚E/āˆ‚t andĀ āˆ‚B/āˆ‚t.

Done!Ā Great! Enjoy the view!

Well… Yes and no. If you’re smart, you’ll say: we got this result because of the c2Ā factor in that equation, so Maxwell had already put it in, so to speak. Waw! You really areĀ aĀ smart-arse, aren’t you? šŸ™‚

The thing is… Well… The answer is: no. Maxwell didĀ notĀ put it in. Well… Yes and no. Let me explain. Maxwell’s firstĀ equation was the electric flux lawĀ āˆ‡Ā·E = ρ/ε0: the flux of E through a closed surface is proportional to the charge inside. So that’s basically another way of writingĀ Coulomb’s Law,Ā and ε0Ā was just some constant in it, the electric constant. So it’s a constant of proportionality that depends on the unit in which we measure electric charge. The only reason that it’s there is to make the units come out alright, so if we’d measure charge not in coulombĀ (C)Ā but in a unit equal to 1 C/ε0, it would disappear. If we’d do that, our new unit would be equivalent to the charge of some 700,000 protons. You can figure out that magical number yourself by checking the values of the proton charge and ε0. šŸ™‚

OK. And then AmpĆØre came up with the exactĀ laws for magnetism, and they involved current and some other constant of proportionality, and Maxwell formalized that by writing āˆ‡Ć—BĀ = μ0j, with μ0Ā the magneticĀ constant. It’s not a flux law but a circulation law: currents cause circulationĀ of B. We get the flux rule from it by integrating it. But currents are movingĀ charges, and so Maxwell knew magnetism was related to the same thing: electric charge. So Maxwell knew the two constants had to be related. In fact, when putting the full set of equations together – there are four, as you know – Maxwell figured out that μ0Ā times ε0Ā would haveĀ toĀ be equal to the reciprocal of c², with cĀ the speed of propagation of the wave. So Maxwell knew that, whatever the unit of charge, we’d get two constants of proportionality, anĀ electricĀ and aĀ magneticĀ constant, and that μ0·ε0Ā would be equal to 1/c². However,Ā while he knew that,Ā at the time, light and electromagnetism were considered to be separate phenomena, and so Maxwell did not say that cĀ was the speed of light: the only thing his equations told him was thatĀ cĀ is the speed of propagation of thatĀ ā€˜electromagnetic’ wave that came out of his equations.

The rest is history.Ā In 1856, the great Wilhelm Eduard Weber – you’ve seen his name before, haven’t you? – did a whole bunch of experiments which measured the electric constant rather precisely, and Maxwell jumped on it and calculated all the rest, i.e. μ0, and so then he took the reciprocal of the square root of μ0·ε0Ā and – Bang! – he hadĀ c, the speed of propagation of the electromagnetic wave he was thinking of. Now, cĀ was some value of the order of 3Ɨ10⁸ m/s, and so thatĀ happenedĀ to be the same as the speed of light, which suggested that Maxwell’sĀ c and the speed of light wereĀ actually one and the same thing!
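You can redo Maxwell’s punchline calculation in two lines, using today’s CODATA values for the two constants:

```python
import math

eps0 = 8.8541878128e-12  # electric constant ε0 (F/m), CODATA 2018
mu0 = 1.25663706212e-6   # magnetic constant μ0 (H/m), CODATA 2018

c = 1.0 / math.sqrt(mu0 * eps0)
print(c)  # ā‰ˆ 3Ɨ10⁸ m/s: the speed of light drops out of the two constants
```

Two constants measured on a lab bench, one square root, and out comes the speed of light.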

Now, I am a smart-arse too šŸ™‚ and, hence, when I first heard this story, I actually wondered how Maxwell could possibly know the speed of light at the time: Maxwell died many years beforeĀ the Michelson-Morley experiment unequivocally established the value of the speed of light. [In case you wonder: the Michelson-Morley experiment was done in 1887, so I checked it. The fact is that the Michelson-Morley experiment concluded that the speed of light was anĀ absoluteĀ value and, in the process of doing so, got a rather precise value for it, but the value of cĀ itself had already been established, more or less, that is, by aĀ Danish astronomer, Ole Rømer, in 1676! He did so by carefully observing the timing of the repeating eclipses of Io, one of Jupiter’s moons. Newton mentioned his results in his Principia, which he wrote in 1687, dulyĀ noting that it takes about seven to eightĀ minutes for light to travel from the Sun to the Earth.] Done! The whole story is fascinating, really, so you should check it outĀ yourself. šŸ™‚

In any case, to make a long story short, Maxwell was puzzled by this mysterious coincidence, but he was bold enough to immediately point to the right conclusion, tentativelyĀ at least, writing, in his paper On Physical Lines of Force, that ā€œwe can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena.ā€

So… Well… Maxwell still suggests light needs some medium here, so the ā€˜medium’ is a reference to the infamous aetherĀ theory, but that’s not the point: what he says here is what we all take for granted now: light is an electromagnetic wave.Ā So now we knowĀ there’s absolutely no reason whatsoever to avoid the ā€˜inference’, but… Well… 160 years ago, it was quite a big deal to suggest something like that. šŸ™‚

So that’s the full story. I hoped you like it. Don’t underestimate what you just did: understanding an argument like this is like “climbing a great peak”, as Feynman puts it. So it is “a great moment” indeed. šŸ™‚Ā The only thing left is, perhaps, to explain the ‘other’ flux rules I used above. Indeed, you know Faraday’s Law:

emf

But that other one? Well… As I explained in my previous post, Faraday’s Law is the integralĀ form of Maxwell’s second equation:Ā āˆ’āˆ‚B/āˆ‚t =Ā āˆ‡Ć—E. The ā€˜other’ flux rule above – the one with the c²ā€‚in front and without a minus sign – is the integral form of Maxwell’s fourth equation: c²āˆ‡Ć—BĀ = j/ε0Ā +Ā āˆ‚E/āˆ‚t, taking into account that we’re talking a wave traveling in free space, so there are no charges and currents (it’s just a wave in empty space—whatever that means) and, hence, the Maxwell equation reduces to c²āˆ‡Ć—BĀ = āˆ‚E/āˆ‚t. Now, I could take you through the same gymnastics as I did in my previous post but, if I were you, I’d just apply the general principle that ā€œthe same equations must yield the same solutionsā€ and so I’d just switch E for B and vice versa in Faraday’s equation. šŸ™‚

So we’re done… Well… Perhaps one more thing. We’ve got these flux rules above telling us that theĀ electromagnetic wave will travel all by itself, through empty space, completely detached from its First Cause. But… […] Well… Again you may think there’s some trick here. In other words, you may think the wavefront has to remain connected to the First Cause somehow, just like the whip below is connected to some person whipping it. šŸ™‚

Bullwhip_effect

There’s no such connection. The whip is not needed. šŸ™‚ If we’d switch off the First Cause after some time T, so our moving sheet stops moving, then we’d have theĀ pulseĀ below traveling through empty space. As Feynman puts it: ā€œThe fields have taken off: they are freely propagating through space, no longer connected in any way with the source. The caterpillar has turned into a butterfly!ā€

wavefront

Now, the last question is always the same: whatĀ areĀ those fields? What’s theirĀ reality? Here, I should refer you to one of the most delightful sections in Feynman’s Lectures. It’s on theĀ scientific imagination. I’ll just quote the introduction to it, but I warmly recommend you go and check it out for yourself: it hasĀ no formulasĀ whatsoever, and so you should understandĀ all of itĀ without any problem at all. šŸ™‚

“I have asked you to imagine these electric and magnetic fields. What do you do? Do you know how? How do I imagine the electric and magnetic field? What do I actually see? What are the demands of scientific imagination? Is it any different from trying to imagine that the room is full of invisible angels? No, it is not like imagining invisible angels. It requires a much higher degree of imagination to understand the electromagnetic field than to understand invisible angels. Why? Because to make invisible angels understandable, all I have to do is to alter their properties a little bit—I make them slightly visible, and then I can see the shapes of their wings, and bodies, and halos. Once I succeed in imagining a visible angel, the abstraction required—which is to take almost invisible angels and imagine them completely invisible—is relatively easy. So you say, ā€œProfessor, please give me an approximate description of the electromagnetic waves, even though it may be slightly inaccurate, so that I too can see them as well as I can see almost invisible angels. Then I will modify the picture to the necessary abstraction.ā€

I’m sorry I can’t do that for you. I don’t know how. I have no picture of this electromagnetic field that is in any sense accurate. I have known about the electromagnetic field a long time—I was in the same position 25Ā years ago that you are now, and I have had 25Ā years more of experience thinking about these wiggling waves. When I start describing the magnetic field moving through space, I speak of the EĀ andĀ BĀ fields and wave my arms and you may imagine that I can see them. I’ll tell you what I see. I see some kind of vague shadowy, wiggling lines—here and there is an E and aĀ BĀ written on them somehow, and perhaps some of the lines have arrows on them—an arrow here or there which disappears when I look too closely at it. When I talk about the fields swishing through space, I have a terrible confusion between the symbols I use to describe the objects and the objects themselves. I cannot really make a picture that is even nearly like the true waves. So if you have some difficulty in making such a picture, you should not be worried that your difficulty is unusual.

Our science makes terrific demands on the imagination. The degree of imagination that is required is much more extreme than that required for some of the ancient ideas. The modern ideas are much harder to imagine. We use a lot of tools, though. We use mathematical equations and rules, and make a lot of pictures. What I realize now is that when I talk about the electromagnetic field in space, I see some kind of a superposition of all of the diagrams which I’ve ever seen drawn about them. I don’t see little bundles of field lines running about because it worries me that if I ran at a different speed the bundles would disappear, I don’t even always see the electric and magnetic fields because sometimes I think I should have made a picture with the vector potential and the scalar potential, for those were perhaps the more physically significant things that were wiggling.

Perhaps the only hope, you say, is to take a mathematical view. Now what is a mathematical view? From a mathematical view, there is an electric field vector and a magnetic field vector at every point in space; that is, there are six numbers associated with every point. Can you imagine six numbers associated with each point in space? That’s too hard. Can you imagine even one number associated with every point? I cannot! I can imagine such a thing as the temperature at every point in space. That seems to be understandable. There is a hotness and coldness that varies from place to place. But I honestly do not understand the idea of a number at every point.

So perhaps we should put the question: Can we represent the electric field by something more like a temperature, say like the displacement of a piece of jello? Suppose that we were to begin by imagining that the world was filled with thin jello and that the fields represented some distortion—say a stretching or twisting—of the jello. Then we could visualize the field. After we “see” what it is like we could abstract the jello away. For many years that’s what people tried to do. Maxwell, Ampère, Faraday, and others tried to understand electromagnetism this way. (Sometimes they called the abstract jello “ether.”) But it turned out that the attempt to imagine the electromagnetic field in that way was really standing in the way of progress. We are unfortunately limited to abstractions, to using instruments to detect the field, to using mathematical symbols to describe the field, etc. But nevertheless, in some sense the fields are real, because after we are all finished fiddling around with mathematical equations—with or without making pictures and drawings or trying to visualize the thing—we can still make the instruments detect the signals from Mariner II and find out about galaxies a billion miles away, and so on.

The whole question of imagination in science is often misunderstood by people in other disciplines. They try to test our imagination in the following way. They say, “Here is a picture of some people in a situation. What do you imagine will happen next?” When we say, “I can’t imagine,” they may think we have a weak imagination. They overlook the fact that whatever we are allowed to imagine in science must be consistent with everything else we know: that the electric fields and the waves we talk about are not just some happy thoughts which we are free to make as we wish, but ideas which must be consistent with all the laws of physics we know. We can’t allow ourselves to seriously imagine things which are obviously in contradiction to the known laws of nature. And so our kind of imagination is quite a difficult game. One has to have the imagination to think of something that has never been seen before, never been heard of before. At the same time the thoughts are restricted in a strait jacket, so to speak, limited by the conditions that come from our knowledge of the way nature really is. The problem of creating something which is new, but which is consistent with everything which has been seen before, is one of extreme difficulty.”

Isn’t that great? I mean: Feynman, one of the greatest physicists of all time, didn’t write what he wrote above when he was an undergrad student. No. He did so in 1964, when he was 45 years old, at the height of his scientific career! And it gets better, because Feynman then starts talking about beauty. What is beauty in science? Well… Just click and check what Feynman thinks about it. 🙂

Oh… Last thing. So what is the magnitude of the E and B field? Well… You can work it out yourself, but I’ll give you the answer. The geometry of the situation makes it clear that the electric field has a y-component only, and the magnetic field a z-component only. Their magnitudes are given in terms of J, i.e. the surface current density going in the positive y-direction:

Ey = J/(2ε0c) and Bz = Ey/c = J/(2ε0c²)

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

An introduction to electric circuits

In my previous post, I introduced electric motors, generators and transformers. They all work because of Faraday’s flux rule: a changing magnetic flux will produce some circulation of the electric field. The formula for the flux rule is given below:

emf = ∮Γ E·ds = −(d/dt)∫S B·n·dA

It is a wonderful thing, really, but not easy to grasp intuitively. It’s one of these equations where I should quote Feynman’s introduction to electromagnetism: “The laws of Newton were very simple to write down, but they had a lot of complicated consequences and it took us a long time to learn about them all. The laws of electromagnetism are not nearly as simple to write down, which means that the consequences are going to be more elaborate and it will take us quite a lot of time to figure them all out.”

Now, among Maxwell’s equations, this is surely the most complicated one! However, that shouldn’t deter us. 🙂 Recalling Stokes’ Theorem helps to appreciate what the integral on the left-hand side represents:

∮Γ C·ds = ∫S (∇×C)·n·dA

We’ve got a line integral around some closed loop Γ on the left and, on the right, we’ve got a surface integral over some surface S whose boundary is Γ. The illustration below depicts the geometry of the situation. You know what it all means. If not, I am afraid I have to send you back to square one, i.e. my posts on vector analysis. Yep. Sorry. Can’t keep copying stuff and making my posts longer and longer. 🙂

Diagram stokes

To understand the flux rule, you should imagine that the loop Γ is some loop of electric wire, and then you just replace C by E, the electric field vector. The circulation of E, which is caused by the change in magnetic flux, is referred to as the electromotive force (emf), and it’s the tangential force (E·ds) per unit charge in the wire integrated over its entire length around the loop, which is denoted by Γ here, and which encloses a surface S.

Now, you can go from the line integral to the surface integral by noting Maxwell’s equation ∇×E = −∂B/∂t. In fact, it’s the same flux rule really, but in differential form. As for (∇×E)n, i.e. the component of ∇×E that is normal to the surface: you know that any vector multiplied with the normal unit vector will yield its normal component. In any case, if you’re reading this, you should already be acquainted with all of this. Let’s explore the concept of the electromotive force, and then apply it to our first electric circuit. 🙂
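Stokes’ Theorem is easy to check numerically, by the way. The sketch below (plain Python with NumPy, using a field I made up just for the occasion) compares the circulation of a simple field C around the unit circle with the surface integral of its curl over the unit disc:

```python
import numpy as np

# A made-up illustrative field: C = (-y, x), whose curl has z-component 2 everywhere.
def C(x, y):
    return np.array([-y, x])

# Left-hand side: circulation of C around the unit circle (the loop Gamma).
t = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
dt = t[1] - t[0]
circulation = sum(
    C(np.cos(u), np.sin(u)) @ np.array([-np.sin(u), np.cos(u)]) for u in t
) * dt

# Right-hand side: surface integral of (curl C)_n over the unit disc S.
# (curl C)_z = 2 everywhere, so the integral is just 2 times the area of the disc.
flux_of_curl = 2 * (np.pi * 1.0**2)

print(circulation, flux_of_curl)  # both ≈ 6.2832
assert abs(circulation - flux_of_curl) < 1e-6
```

Both sides come out at 2π, as they should.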

Indeed, it’s now time for a small series on circuits, and so we’ll start right here and right now, but… Well… First things first. 🙂

The electromotive force: concept and units

The term ‘force’ in ‘electromotive force’ is actually somewhat misleading. There is a force involved, of course, but the emf is not a force. The emf is expressed in volts. That’s consistent with its definition as the circulation of E: a force times a distance amounts to work, or energy (one joule is one newton·meter), and because E is the force on a unit charge, the circulation of E is expressed in joule per coulomb, so that’s a voltage: 1 volt = 1 joule/coulomb. Hence, on the left-hand side of Faraday’s equation, we don’t have any dimension of time: it’s energy per unit charge, so it’s so many joule per coulomb. Full stop.

On the right-hand side, however, we have the time rate of change of the magnetic flux through the surface S. The magnetic flux is a surface integral, and so it’s a quantity expressed in [B]·m², with [B] the measurement unit for the magnetic field strength. The time rate of change of the flux is then, of course, expressed in [B]·m² per second, i.e. [B]·m²/s. Now what is the unit for the magnetic field strength B, which we denoted by [B]?

Well… [B] is a bit of a special unit: it is not measured as some force per unit charge, i.e. in newton per coulomb, like the electric field strength E. No. [B] is measured in (N/C)/(m/s). Why? Because the magnetic force is not F = qE but F = qv×B. Hence, to make the units come out alright, we need to express B in (N·s)/(C·m), which is a unit known as the tesla (1 T = 1 N·s/(C·m)), so as to honor the Serbian-American genius Nikola Tesla. [I know that’s a bit of a short and dumb answer, but the complete answer is quite complicated: it’s got to do with the relativity of the magnetic force, which I explained in another post: both the v in the F = qv×B equation as well as the m/s in [B] should make you think: whose velocity? In which reference frame? But that’s something I can’t summarize in two lines, so just click the link if you want to know more. I need to get back to the lesson.]

Now that we’re talking units, I should note that the unit of flux also got a special name, the weber, so as to honor one of Germany’s most famous physicists, Wilhelm Eduard Weber: as you might expect, 1 Wb = 1 T·m². But don’t worry about these strange names. Besides the units you know, like the joule and the newton, I’ll only use the volt, which got its name to honor some other physicist, Alessandro Volta, the inventor of the electrical battery. Or… Well… I might mention the watt as well at some point… 🙂

So how does it work? On one side, we have something expressed per second, so that’s per unit time, and on the other we have something that’s expressed per coulomb, so that’s per unit charge. The link between the two is the power, i.e. the time rate of doing work, which is expressed in joule per second. So… Well… Yes. Here we go: in honor of yet another genius, James Watt, the unit of power got its own special name too: the watt. 🙂 In the argument below, I’ll show that the power that is being generated by a generator, and that is being consumed in the circuit (through resistive heating, for example, or whatever else takes energy out of the circuit), is equal to the emf times the current. For the moment, however, I’ll just assume you believe me. 🙂

We need to look at the whole circuit now, indeed, in which our little generator (i.e. our loop or coil of wire) is just one of the circuit elements. The units come out alright: the power = emf·current product is expressed in volt·(coulomb/second) = (joule/coulomb)·(coulomb/second) = joule/second. So, yes, it looks OK. But what’s going on really? How does it work, literally?

A short digression: on Ohm’s Law and electric power

Well… Let me first recall the basic concepts involved which, believe it or not, are probably easiest to explain by briefly recalling Ohm’s Law, which you’ll surely remember from your high-school physics classes. It’s quite simple really: we have some resistance in a little circuit, so that’s something that resists the passage of electric current, and then we also have a voltage source. Now, Ohm’s Law tells us that the ratio of (i) the voltage V across the resistance (so that’s between the two points marked as + and −) and (ii) the current I will be some constant. It’s the same as saying that V and I are directly proportional to each other. The constant of proportionality is referred to as the resistance itself and, while it’s often looked at as a property of the circuit, we may embody it in a separate circuit element: a resistor, as shown below.

120px-OhmsLaw

So we write R = V/I, and the brief presentation above should remind you of the capacity of a capacitor, which was just another constant of proportionality. Indeed, instead of feeding a resistor (so all energy gets dissipated away), we could charge a capacitor with a voltage source, so that’s an energy storage device, and then we find that the ratio between (i) the charge on the capacitor and (ii) the voltage across the capacitor is a constant too, which we defined as the capacity of the capacitor, and so we wrote C = Q/V. So, yes, another constant of proportionality (there are many in electricity!).

In any case, the point is: to increase the current in the circuit above, you need to increase the voltage, but increasing both amounts to increasing the power that’s being consumed in the circuit, because the power is voltage times current indeed, so P = V·I (or v·i, if I use the small letters that are used in the two animations below). For example, if we’d want to double the current, we’d need to double the voltage, and so we’re quadrupling the power: (2·V)·(2·I) = 2²·V·I. So we have a square law for the power, which we get by substituting V for R·I or by substituting I for V/R, so we can write the power P as P = V²/R = I²·R. This square law says exactly the same thing: if you want to double the voltage or the current, you’ll actually have to double both and, hence, you’ll quadruple the power. Now let’s look at the animations below (for which credit must go to Wikipedia).

Electric_power_source_animation_1 Electric_load_animation_2

They show how energy is being used in an electric circuit in terms of power. [Note that the little moving pluses are in line with the convention that a current is defined as the movement of positive charges, so we write I = dQ/dt instead of I = −dQ/dt. That also explains the direction of the field vector E, which has been added to show that the power source effectively moves charges against the field and, hence, against the electric force.] What we have here is that, on one side of the circuit, some generator or voltage source creates an emf pushing the charges, and then some load consumes their energy, so they lose their push. So power, i.e. energy per unit time, is supplied, and is then consumed.
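The quadratic power relations are easy to sanity-check with a few lines of code (the numbers below are arbitrary illustrative values):

```python
# Sanity-checking Ohm's Law and the power relations P = V·I = V²/R = I²·R
# with arbitrary illustrative numbers.
R = 10.0          # resistance in ohm (made-up value)
V = 5.0           # voltage in volt (made-up value)
I = V / R         # Ohm's Law: I = V/R = 0.5 A

P = V * I
assert abs(P - V**2 / R) < 1e-12   # P = V²/R
assert abs(P - I**2 * R) < 1e-12   # P = I²·R

# Doubling the current (same R) requires doubling the voltage: 4x the power.
P_doubled = (2 * V) * (2 * I)
assert abs(P_doubled - 4 * P) < 1e-12
print(P, P_doubled)  # 2.5 10.0
```

So doubling the current indeed quadruples the power, because the voltage has to double along with it.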

Back to the emf…

Now, I mentioned that the emf is a ratio of two terms: the numerator is expressed in joule, and the denominator is expressed in coulomb. So you might think we’ve got some trade-off here—something like: if we double the energy of half of the individual charges, then we still get the same emf. Or vice versa: we could, perhaps, double the number of charges and load them with only half the energy. One thing is for sure: we can’t do both.

Hmm… Well… Let’s have a look at this line of reasoning by writing it down more formally.

  1. The time rate of change of the magnetic flux generates some emf, which we can and should think of as a property of the loop or the coil of wire in which it is being generated. Indeed, the magnetic flux through it depends on its orientation, its size, and its shape. So it’s really very much like the capacity of a capacitor or the resistance of a conductor. So we write: emf = Δ(flux)/Δt. [In fact, the induced emf tries to oppose the change in flux, so I should add a minus sign, but you get the idea.]
  2. For a uniform magnetic field, the flux is equal to the field strength B times the surface area S. [To be precise, we need to take the normal component of B, so the flux is Bn·S = B·S·cosθ.] So the flux can change because of a change in B or because of a change in S, or because of both.
  3. The emf = Δ(flux)/Δt formula makes it clear that a very slow change in flux (i.e. the same Δ(flux) over a much larger Δt) will generate little emf. In contrast, a very fast change (i.e. the same Δ(flux) over a much smaller Δt) will produce a lot of emf. So, in that sense, the emf is not like the capacity or the resistance, because it’s variable: it depends on Δ(flux), as well as on Δt. However, you should still think of it as a property of the loop or the ‘generator’ we’re talking about here.
  4. Now, the power that is being produced or consumed in the circuit, in which our ‘generator’ is just one of the elements, is equal to the emf times the current. The power is the time rate of change of the energy, and the energy is the work that’s being done in the circuit (which I’ll denote by ΔU), so we write: emf·current = ΔU/Δt.
  5. Now, the current is equal to the time rate of change of the charge, so I = ΔQ/Δt. Hence, the emf is equal to (ΔU/Δt)/I = (ΔU/Δt)/(ΔQ/Δt) = ΔU/ΔQ. From this, it follows that emf = Δ(flux)/Δt = ΔU/ΔQ, which we can re-write as:

Δ(flux) = ΔU·Δt/ΔQ

What this says is the following. For a given amount of change in the magnetic flux (so we treat Δ(flux) as constant in the equation above), we could do more work on the same charge (ΔQ) – we could double ΔU by moving the same charge over a potential difference that’s twice as large, for example – but then Δt must be cut in half. So the same change in magnetic flux can do twice as much work if the change happens in half of the time.
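That trade-off is trivial to verify with made-up numbers: keeping Δ(flux) fixed, doubling ΔU on the same ΔQ forces Δt to halve. A minimal check, with arbitrary illustrative values:

```python
# Illustrating d(flux) = dU·dt/dQ with made-up values: doubling the work dU
# on the same charge dQ, for the same flux change, forces dt to be halved.
dU, dt, dQ = 2.0, 0.1, 0.5        # joule, second, coulomb (arbitrary numbers)
d_flux = dU * dt / dQ             # 0.4 weber

# Twice the work, half the time, same charge -> the same change in flux.
d_flux_check = (2 * dU) * (dt / 2) / dQ
assert abs(d_flux - d_flux_check) < 1e-12
print(d_flux)  # 0.4
```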

Now, does that mean the current is being doubled? We’re talking the same ΔQ and half the Δt, so… Well? No. The Δt here measures the time of the flux change, so it’s not the dt in I = dQ/dt. For the current to change, we’d need to move the same charge faster, i.e. over a larger distance in the same time. We didn’t say we’d do that above: we only said we’d move the charge across a larger potential difference: we didn’t say we’d change the distance over which it is moved.

OK. That makes sense. But we’re not quite finished. Let’s first try something else, to then come back to where we are right now via some other way. 🙂 Can we change ΔQ? Here we need to look at the physics behind it. What’s happening really is that the change in magnetic flux causes an induced current which consists of the free electrons in the Γ loop. So we have electrons moving in and out of our loop, and through the whole circuit really, but there are only so many free electrons per unit length in the wire. However, if we would effectively double the voltage, then their speed will increase proportionally, so we’ll have more of them passing through per second. Now that effect surely impacts the current. It’s what we wrote above: all other things being the same, including the resistance, we’ll also double the current as we double the voltage.

So where is that effect in the flux rule? The answer is: it isn’t there. The circulation of E around the loop is what it is: it’s some energy per unit charge. Not per unit time. So our flux rule gives us a voltage, which tells us that we’re going to have some push on the charges in the wire, but it doesn’t tell us anything about the current. To know the current, we must know the velocity of the moving charges, which we can calculate from the push if we also have some other information (such as the resistance involved, for instance), but it’s not there in the formula of the flux rule. You’ll protest: there is a Δt on the right-hand side! Yes, that’s true. But it’s not the Δt in the v = Δs/Δt equation for our charges. Full stop.

Hmm… I may have lost you by now. If not, please continue reading. Let me drive the point home by asking another question. Think about the following: we can re-write that Δ(flux) = ΔU·Δt/ΔQ equation above as Δ(flux) = (ΔU/ΔQ)·Δt. Now, does that imply that, with the same change in flux, i.e. the same Δ(flux), and, importantly, for the same Δt, we could double both ΔU as well as ΔQ? I mean: (2·ΔU)/(2·ΔQ) = ΔU/ΔQ and so the equation holds, mathematically that is. […] Think about it.

You should shake your head now, and rightly so, because, while the Δ(flux) = (ΔU/ΔQ)·Δt equation suggests that would be possible, it’s totally counter-intuitive. We’re changing nothing in the real world (what happens there is the same change of flux in the same amount of time), but we’d get twice the energy and twice the charge?! Of course, we could also put a 3 there, or 20,000, or minus a million. So who decides on what we get? You get the point: it is, indeed, not possible. Again, what we can change is the speed of the free electrons, but not their number, and to change their speed, you’ll need to do more work. So the reality is that we’re always looking at the same ΔQ: if we want a larger ΔU, then we’ll need a larger change in flux, or a shorter Δt during which that change in flux happens.

So what can we do? We can change the physics of the situation. We can do so in many ways: we could change the length of the loop, for example, or its shape. One particularly interesting thing to do is to increase the number of loops, so instead of one loop, we could have some coil with, say, N turns, so that’s N of these Γ loops. So what happens then? In fact, contrary to what you might expect, the ΔQ still doesn’t change as it moves into the coil and then from loop to loop to get out and then through the circuit: it’s still the same ΔQ. But the work that can be done by this current becomes much larger. In fact, two loops give us twice the emf of one loop, and N loops give us N times the emf of one loop. So then we can make the free electrons move faster, so they cover more distance in the same time (and you know work is force times distance), or we can move them across a larger potential difference over the same distance (so then we move them against a larger force, which also implies we’re doing more work). The first case amounts to a larger current, while the second amounts to a larger voltage. So what is it going to be?

Think about the physics of the situation once more: to make the charges move faster, you’ll need a larger force, so you’ll have a larger potential difference, i.e. a larger voltage. As for what happens to the current, I’ll explain that below. Before I do, let me talk some more basics.

In the exposé below, we’ll talk about power again, and also about load. What is load? Think about what it is in real life: when buying a battery for a big car, we’ll want a big battery, so we don’t look at the voltage only (they’re all 12-volt anyway). We’ll look at how many ampères it can deliver, and for how long. The starter motor in the car, for example, can suck up like 200 A, but for a very short time only, of course, as the car engine itself should kick in. So that’s why the capacity of batteries is expressed in ampère-hours.

Now, how do we get such large currents, such large loads? Well… Use Ohm’s Law: to get 200 A at 12 V, the resistance of the starter motor will have to be as low as 0.06 ohm. So large currents are associated with very low resistance. Think practical: a 240-volt, 60-watt light-bulb will draw 0.25 A and, hence, its internal resistance is about 960 Ω. Also think of what goes on in your house: we’ve got a lot of resistors in parallel consuming power there. The formula for the total resistance is 1/Rtotal = 1/R1 + 1/R2 + 1/R3 + … So more appliances means less resistance, and that’s what draws in the larger current.
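The arithmetic in that paragraph can be re-done in a few lines (same numbers as in the text):

```python
# Re-computing the numbers from the text: a starter motor drawing 200 A at 12 V,
# a 240 V / 60 W light-bulb, and resistors in parallel.
def resistance(V, I):
    return V / I

R_starter = resistance(12, 200)      # 0.06 ohm: large currents need low resistance
I_bulb = 60 / 240                    # P = V*I  ->  I = 0.25 A
R_bulb = resistance(240, I_bulb)     # 960 ohm

def parallel(*Rs):
    # 1/R_total = 1/R1 + 1/R2 + ...
    return 1 / sum(1 / R for R in Rs)

# More appliances in parallel -> lower total resistance -> larger current drawn.
assert abs(parallel(960, 960) - 480) < 1e-9
print(R_starter, R_bulb)  # 0.06 960.0
```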

The point is: when looking at circuits, emf is one thing, but energy and power, i.e. the work done per second, are all that matters really. And so then we’re talking currents, but our flux rule does not say how much current our generator will produce: that depends on the load. OK. We really need to get back to the lesson now.

A circuit with an AC generator

The situation is depicted below. We’ve got a coil of wire of, let’s say, N turns, and we’ll use it to generate an alternating current (AC) in a circuit.

AC generator Circuit

The coil is really like the loop of wire in that primitive electric motor I introduced in my previous post, but so now we use the motor as a generator. To simplify the analysis, we assume we’ll rotate our coil of wire in a uniform magnetic field, as shown by the field lines B.

motor

Now, our coil is not a loop, of course: the two ends of the coil are brought to external connections through some kind of sliding contacts, but that doesn’t change the flux rule: a changing magnetic flux will produce some emf and, therefore, some current in the coil.

OK. That’s clear enough. Let’s see what’s happening really. When we rotate our coil of wire, we change the magnetic flux through it. If S is the area of the coil, and θ is the angle between the magnetic field and the normal to the plane of the coil, then the flux through the coil will be equal to B·S·cosθ. Now, if we rotate the coil at a uniform angular velocity ω, then θ varies with time as θ = ω·t. Each turn of the coil will have an emf equal to the rate of change of the flux, i.e. d(B·S·cosθ)/dt. We’ve got N turns of wire, and so the total emf, which we’ll denote by Ɛ (yep, a new symbol), will be equal to:

Ɛ = −N·d(B·S·cos(ω·t))/dt = N·B·S·ω·sin(ω·t)

Now, that’s just a nice sinusoidal function indeed, which will look like the graph below.

graph (1)

When no current is being drawn from the wire, this Ɛ will effectively be the potential difference between the two wires. What happens really is that the emf produces a current in the coil which pushes some charges out to the wire, and so then they’re stuck there for a while, and so there’s a potential difference between them, which we’ll denote by V, and that potential difference will be equal to Ɛ. It has to be equal to Ɛ because, if it were any different, we’d have an equalizing counter-current, of course. [It’s a fine point, so you should think about it.] So we can write:

V = Ɛ = N·B·S·ω·sin(ω·t)

So what happens when we do connect the wires to the circuit, so we’ve got that closed circuit depicted above (and below)?

Circuit

Then we’ll have a current I going through the circuit, and Ohm’s Law then tells us that the ratio between (i) the voltage across the resistance in this circuit (we assume the connections between the generator and the resistor itself are perfect conductors) and (ii) the current will be some constant, so we have R = V/I and, therefore:

I = Ɛ/R = (N·B·S·ω/R)·sin(ω·t)

[To be fully complete, I should note that, when circuit elements other than resistors are involved, like capacitors and inductors, we’ll have a phase difference between the voltage and current functions, and so we should look at the impedance of the circuit, rather than its resistance. For more detail, see the addendum below this post.]
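To see the formulas at work, here’s a small numerical sketch. The coil parameters N, B, S, ω and the load R are made-up values: it checks that the Ɛ = N·B·S·ω·sin(ω·t) formula matches a numerical derivative of the flux, and then computes the current from Ohm’s Law.

```python
import numpy as np

# A numerical sketch of the rotating-coil generator, with made-up values for
# the number of turns N, field B, coil area S, angular velocity w and load R.
N, B, S, w = 100, 0.5, 0.01, 2 * np.pi * 50   # turns, tesla, m^2, rad/s (50 Hz)
R = 10.0                                       # ohm

t = np.linspace(0, 0.04, 100_001)              # two full periods
flux = B * S * np.cos(w * t)                   # flux through ONE turn: B*S*cos(theta)
emf_formula = N * B * S * w * np.sin(w * t)    # emf = N*B*S*w*sin(w*t)
emf_numeric = -N * np.gradient(flux, t)        # emf = -N * d(flux)/dt, numerically

# The closed-form emf and the numerical derivative of the flux must agree.
assert np.max(np.abs(emf_formula - emf_numeric)) < 1e-2

I = emf_formula / R                            # Ohm's Law for the whole circuit
print(emf_formula.max(), I.max())              # peak emf ≈ 157 V, peak current ≈ 15.7 A
```

The peak emf is just N·B·S·ω, reached when the coil plane is parallel to the field (sin(ω·t) = 1).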

OK. Let’s now look at the power and energy involved.

Energy and power in the AC circuit

You’ll probably have many questions about the analysis above. You should. I do. The most remarkable thing, perhaps, is that this analysis suggests that the voltage doesn’t drop as we connect the generator to the circuit. You’d think it should. Why doesn’t it? Why don’t the charges at both ends of the wire simply discharge through the circuit? In real life, there surely is such a tendency: sudden large changes in loading will effectively produce temporary changes in the voltage. But then it’s like Feynman writes: “The emf will continue to provide charge to the wires as current is drawn from them, attempting to keep the wires always at the same potential difference.”

So how much current is drawn from them? As I explained above, that depends not on the generator but on the circuit and, more in particular, on the load, so that’s the resistor in this case. Again, the resistance is the (constant) ratio of the voltage and the current: R = V/I. So think about increasing or decreasing the resistance. If the voltage remains the same, it implies the current must decrease or increase accordingly, because R = V/I implies that I = V/R. So the current is inversely proportional to R, as I explained above when discussing car batteries and lamps and loads. 🙂

Now, I still have to prove that the power provided by our generator is effectively equal to P = Ɛ·I but, if it is, it implies the power that’s being delivered will be inversely proportional to R. Indeed, when Ɛ and/or V remain what they are as we insert a larger resistance in the circuit, then P = Ɛ·I = Ɛ²/R, and so the power that’s being delivered is inversely proportional to R. To be clear, we’d have a relation between P and R like the one below.

[Graph: the power P = Ɛ²/R as a function of the resistance R]

This is somewhat weird. Why? Well… I also have to show you that the power that goes into moving our coil in the magnetic field, i.e. the rate of mechanical work required to rotate the coil against the magnetic forces, is equal to the electric power Ɛ·I, i.e. the rate at which electrical energy is being delivered by the emf of the generator. However, I’ll postpone that for a while and, hence, I’ll just ask you, once again, to take me on my word. 🙂 Now, if that’s true, i.e. if the mechanical power equals the electric power, then it implies that a larger resistance will reduce the mechanical power we need to maintain the angular velocity ω. Think of a practical example: if we’d double the resistance (i.e. halve the load), and if the voltage stays the same, then the current would be halved, and the power would also be halved. And let’s think about the limit situations: as the resistance goes to infinity, the power that’s being delivered goes to zero, as the current goes to zero, while if the resistance goes to zero, both the current and the power would go to infinity!
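Taking the P = Ɛ·I = Ɛ²/R relation at face value (with a made-up emf of 120 V), the limit behavior is easy to demonstrate:

```python
# The power delivered to the load for a fixed emf, P = emf**2 / R, with a
# made-up emf of 120 V: doubling R halves P, and R -> infinity sends P to zero.
emf = 120.0  # volt (illustrative value)

def P(R):
    return emf**2 / R

assert abs(P(20.0) - P(10.0) / 2) < 1e-9   # double the resistance, half the power
assert P(1e9) < 1e-4                        # near-open circuit: almost no power
print(P(10.0), P(20.0))  # 1440.0 720.0
```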

Well… We actually know that’s also true in real life: actual generators consume more fuel when the load increases, i.e. when they deliver more power, and much less fuel, i.e. less power, when there’s no load at all. You’ll know that, at least if you’re living in a developing country with a lot of load shedding! 🙂 And the difference is huge: no load, or just a little, will only consume some 10% of the fuel you need when fully loading the generator. It’s totally in line with what I wrote on the relationship between the resistance and the current that it draws in. So, yes, it does make sense:

An emf does produce more current if the resistance in the circuit is low (i.e. when the load is high), and the stronger currents do represent greater mechanical forces.

That’s a very remarkable thing. It means that, if we’d put a larger load on our little AC generator, it should require more mechanical work to keep the coil rotating at the same angular velocity ω. But… What changes? The change in flux is the same, the Δt is the same, so what changes really? What changes is the current going through the coil, and it’s not a change in that ΔQ factor above, but a change in its velocity v.

Hmm… That all looks quite complicated, doesn’t it? It does, so let’s get back to the analysis of what we have here. We’ll simply assume that we have some dynamic equilibrium obeying that formula above, so I and R are what they are, and we relate them to Ɛ according to that equation, i.e.:

I = Ɛ/R = (N·B·S·ω/R)·sin(ω·t)

Now let me prove those formulas on the power of our generator and in the circuit. We have all these charges in our coil that are receiving some energy. Now, the rate at which they receive energy is F·v.

Huh? Yes. Let me explain: the work that’s being done on a charge along some path is the line integral ∫F·ds along this path. But the infinitesimal displacement ds is equal to v·dt, as ds/dt = v (note that we write s and v as vectors, so the dot product with F gives us the component of F that is tangential to the path). So ∫F·ds = ∫(F·v)dt. So the time rate of change of the energy, which is the power, is F·v. Just take the time derivative of the integral. 🙂
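The step from ∫F·ds to ∫(F·v)dt can be verified numerically on any toy trajectory. Here’s a sketch with an arbitrary 2-D path and a constant force (all values made up):

```python
import numpy as np

# An arbitrary 2-D path s(t) and a constant force, just for illustration.
t = np.linspace(0, 1, 100_001)
s = np.stack([t**2, np.sin(t)], axis=1)       # position s(t)
v = np.stack([2 * t, np.cos(t)], axis=1)      # velocity ds/dt
F = np.array([1.0, 3.0])                      # constant force

# (1) Work as the time integral of the power F.v (trapezoidal rule).
p = v @ F
W_power = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))

# (2) Work as the line integral of F.ds, with ds the successive displacements.
W_line = np.sum(np.diff(s, axis=0) @ F)

# For a constant force, both must equal F.(s(1) - s(0)) = 1 + 3*sin(1).
assert abs(W_power - W_line) < 1e-6
print(W_power)  # ≈ 3.524
```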

Now let’s assume we have n moving charges per unit length of our coil (so that’s in line with what I wrote about ΔQ above). Then the power being delivered to any element ds of the coil is (F·v)·n·ds, which can be written as (F·ds)·n·v. [Why? Because v and ds have the same direction: the direction of both vectors is tangential to the wire, always.] Now all we need to do to find out how much power is being delivered to the circuit by our AC generator is integrate this expression over the coil, so we need to find:

power

However, the emf (Ɛ) is defined as the line integral ∫ E·ds, taken around the entire coil, and E = F/q, and the current I is equal to I = q·n·v. So the power from our little AC generator is indeed equal to:

Power = Ɛ·I

So that’s done. Now I need to make good on my other promise, and that is to show that the Ɛ·I product is equal to the mechanical power that’s required to rotate the coil in the magnetic field. So how do we do that?

We know there’s going to be some torque because of the current in the coil. Its formula is given by τ = μ×B. What magnetic field? Well… Let me refer you to my post on the magnetic dipole and its torque: it’s not the magnetic field caused by the current, but the external magnetic field, so that’s the B we’ve been talking about here all along. So… Well… I am not trying to fool you here. 🙂 However, the magnetic moment μ was not defined by that external field, but by the current in the coil and its area. Indeed, μ’s magnitude is the current times the area, so that’s N·I·S in this case. Of course, we need to watch out because μ is a vector itself and so we need the angle between μ and B to calculate the vector cross product τ = μ×B. However, if you check how we defined the direction of μ, you’ll see it’s normal to the plane of the coil and, hence, the angle between μ and B is the very same θ = ω·t that we started our analysis with. So, to make a long story short, the magnitude of the torque τ is equal to:

τ = (N·I·S)·B·sinθ

Now, we know the torque is also equal to the work done per unit of distance traveled (around the axis of rotation, that is), so τ = dW/dθ. Now dθ = d(ω·t) = ω·dt. So we can now find the work done per unit of time, so that’s the power once more:

dW/dt = ω·τ = ω·(N·I·S)·B·sinθ

But we found that Ɛ = N·S·B·ω·sinθ, so… Well… We find that:

dW/dt = Ɛ·I

Now, this equation doesn’t answer our question as to how much power actually goes in and out of the circuit as we put some load on it, but it is what we promised to do: I showed that the mechanical work we’re doing on the coil is equal to the electric energy that’s being delivered to the circuit. 🙂
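As a quick sanity check, here’s a minimal numerical sketch of the equality we just derived. All the numbers (turns, area, field, speed, current) are made-up illustration values, not from the text; the point is only that the mechanical power ω·τ and the electric power Ɛ·I come out identical:

```python
import math

# Made-up illustration values (not from the text)
N = 100          # turns
S = 0.01         # area of one turn (m^2)
B = 0.5          # external magnetic field (T)
omega = 50.0     # angular velocity of the coil (rad/s)
I = 2.0          # current in the circuit (A)
theta = omega * 0.003   # theta = omega*t at some instant t

emf = N * S * B * omega * math.sin(theta)   # emf = N·S·B·ω·sinθ
torque = (N * I * S) * B * math.sin(theta)  # τ = (N·I·S)·B·sinθ
mech_power = omega * torque                 # dW/dt = ω·τ
elec_power = emf * I                        # Ɛ·I

assert math.isclose(mech_power, elec_power)
```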

It’s all quite mysterious, isn’t it? It is. And we didn’t include other stuff that’s relevant here, such as the phenomenon of self-inductance: the varying current in the coil will actually produce its own magnetic field and, hence, in practice, we’d get some “back emf” in the circuit. This “back emf” is opposite to the current when it is increasing, and it is in the direction of the current when it is decreasing. In short, the self-inductance effect causes a current to have ‘inertia’: the inductive effects try to keep the flow constant, just as mechanical inertia tries to keep the velocity of an object constant. But… Well… I left that out. I’ll talk about it next time because…

[…] Well… It’s getting late in the day, and so I must assume this is sort of ‘OK enough’ as an introduction to what we’ll be busying ourselves with over the coming week. You take care, and I’ll talk to you again some day soon. 🙂

Perhaps one little note, on a question that might have popped up when you were reading all of the above: how do actual generators keep the voltage up? Well… Most AC generators are, indeed, so-called constant speed devices. You can download some manuals from the Web, and you’ll find things like this: don’t operate at speeds more than 4% above the rated speed, or more than 1% below it. Fortunately, the so-called engine governor will take care of that. 🙂

Addendum: The concept of impedance

In one of my posts on oscillators, I explain the concept of impedance, which is the equivalent of resistance, but for AC circuits. Just like resistance, impedance also sort of measures the ‘opposition’ that a circuit presents to a current when a voltage is applied, but it’s a complex ratio, as opposed to R = V/I. It’s literally a complex ratio because the impedance has a magnitude and a direction, or a phase as it’s usually referred to. Hence, one will often write the impedance (denoted by Z) using Euler’s formula:

Z = |Z|·e^(iθ)

The illustration below (credit goes to Wikipedia, once again) explains what’s going on. It’s a pretty generic view of an AC circuit. The truth is: if we apply an alternating current, then the current and the voltage will both go up and down, but the current signal will usually lag the voltage signal, and the phase factor θ tells us by how much. Hence, using complex-number notation, we write:

V = I∗Z = I∗|Z|·e^(iθ)

General_AC_circuit

Now, while that resembles the V = R·I formula, you should note the bold-face type for V and I, and the ∗ symbol I am using here for multiplication. First the ∗ symbol: that’s to make it clear we’re not talking about a vector cross product A×B here, but a product of two complex numbers. The bold-face for V and I implies they’re like vectors, or like complex numbers: they have a phase too and, hence, we can write them as:

  • V = |V|·e^(i(ωt + θV))
  • I = |I|·e^(i(ωt + θI))

To be fully complete – you may skip all of this if you want, but it’s not that difficult, nor very long – it all works out as follows. We write:

V = I∗Z = |I|·e^(i(ωt + θI))∗|Z|·e^(iθ) = |I|·|Z|·e^(i(ωt + θI + θ)) = |V|·e^(i(ωt + θV))

Now, this equation must hold for all t, so we can equate the magnitudes and phases. Hence, we get |V| = |I|·|Z| and the formula we need, i.e. the phase difference between our function for the voltage and our function for the current:

θV = θI + θ

Of course, you’ll say: voltage and current are something real, aren’t they? So what’s this about complex numbers? You’re right. I’ve used the complex notation only to simplify the calculus, so it’s only the real part of those complex-valued functions that counts.
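If you want to play with this, here’s a small sketch using Python’s built-in complex numbers. The magnitude of Z, the phase θ, and the current amplitude are arbitrary assumptions (not from any real circuit); the sketch just confirms that |V| = |I|·|Z| and θV = θI + θ:

```python
import cmath
import math

# Assumed values: a 10-ohm impedance at 30 degrees, a 1.5 A current amplitude
Z = cmath.rect(10.0, math.radians(30))      # Z = |Z|·e^(iθ)
omega = 2 * math.pi * 50                    # 50 Hz supply (assumed)
t, theta_I = 0.001, 0.0                     # some instant, zero current phase
I = cmath.rect(1.5, omega * t + theta_I)    # I = |I|·e^(i(ωt + θI))

V = I * Z                                   # V = I∗Z

# |V| = |I|·|Z| and θV = θI + θ
assert math.isclose(abs(V), abs(I) * abs(Z))
assert math.isclose(cmath.phase(V), cmath.phase(I) + cmath.phase(Z))

v_instantaneous = V.real  # only the real part is physical
```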

Oh… And also note that, as mentioned above, we do not have such lag or phase difference when only resistors are involved. So we don’t need the concept of impedance in the analysis above. With this addendum, I just wanted to be as complete as I can be. 🙂

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/
Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/

Induced currents

In my two previous posts, I presented all of the ingredients of the meal we’re going to cook now, most notably:

  1. The formula for the torque on a loop of a current in a magnetic field, and its energy: (i) τ = μ×B, and (ii) Umech = −μ·B.
  2. The Biot-Savart Law, which gives you the magnetic field that’s produced by wires carrying currents:

B formula 2

Both ingredients are, obviously, relevant to the design of an electromagnetic motor, i.e. an ‘engine that can do some work’, as Feynman calls it. 🙂 Its principle is illustrated below.

motor

The two formulas above explain how and why the coil goes around, and the coil can be made to keep going by arranging that the connections to the coil are reversed each half-turn by contacts mounted on the shaft. Then the torque is always in the same direction. That’s how a small direct current (DC) motor is made. My father made me make a couple of these thirty years ago, with a magnet, a big nail and some copper coil. I used sliding contacts, and they were the most difficult thing in the whole design. But now I found a very nice demo on YouTube of a guy whose system to ‘reverse’ the connections is wonderfully simple: he doesn’t use any sliding contacts. He just removes half of the insulation on the wire of the coil on one side. It works like a charm, but I think it’s not so sustainable, as it spins so fast that the insulation on the other side will probably come off after a while! 🙂

Now, to make this motor run, you need current and, hence, 19th century physicists and mechanical engineers also wondered how one could produce currents by changing the magnetic field. Indeed, they could use Alessandro Volta’s ‘voltaic pile‘ to produce currents but it was not very handy: it consisted of alternating zinc and copper discs, with pieces of cloth soaked in salt water in-between!

Now, while the Biot-Savart Law goes back to 1820, it took another decade to find out how that could be done. Initially, people thought magnetic fields should just cause some current, but that didn’t work. Finally, Faraday unequivocally established the fundamental principle that electric effects are only there when something is changing. So you’ll get a current in a wire by moving it in a magnetic field, or by moving the magnet or, if the magnetic field is caused by some other current, by changing the current in that wire. It’s referred to as the ‘flux rule’, or Faraday’s Law. Remember: we’ve seen Gauss’ Law, then Ampère’s Law, and then that Biot-Savart Law, and so now it’s time for Faraday’s Law. 🙂 Faraday’s Law is Maxwell’s third equation really, aka the Maxwell-Faraday Law of Induction:

∇×E = −∂B/∂t

Now you’ll wonder: what’s flux got to do with this formula? ∇×E is about circulation, not about flux! Well… Let me copy Feynman’s answer:

Faraday's law

So… There you go. And, yes, you’re right: instead of writing Faraday’s Law as ∇×E = −∂B/∂t, we should write it as:

emf

That’s easier to understand, and it’s also easier to work with, as we’ll see in a moment. So the point is: whenever the magnetic flux changes, there’s a push on the electrons in the wire. That push is referred to as the electromotive force, abbreviated as emf or EMF, and so it’s that line and/or surface integral above indeed. Let me paraphrase Feynman so you fully understand what we’re talking about here:

When we move our wire in a magnetic field, or when we move a magnet near the wire, or when we change the current in a nearby wire, there will be some net push on the electrons in the wire in one direction along the wire. There may be pushes in different directions at different places, but there will be more push in one direction than another. What counts is the push integrated around the complete circuit. We call this net integrated push the electromotive force (abbreviated emf) in the circuit. More precisely, the emf is defined as the tangential force per unit charge in the wire integrated over length, once around the complete circuit.

So that’s the integral. 🙂 And that’s how we can turn that motor above into a generator: instead of putting a current through the wire to make it turn, we can turn the loop, by hand or by a waterwheel or by whatever. Now, when the coil rotates, its wires will be moving in the magnetic field and so we will find an emf in the circuit of the coil, and so that’s how the motor becomes a generator.

Now, let me quickly interject something here: when I say ‘a push on the electrons in the wire’, what electrons are we talking about? How many? Well… I’ll answer that question in very much detail in a moment but, as for now, just note that the emf is some quantity expressed per coulomb or, as Feynman puts it above, per unit charge. So we’ll need to multiply it with the current in the circuit to get the power of our little generator.

OK. Let’s move on. Indeed, all I can do here is mention just a few basics, so we can move on to the next thing. If you really want to know all of the nitty-gritty, then you should just read Feynman’s Lecture on induced currents. That’s got everything. And, no, don’t worry: contrary to what you might expect, my ‘basics’ do not amount to a terrible pile of formulas. In fact, it’s all easy and quite amusing stuff, and I should probably include a lot more. But then… Well… I always need to move on… If not, I’ll never get to the stuff that I really want to understand. 😦

The electromotive force

We defined the electromotive force above, including its formula:

emf

What are the units? Let’s see… We know B was measured not in newton per coulomb, like the electric field E, but in N·s/(C·m), because we had to multiply the magnetic field strength with the velocity of the charge to find the force per unit charge, cf. the F/q = v×B equation. Now what’s the unit in which we’d express that surface integral? We must multiply by m², so we get N·m·s/C. Now let’s simplify that by noting that one volt is equal to 1 N·m/C. [The volt has a number of definitions, but the one that applies here is that it’s the potential difference between two points that will impart one joule (i.e. 1 N·m) of energy to a unit of charge (i.e. 1 C) that passes between them.] So we can measure the magnetic flux in volt-seconds, i.e. V·s. And then we take the derivative with regard to time, so we divide by s, and so we get… Volt! The emf is measured in volt!

Does that make sense? I guess so: the emf causes a current, just like a potential difference, i.e. a voltage, and, therefore, we can and should look at the emf as a voltage too!

But let’s think about it some more, though. In differential form, Faraday’s Law is just that ∇×E = −∂B/∂t equation, so that’s just one of Maxwell’s four equations, and so we prefer to write it as the “flux rule”. Now, the “flux rule” says that the electromotive force (abbreviated as emf or EMF) on the electrons in a closed circuit is equal to the time rate of change of the magnetic flux it encloses. As mentioned above, we measure magnetic flux in volt-seconds (i.e. V·s), so its time rate of change is measured in volt (because the time rate of change is a quantity expressed per second), and so the emf is measured in volt, i.e. joule per coulomb, as 1 V = 1 N·m/C = 1 J/C. What does it mean?

The magnetic flux can change because the surface covered by our loop changes, or because the field itself changes, or both. Whatever the cause, it will change the emf, or the voltage, and so it will make the electrons move. So let’s suppose we have some generator generating some emf. The emf can be used to do some work. We can charge a capacitor, for example. So how would that work?

More charge on the capacitor will increase the voltage V of the capacitor, i.e. the potential difference V = Φ1 − Φ2 between the two plates. Now, we know that the increase of the voltage V will be proportional to the increase of the charge Q, and that the constant of proportionality is defined by the capacity C of the capacitor: C = Q/V. [How do we know that? Well… Have a look at my post on capacitors.] Now, if our capacitor has an enormous capacity, then its voltage won’t increase very rapidly. However, it’s clear that, no matter how large the capacity, its voltage will increase. It’s just a matter of time. Now, its voltage cannot be higher than the emf provided by our ‘generator’, because it would then discharge through the same circuit!

So we’re talking power and energy here, and so we need to put some load on our generator. Power is the rate of doing work, so it’s the time rate of change of energy, and it’s expressed in joule per second. The energy of our capacitor is U = (1/2)·Q²/C = (1/2)·C·V². [How do we know that? Well… Have a look at my post on capacitors once again. 🙂] So let’s take the time derivative of U, noting that, at any instant, V = Q/C. We get: dU/dt = d[(1/2)·Q²/C]/dt = (Q/C)·dQ/dt = V·dQ/dt. So that’s the power that the generator would need to supply to charge the capacitor. As I’ll show in a moment, the power supplied by a generator is, indeed, equal to the emf times the current, and the current is the time rate of change of the charge, so I = dQ/dt.
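The dU/dt = V·dQ/dt step is easy to verify numerically with a finite difference. The capacity, charge and current below are made-up values, purely for illustration:

```python
# Made-up values: a 1 mF capacitor holding 2 mC, charged at 0.5 A
C = 1e-3          # capacity (F)
I = 0.5           # charging current (A), i.e. I = dQ/dt
Q = 2e-3          # charge at some instant (C)
dt = 1e-6         # small time step (s)

U_now = 0.5 * Q**2 / C                 # U = (1/2)·Q²/C
U_next = 0.5 * (Q + I * dt)**2 / C     # U a moment later
dU_dt = (U_next - U_now) / dt          # numerical time derivative

V = Q / C                              # instantaneous voltage
assert abs(dU_dt - V * I) < 1e-3       # dU/dt ≈ V·dQ/dt = V·I
```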

So, yes, it all works out: the power that’s being supplied by our generator will be used to charge our capacitor. Now, you may wonder: what about the current? Where is the current in Faraday’s Law? The answer is: Faraday’s Law doesn’t have the current. It’s just not there. The emf is expressed in volt, and so that’s energy per coulomb, so it’s per unit charge. How much power a generator can and will deliver depends on its design, and the circuit and load that we will be putting on it. So we can’t say how many coulombs we will have. It all depends. But you can imagine that, if the loop were bigger, or if we had a coil with many loops, then our generator would be able to produce more power, i.e. it would be able to move more electrons, so the mentioned power = (emf)×(current) product would be larger. 🙂

Finally, to conclude, note Feynman’s definition of the emf: the tangential force per unit charge in the wire integrated over length around the complete circuit. So we’ve got force times distance here, but per unit charge. Now, force times distance is work, or energy, and so… Yes, emf is joule per coulomb, definitely! 🙂

[…] Don’t worry too much if you don’t quite ‘get’ this. I’ll come back to it when discussing electric circuits, which I’ll do in my next posts.

Self-inductance and Lenz’s rule

We talked about motors and generators above. We also have transformers, like the one below. What’s going on here is that an alternating current (AC) produces a continuously varying magnetic field, which generates an alternating emf in the second coil, which produces enough power to light an electric bulb.

transformer

Now, the total emf in coil (b) is the sum of the emf’s of the separate turns of coil, so if we wind (b) with many turns, we’ll get a larger emf, so we can ‘transform’ the voltage to some other voltage. From your high-school classes, you should know how that works.

The thing I want to talk about here is something else, though. There is an induction effect in coil (a) itself. Indeed, the varying current in coil (a) produces a varying magnetic field inside itself, and the flux of this field is continually changing, so there is a self-induced emf in coil (a). The effect is called self-inductance, and so it’s the emf acting on a current itself when it is building up a magnetic field or, in general, when its field is changing in any way. It’s a most remarkable phenomenon, and so let me paraphrase Feynman as he describes it:

“When we gave ā€œthe flux ruleā€ that the emf is equal to the rate of change of the flux linkage, we didn’t specify the direction of the emf. There is a simple rule, called Lenz’s rule, for figuring out which way the emf goes: the emf tries to oppose any flux change. That is, the direction of an induced emf is always such that if a current were to flow in the direction of the emf, it would produce a flux of B that opposes the change inĀ BĀ that produces the emf.Ā In particular, if there is a changing current in a single coil (or in any wire), there is a ā€œbackā€ emf in the circuit. This emf acts on the charges flowing in the coil to oppose the change in magnetic field, and so in the direction to oppose the change in current. It tries to keep the current constant; it is opposite to the current when the current is increasing, and it is in the direction of the current when it is decreasing. A current in a self-inductance has ā€œinertia,ā€ because the inductive effects try to keep the flow constant, just as mechanical inertia tries to keep the velocity of an object constant.”

Hmm… That’s something you need to read a couple of times to fully digest it. There’s a nice demo on YouTube, showing an MIT physics video demonstrating this effect with a metal ring placed on the end of an electromagnet. You’ve probably seen it before: the electromagnet is connected to a current, and the ring flies into the air. The explanation is that the induced currents in the ring create a magnetic field opposing the change of field through it. So the ring and the coil repel just like two magnets with opposite poles. The effect is no longer there when a thin radial cut is made in the ring, because then there can be no current. The nice thing about the video is that it shows how the effect gets much more dramatic when an alternating current is applied, rather than a DC current. And it also shows what happens when you first cool the ring in liquid nitrogen. 🙂

You may also notice the sparks when the electromagnet is being switched off. Believe it or not, that’s also related to a “back emf”. Indeed, when we disconnect a large electromagnet by opening a switch, the current is supposed to immediately go to zero but, in trying to do so, it generates a large “back emf”: large enough to develop an arc across the opening contacts of the switch. The high voltage is also not good for the insulation of the coil, as it might damage it. So that’s why large electromagnets usually include some extra circuit, which allows the “back current” to discharge less dramatically. But I’ll refer you to Feynman for more details, as any illustration here would clutter the exposé.
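That ‘inertia’ idea can be made concrete with a toy integration of a coil hooked up to a battery. The text above doesn’t give us an inductance or a resistance, so L, R and the emf below are pure assumptions; the sketch just shows that the back emf −L·dI/dt keeps the current from jumping, so it relaxes gradually to its final value Ɛ/R over a time of order L/R:

```python
# Toy RL circuit: L·dI/dt = emf − R·I, integrated with Euler steps.
# L, R and emf are assumed values, not from the text.
L, R, emf = 0.1, 10.0, 5.0   # henry, ohm, volt
I, dt = 0.0, 1e-5            # current starts at zero; 10 µs time step

for _ in range(100_000):     # integrate for 1 s (≈100 time constants L/R)
    I += dt * (emf - R * I) / L

# The current has 'coasted' up to its steady value emf/R = 0.5 A
assert abs(I - emf / R) < 1e-6
```

The smaller you make R (or the bigger L), the longer the current takes to settle, which is exactly the ‘mechanical inertia’ picture in the quote above.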

Eddy currents

I like educational videos, and so I should give you a few references here, but there are so many of them that I’ll let you google a few yourself. The most spectacular demonstration of eddy currents is those that appear in a superconductor: even back in the early 1960s, when Feynman wrote his Lectures, the effect of magnetic levitation was well known. Feynman illustrates the effect with the simple diagram below: when bringing a magnet near to a perfect conductor, such as tin below 3.8 K, eddy currents will create opposing fields, so that no magnetic flux enters the superconducting material. The effect is also referred to as the Meissner effect, after the German physicist Walther Meissner, although superconductivity itself was discovered much earlier (in 1911) by a Dutch physicist in Leiden, Heike Kamerlingh Onnes, who got a Nobel Prize for it.

superconductor

Of course, we have eddy currents in less dramatic situations as well. The phenomenon of eddy currents is usually demonstrated by the braking of a sheet of metal as it swings back and forth between the poles of an electromagnet, as illustrated below (left). The illustration on the right shows how the eddy-current effect can be drastically reduced by cutting slots in the plate, so that’s like making a radial cut in our jumping ring. 🙂

eddy currentseddy currents 2

The Faraday disc

The Faraday disc is interesting, not only from a historical point of view – the illustration below is a 19th century model, one that Michael Faraday may have used himself – but also because it seems to contradict the “flux rule”: as the disc rotates through a steady magnetic field, it will produce some emf, but there’s no change in the flux. How is that possible?

Faraday_disk_generatorFaraday disk

The answer, of course, is that we are ‘cheating’ here: the material is moving, so we’re actually moving the ‘wire’, or the circuit if you want, so here we need to combine two equations:

two laws

If we do that, you’ll see it all makes sense. 🙂 Oh… That Faraday disc is referred to as a homopolar generator, and it’s quite interesting. You should check out what happened to the concept in the Wikipedia article on it. The Faraday disc was apparently used as a source for power pulses in the 1950s. The thing below could store 500 mega-joules and deliver currents up to 2 mega-ampère, i.e. 2 million amps! Fascinating, isn’t it? 🙂


Magnetic dipoles and their torque and energy

We studied the magnetic dipole in very much detail in one of my previous posts but, while we talked about an awful lot of stuff there, we actually managed to not talk about the torque on it, when it’s placed in the magnetic field of other currents, or some other magnetic field tout court. Now, that’s what drives electric motors and generators, of course, and so we should talk about it, which is what I’ll do here. Let me first remind you of the concept of torque, and then we’ll apply it to a loop of current. 🙂

The concept of torque

The concept of torque is easy to grasp intuitively, but the math involved is not so easy. Let me sum up the basics (for the detail, I’ll refer you to my posts on spin and angular momentum). In essence, for rotations in space (i.e. rotational motion), the torque is what the force is for linear motion:

  1. It’s the torque (τ) that makes an object spin faster or slower around some axis, just like the force would accelerate or decelerate that very same object when it would be moving along some curve.
  2. There’s also a similar ‘law of Newton’ for torque: you’ll remember that the force equals the time rate of change of a vector quantity referred to as (linear) momentum: F = dp/dt = d(mv)/dt = ma (the mass times the acceleration). Likewise, we have a vector quantity that is referred to as angular momentum (L), and we can write: τ (i.e. the Greek tau) = dL/dt.
  3. Finally, instead of linear velocity, we’ll have an angular velocity ω (omega), which is the time rate of change of the angle θ that defines how far the object has gone around (as opposed to the distance in linear dynamics, describing how far the object has gone along). So we have ω = dθ/dt. This is actually easy to visualize because we know that θ, expressed in radians, is actually the length of the corresponding arc on the unit circle. Hence, the equivalence with the linear distance traveled is easily ascertained.

There are many more similarities, like an angular acceleration: α = dω/dt = d²θ/dt², and we should also note that, just like the force, the torque is doing work – in its conventional definition as used in physics – as it turns an object instead of just moving it, so we can write:

ΔW = τ·Δθ

So it’s all the same-same but different once more 🙂 and so now we also need to point out some differences. The animation below does that very well, as it relates the ‘new’ concepts – i.e. torque and angular momentum – to the ‘old’ concepts – i.e. force and linear momentum. It does so using the vector cross product, which is really all you need to understand the math involved. Just look carefully at all of the vectors involved, which you can identify by their colors, i.e. red-brown (r), light-blue (τ), dark-blue (F), light-green (L), and dark-green (p).

Torque_animation

So what do we have here? We have vector quantities once again, denoted by symbols in bold-face. Having said that, I should note that τ, L and ω are ‘special’ vectors: they are referred to as axial vectors, as opposed to the polar vectors F, p and v. To put it simply: polar vectors represent something physical, and axial vectors are more like mathematical vectors, but that’s a very imprecise and, hence, essentially incorrect definition. 🙂 Axial vectors are directed along the axis of spin – so that is, strangely enough, at right angles to the direction of spin, or perpendicular to the ‘plane of the twist’ as Feynman calls it – and the direction of the axial vector is determined by a convention which is referred to as the ‘right-hand screw rule’. 🙂

Now, I know it’s not so easy to visualize vector cross products, so it may help to first think of torque (also known, for some obscure reason, as the moment of the force) as a twist on an object or a plane. Indeed, the torque’s magnitude can be defined in another way: it’s equal to the tangential component of the force, i.e. F·sin(Δθ), times the distance between the object and the axis of rotation (we’ll denote this distance by r). This quantity is also equal to the product of the magnitude of the force itself and the length of the so-called lever arm, i.e. the perpendicular distance from the axis to the line of action of the force (this lever arm length is denoted by r0). So, we can define τ without the use of the vector cross product, and in no less than three different ways actually. Indeed, the torque is equal to:

  1. The product of the tangential component of the force times the distance r: τ = r·Ft = r·F·sin(Δθ);
  2. The product of the length of the lever arm times the force: τ = r0·F;
  3. The work done per unit of distance traveled: τ = ΔW/Δθ or τ = dW/dθ in the limit.

Phew! Yeah. I know. It’s not so easy… However, I regret to have to inform you that you’ll need to go even further in your understanding of torque. More specifically, you really need to understand why and how we define the torque as a vector cross product, and so please do check out that post of mine on the fundamentals of ‘torque math’. If you don’t want to do that, then just try to remember the definition of torque as an axial vector, which is:

τ = (τyz, τzx, τxy) = (τx, τy, τz) with

τx = τyz = y·Fz – z·Fy (i.e. the torque about the x-axis, i.e. in the yz-plane),

τy = τzx = z·Fx – x·Fz (i.e. the torque about the y-axis, i.e. in the zx-plane), and

τz = τxy = x·Fy – y·Fx (i.e. the torque about the z-axis, i.e. in the xy-plane).

The angular momentum L is defined in the same way:

L = (Lyz, Lzx, Lxy) = (Lx, Ly, Lz) with

Lx = Lyz = ypz – zpy (i.e. the angular momentum about the x-axis),

Ly = Lzx = zpx – xpz (i.e. the angular momentum about the y-axis), and

Lz = Lxy = xpy – ypx (i.e. the angular momentum about the z-axis).
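These component formulas are, of course, exactly what the vector cross product computes. A minimal sketch with arbitrary made-up vectors, checking the τ components against a hand-rolled cross product:

```python
# A hand-rolled cross product, matching the component definitions above
def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,   # x-component: y·bz − z·by
            az * bx - ax * bz,   # y-component: z·bx − x·bz
            ax * by - ay * bx)   # z-component: x·by − y·bx

r = (1.0, 2.0, 3.0)      # made-up position vector
F = (0.5, -1.0, 2.0)     # made-up force vector

tau = cross(r, F)        # τ = r×F = (τyz, τzx, τxy)
assert tau == (2.0 * 2.0 - 3.0 * (-1.0),   # τx = y·Fz − z·Fy
               3.0 * 0.5 - 1.0 * 2.0,      # τy = z·Fx − x·Fz
               1.0 * (-1.0) - 2.0 * 0.5)   # τz = x·Fy − y·Fx
```

Swap in a momentum vector p for F and the same function gives you L = r×p.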

Let’s nowĀ apply the concepts to a loop of current.

The forces on a current loop

The geometry of the situation is depicted below. I know it looks messy but let me help you identify the moving parts, so to speak. šŸ™‚ We’ve got a loop carrying a current, and so we’ve got a magnetic dipole with some moment μ. From my post on the magnetic dipole, you know that μ‘s magnitude is equal to |μ| = μ = (current)Ā·(area of the loop) = IĀ·aĀ·b.

Geometry 2

Now look at the B vectors, i.e. the magnetic field.Ā Please note that these vectors represent some external magnetic field!Ā So it’sĀ notĀ like what we did in our post on the dipole: we’re not looking at the magnetic fieldĀ causedĀ by our loop, but at how it behaves in some externalĀ magnetic field. Now, because it’s kinda convenient to analyze, we assume that the direction of our external field B is the direction of the z-axis, so that’s what you see in this illustration: the B vectors all point north.Ā Now look at the force vectors, remembering that the magnetic force is equal to:

FmagneticĀ = qvƗB

So that gives the F1, F2, F3, and F4 vectors (i.e. the forces on the first, second, third and fourth leg of the loop respectively) the magnitude and direction they’re having. Now, it’s easy to see that the pairs of opposite forces, i.e. F1–F2 and F3–F4 respectively, each create a torque. The torque due to F1 and F2 tends to rotate the loop about the y-axis, so that’s a torque in the xz-plane, while the torque due to F3 and F4 is some torque about the x- and/or z-axis. As you can see, the torque is such that it tries to line up the moment vector μ with the magnetic field B. In fact, the geometry of the situation above is such that F3 and F4 have already done their job, so to speak: the moment vector μ already lies in the xz-plane, so there’s no net torque in that plane. However, that’s just because of the specifics of the situation here: in the more general situation, we’d have some torque about all three axes, and so we need to find that vector τ.

If we were talking about an electric dipole, the analysis would be very straightforward, because the electric force is just Felectric = qE, which we can also write as E = Felectric/q, so the field is just the force per unit of electric charge, and so it’s (relatively) easy to see that we’d get the following formula for the torque vector:

Ļ„ = pƗE

Of course, the p here is the electric dipole moment, not some linear momentum. [And, yes, please do try to check this formula. Sorry I can’t elaborate on it, but the objective of this blog is not to substitute for a textbook!]

Now, all of the analogies between the electric and magnetic dipole field, which we explored in the above-mentioned post of mine, would tend to make us think that we can write Ļ„ here as:

Ļ„ = μ×B

Well… Yes. It works. Now you may want to know why it works šŸ™‚ and so let me give you the following hint. Each charge in a wire feels that Fmagnetic = qvƗB force, so the total magnetic force on some volume Ī”V, which I’ll denote by Ī”F for a while, is the sum of the forces on all of the individual charges. So let’s assume we’ve got N charges per unit volume. Then we’ve got NĀ·Ī”V charges in our little volume Ī”V, so we write: Ī”F = NĀ·Ī”VĀ·qĀ·vƗB. You’re probably confused now: what’s the v here? It’s the (drift) velocity of the (free) electrons that make up our current I. Indeed, the protons don’t move. šŸ™‚ Now, NĀ·qĀ·v is just the current density j, so we get: Ī”F = jƗBĀ·Ī”V, which implies that the force per unit volume is equal to jƗB. But we need to relate it to the current in our wire, not the current density. Relax. We’re almost there. The Ī”V in a wire is just its cross-sectional area A times some length, which I’ll denote by Ī”L, so Ī”F = jƗBĀ·Ī”V becomes Ī”F = jƗBĀ·AĀ·Ī”L. Now, jĀ·A is the vector current I, so we get the simple result we need here: Ī”F = IƗBĀ·Ī”L, i.e. the magnetic force per unit length on a wire is equal to Ī”F/Ī”L = IƗB.
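As a sanity check on the Ī”F/Ī”L = IƗB result, here’s a short sketch (made-up numbers): a 2 A current along the x-axis in a 0.5 T field along the z-axis should feel 1 N per metre of wire, directed along āˆ’y, because xĢ‚Ć—zĢ‚ = āˆ’Å·.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

I_vec = (2.0, 0.0, 0.0)          # 2 A of current flowing along x
B = (0.0, 0.0, 0.5)              # a 0.5 T external field along z
f_per_length = cross(I_vec, B)   # force per unit length, in N/m
print(f_per_length)  # (0.0, -1.0, 0.0): 1 N/m along -y
```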

Let’s now get back to our magnetic dipole and calculate F1Ā andĀ F2. The length of ‘wire’ is the length of the leg of the loop, i.e. b, so we can write:

F1Ā =Ā āˆ’F2Ā = bĀ·IƗB

So the magnitude of these forces is equal to F1 = F2 = IĀ·BĀ·b. Now, the length of the moment or lever arm is, obviously, equal to aĀ·sinĪø, so the magnitude of the torque is equal to the force times the lever arm (cf. the Ļ„ = r0Ā·F formula above) and so we can write:

Ļ„ =Ā IĀ·BĀ·bĀ·aĀ·sinĪø

But I·a·b is the magnitude of the magnetic moment μ, so we get:

Ļ„ = μ·BĀ·sinĪø

Now that’s consistent with the definition of the vector cross product:

Ļ„ = μ×BĀ = |μ|Ā·|B|Ā·sinĪøĀ·n = μ·BĀ·sinĪøĀ·n
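Here’s a quick numerical check that the cross-product formula reproduces the Ļ„ = IĀ·BĀ·bĀ·aĀ·sinĪø result we derived from the forces on the legs (the loop dimensions, current and field strength are made up):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

I, a, b = 1.0, 0.2, 0.1      # current (A) and loop sides (m), made up
B0 = 0.3                     # external field strength (T), along z
theta = math.radians(45)     # angle between mu and B

mu = I * a * b                                  # |mu| = I * (area)
mu_vec = (mu * math.sin(theta), 0.0, mu * math.cos(theta))
tau = cross(mu_vec, (0.0, 0.0, B0))
tau_mag = math.sqrt(sum(c * c for c in tau))

print(tau_mag, I * B0 * b * a * math.sin(theta))  # the two agree
```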

Done! Now,Ā electric motors and generators are all aboutĀ workĀ and, therefore, we also need to briefly talk about energy here.

The energy of a magnetic dipole

Let me remind you that we could also write the torque as the work done per unit of distance traveled, i.e. as Ļ„ = Ī”W/Δθ or Ļ„ = dW/dĪø in the limit. Now, the torque tries to line up the moment with the field, and so the energy will beĀ lowestĀ when μ and B are parallel, so we need to throw in a minus sign when writing:

Ā Ļ„ = āˆ’dU/dĪø ⇔ dU =Ā āˆ’Ļ„Ā·dĪø

We should now integrate over the [0, Īø] interval toĀ find U, also using our Ļ„ = μ·BĀ·sinĪø formula. That’s easy, because we know that d(cosĪø)/dĪø =Ā āˆ’sinĪø, so that integral yields:

U = μ·B·(1 āˆ’ cosĪø) + a constant

If we choose the constant to be zero, and if we equate μ·B with 1, we get the blue graph below:

graph energy magnetic dipole 3

The μ·B in the U = μ·B·(1 āˆ’ cosĪø) formula is just a scaling factor, obviously, so it determines the minimum and maximum energy. Now, you may want to limit the relevant range of Īø to [0, π], but that’s not necessary: the energy of our loop of current does go up and down as shown in the graph. Just think about it: it all makes perfect sense!

Now, there is, of course, more energy in the loop than this U energy because energy is needed to maintain the current in the loop, and so we didn’t talk about that here. Therefore, we’ll qualify this ‘energy’ and call it the mechanicalĀ energy, which we’ll abbreviate byĀ Umech. In addition, we could, and will, choose some other constant of integration, so that amounts to choosing some other reference point for the lowest energy level. Why? Because it then allows us to write UmechĀ as a vectorĀ dotĀ product, so we get:

Umech =Ā āˆ’Ī¼Ā·BĀ·cosĪø = āˆ’Ī¼Ā·B

The graph is pretty much the same, but it now goes from āˆ’Ī¼Ā·B to +μ·B, as shown by the red graph in the illustration above.
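Numerically, it’s easy to verify that the slope of Umech(Īø) = āˆ’Ī¼Ā·BĀ·cosĪø reproduces the torque magnitude μ·BĀ·sinĪø, i.e. that our energy and torque formulas are consistent (with μ·B set to 1, as in the graphs):

```python
import math

mu_B = 1.0                                # the scaling factor mu*B
U = lambda th: -mu_B * math.cos(th)       # U_mech = -mu.B = -mu*B*cos(theta)

# Central-difference derivative of U at some angle theta:
theta = math.radians(60)
h = 1e-6
dU_dtheta = (U(theta + h) - U(theta - h)) / (2 * h)

# The slope of the energy curve equals the torque magnitude mu*B*sin(theta):
print(dU_dtheta, mu_B * math.sin(theta))
```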

Finally, you should note that the Umech = āˆ’Ī¼Ā·BĀ formula is similar to what you’ll usually see written for the energy of an electric dipole:Ā UĀ =Ā āˆ’pĀ·E.Ā So that’s all nice and good! However, you should remember that the electrostatic energy of anĀ electric dipole (i.e. two opposite charges separated by some distance d) isĀ all of the energy, as we don’t need to maintain some current to create the dipole moment!

Now, Feynman does all kinds of things with these formulas in his LecturesĀ on electromagnetism but I really think this is all you need to know about it—for the moment, at least. šŸ™‚

The magnetic field of circuits: the Law of Biot and Savart

Pre-script (dated 26 June 2020): This post got mutilated by the removal of some material by the dark force. You should be able to follow the main story line, however. If anything, the lack of illustrations might actually help you to think things through for yourself. In any case, we now have different views on these concepts as part of our realist interpretation of quantum mechanics, so we recommend you read our recent papers instead of these old blog posts.

Original post:

We studied the magnetic dipole in very much detail in one of my previous posts. While we talked about an awful lot of stuff there, we actually managed to not talk about the torque on it, when it’s placed in the magnetic field of other currents. Now, that’s what drives electric motors and generators, of course, and so we should talk about it, which is what I’ll do in my next post. Before doing so, however, I need to give you one or two extra formulas generalizing some of the results we obtained in our previous posts on magnetostatics. So that’s what I do under this heading: the magnetic field of circuits. The idea is simple: loops of current are not always nice squares or circles. Their shape might be quite irregular, indeed, like the loop below.

irregular loop

Of course, the same general formula should apply. So we can find the magnetic vector potential with the following integral:

loop of current V

Just to make sure, let me re-insert its equivalent forĀ electrostatics, so you can see they’reĀ (almost) the same:

formula 1

But we’re talking a wire here, so how can we relate the current densityĀ j and the volume elementĀ dV to that? It’s easy: the illustration below shows that we can simply write:

jĀ·dV = jĀ·SĀ·ds = IĀ·ds

volume and wire

Therefore, we can write our integral for the vector potential as:

loop of current I

Of course, you should note the subtle change from aĀ volumeĀ integral to aĀ lineĀ integral, so it’s not all that straightforward, but we’re good to go.Ā Now, in electrostatics, we actually had a fairly simple integral for the electric field itself:

formula for E no 2

To be clear, E(1) is the field of a known charge distribution, which is represented by ρ(2), at pointĀ (1). The integral isĀ almost the same as the one for Φ, but we’re talkingĀ vectorsĀ hereĀ (E andĀ e12) rather than scalarsĀ (ρ and Φ), and you should also note theĀ squareĀ in the denominator of the integral. šŸ™‚

As you might expect, there is a similar integral for B, which we find by… Well… We just need to calculate B, so that’s the curl of A:

formula for B integral

How do we do that? It’s not so easy, so let me just copy the master himself:

integral calculation

So this integral gives B directly in terms of the known currents. The geometry involved is easy but, just in case, Feynman illustrates it, quite simply, as follows:

geometry

Now, there’s one more step to take, and then we’re done. If we’re talking a circuit of small wire, then we can replaceĀ jĀ·dV by IĀ·dsĀ once more, and, hence, we get the Biot-Savart Law in its final form:

B formula 2

Note the minus sign: it appears because we reversed the order of the vector cross product, and also note we actually have threeĀ integrals here, one for each componentĀ of B, so that’sĀ just like that integral for A.
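To see the Biot–Savart Law at work, here’s a sketch that discretizes a circular loop into small current elements and sums their contributions at the centre, where the exact answer is B = μ0Ā·I/(2R). I’m using the equivalent form dB ∝ ds Ɨ r/r³, with r pointing from the current element to the field point (which absorbs the minus sign mentioned above); all numbers are made up.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

mu0 = 4e-7 * math.pi          # permeability of free space (T*m/A)
I, R, N = 1.0, 0.05, 1000     # a 1 A loop of radius 5 cm, 1000 elements
B = [0.0, 0.0, 0.0]           # field at the centre of the loop

for k in range(N):
    phi = 2 * math.pi * (k + 0.5) / N
    pos = (R * math.cos(phi), R * math.sin(phi), 0.0)   # current element
    ds_len = 2 * math.pi * R / N
    ds = (-math.sin(phi) * ds_len, math.cos(phi) * ds_len, 0.0)  # tangent
    r = (-pos[0], -pos[1], -pos[2])   # from the element to the centre
    r_mag = math.sqrt(sum(c * c for c in r))
    pre = mu0 * I / (4 * math.pi * r_mag**3)
    dB = cross(ds, r)
    for i in range(3):
        B[i] += pre * dB[i]

print(B[2], mu0 * I / (2 * R))  # numerical sum vs. exact mu0*I/(2R)
```

Every element contributes along +z (for a counterclockwise current), so the x- and y-components cancel exactly and the sum reproduces the textbook result.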

So… That’s it. šŸ™‚ I’ll conclude by two small remarks:

  1. The law is named after Jean-Baptiste Biot and FĆ©lix Savart, two incredible Frenchmen (it’s really worthwhile checking their biographies on Wikipedia), who jotted it down inĀ 1820, so that’s almost 200 years ago. Isn’t that amazing?
  2. You see we sort of got rid of the vector potential with this formula. So the question is: ā€œWhat is the advantage of the vector potential if we can findĀ BĀ directly with a vector integral? After all, AĀ also involves three integrals!ā€ I’ll let Feynman reply to that question:

Because of the cross product, the integrals for BĀ are usually more complicated. Also, since the integrals forĀ AĀ are like those of electrostatics, we may already know them. Finally, we will see that in more advanced theoretical matters (in relativity, in advanced formulations of the laws of mechanics, like the principle of least action to be discussed later, and in quantum mechanics), the vector potential plays an important role.

In fact, Feynman makes the point on the vector potential being relevant very explicit by just boldly stating two laws in quantum mechanics in which the magnetic and electric potential are used, not the magnetic or electric field. Indeed, it seems an external magnetic or electric field changes probability amplitudes. I’ll just jot down the two laws below, but leave it to you to decide whether or not you want to read the whole argument.

qm1

qm2

The key point that Feynman is making is that Φ and A are equally ‘real’ or ‘unreal’ as E and B in terms of explaining physical realities. I get the point, but I don’t find it necessary to copy the whole argument here. Perhaps it’s sufficient to just quote Feynman’s introduction to it, which says it all, in my humble opinion, that is:

“There are many changes in what concepts are important when we go from classical to quantum mechanics. We have already discussed some of them in Volume I. In particular, the force concept gradually fades away, while the concepts of energy and momentum become of paramount importance. You remember that instead of particle motions, one deals with probability amplitudes which vary in space and time. In these amplitudes there are wavelengths related to momenta, and frequencies related to energies. The momenta and energies, which determine the phases of wave functions, are therefore the important quantities in quantum mechanics. Instead of forces, we deal with the way interactions change the wavelength of the waves. The idea of a force becomes quite secondary—if it is there at all. When people talk about nuclear forces, for example, what they usually analyze and work with are the energies of interaction of two nucleons, and not the force between them. Nobody ever differentiates the energy to find out what the force looks like. In this section we want to describe how the vector and scalar potentials enter into quantum mechanics. It is, in fact, just because momentum and energy play a central role in quantum mechanics that A and Φ provide the most direct way of introducing electromagnetic effects into quantum descriptions.”

OK. That’s sufficient really.Ā Onwards!

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:

https://wordpress.com/support/copyright-and-the-dmca/