We studied the magnetic dipole in great detail in one of my previous posts. While we talked about an awful lot of stuff there, we actually managed not to talk about the torque on it when it’s placed in the magnetic field of other currents. Now, that’s what drives electric motors and generators, of course, and so we should talk about it, which is what I’ll do in my next post. Before doing so, however, I need to give you one or two extra formulas generalizing some of the results we obtained in our previous posts on magnetostatics. So that’s what I’ll do under this heading: the magnetic field of circuits. The idea is simple: loops of current are not always nice squares or circles. Their shape might be quite irregular, like the loop below.
Of course, the same general formula should apply. So we can find the magnetic vector potential with the following integral:
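In Feynman’s notation, with point (1) the point where we want the field and point (2) ranging over the current distribution, that integral reads:

```latex
\mathbf{A}(1) = \frac{1}{4\pi\varepsilon_0 c^{2}} \int \frac{\mathbf{j}(2)}{r_{12}}\,dV_2
```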
Just to make sure, let me re-insert its equivalent for electrostatics, so you can see they’re (almost) the same:
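In the same notation:

```latex
\Phi(1) = \frac{1}{4\pi\varepsilon_0} \int \frac{\rho(2)}{r_{12}}\,dV_2
```

Indeed, apart from the 1/c² factor and the vector character of j, the two formulas are identical.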
But we’re talking about a wire here, so how can we relate the current density j and the volume element dV to it? It’s easy: the illustration below shows that we can simply write:
j·dV = j·S·ds = I·ds
Therefore, we can write our integral for the vector potential as:
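That is:

```latex
\mathbf{A}(1) = \frac{1}{4\pi\varepsilon_0 c^{2}} \oint \frac{I\,d\mathbf{s}_2}{r_{12}}
```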
Of course, you should note the subtle change from a volume integral to a line integral, so it’s not all that straightforward, but we’re good to go. Now, in electrostatics, we actually had a fairly simple integral for the electric field itself:
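With e₁₂ the unit vector pointing from (2) to (1), that integral reads:

```latex
\mathbf{E}(1) = \frac{1}{4\pi\varepsilon_0} \int \frac{\rho(2)\,\mathbf{e}_{12}}{r_{12}^{2}}\,dV_2
```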
To be clear, E(1) is the field, at point (1), of a known charge distribution, which is represented by ρ(2). The integral is almost the same as the one for Φ, but we’re talking vectors here (E and e₁₂) rather than scalars (Φ and ρ), and you should also note the square in the denominator of the integral. 🙂
As you might expect, there is a similar integral for B, which we find by… Well… We just need to calculate B, so that’s the curl of A:
How do we do that? It’s not so easy, so let me just copy the master himself:
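The key step is that the curl is taken with respect to the coordinates of point (1) only, and ∇(1/r₁₂) = −e₁₂/r₁₂², so the result should be:

```latex
\mathbf{B}(1) = \frac{1}{4\pi\varepsilon_0 c^{2}} \int \frac{\mathbf{j}(2)\times\mathbf{e}_{12}}{r_{12}^{2}}\,dV_2
```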
So this integral gives B directly in terms of the known currents. The geometry involved is easy but, just in case, Feynman illustrates it, quite simply, as follows:
Now, there’s one more step to take, and then we’re done. If we’re talking about a circuit of thin wire, then we can replace j·dV by I·ds once more and, hence, we get the Biot-Savart Law in its final form:
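In Feynman’s notation, it reads:

```latex
\mathbf{B}(1) = -\frac{1}{4\pi\varepsilon_0 c^{2}} \oint \frac{I\,\mathbf{e}_{12}\times d\mathbf{s}_2}{r_{12}^{2}}
```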
Note the minus sign: it appears because we reversed the order of the vectors in the cross product. Also note that we actually have three integrals here, one for each component of B, so that’s just like the integral for A.
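As a quick sanity check, the Biot-Savart integral is easy to evaluate numerically. The sketch below (in SI units, with μ₀/4π playing the role of Feynman’s 1/4πε₀c²; the function names are mine) sums the contributions of small segments of a circular loop and compares the result with the well-known on-axis formula:

```python
import numpy as np

# Numerical check of the Biot-Savart law for a circular current loop.
# SI units: B(r) = (mu0*I/4pi) * sum over segments of ds x (r - r') / |r - r'|^3,
# where mu0/4pi equals Feynman's 1/(4*pi*eps0*c^2).
MU0 = 4e-7 * np.pi  # magnetic constant, T*m/A

def biot_savart(loop, current, r, n=20000):
    """Field at point r of a closed loop, given as a function t in [0,1) -> point."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    pts = loop(t)                        # (n, 3) points along the wire
    ds = np.roll(pts, -1, axis=0) - pts  # segment vectors ds
    mid = pts + 0.5 * ds                 # evaluate at segment midpoints
    d = r - mid                          # separation vectors r - r'
    d3 = np.linalg.norm(d, axis=1) ** 3
    return MU0 * current / (4 * np.pi) * (np.cross(ds, d) / d3[:, None]).sum(axis=0)

# Circular loop of radius a in the xy-plane, carrying current I.
a, I = 0.05, 2.0
def circle(t):
    phi = 2 * np.pi * t
    return np.stack([a * np.cos(phi), a * np.sin(phi), np.zeros_like(phi)], axis=1)

# On the axis, the exact field is B_z = mu0*I*a^2 / (2*(a^2 + z^2)^(3/2)).
z = 0.03
B = biot_savart(circle, I, np.array([0.0, 0.0, z]))
exact = MU0 * I * a**2 / (2 * (a**2 + z**2) ** 1.5)
print(B[2], exact)  # the two values should agree closely
```

By symmetry, the x- and y-components of the numerical result should come out as (nearly) zero, and the z-component should match the analytical formula to high accuracy.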
So… That’s it. 🙂 I’ll conclude with two small remarks:
- The law is named after Jean-Baptiste Biot and Félix Savart, two remarkable Frenchmen (their biographies on Wikipedia are really worth checking out), who jotted it down in 1820, almost 200 years ago. Isn’t that amazing?
- You see we sort of got rid of the vector potential with this formula. So the question is: “What is the advantage of the vector potential if we can find B directly with a vector integral? After all, A also involves three integrals!” I’ll let Feynman reply to that question:
Because of the cross product, the integrals for B are usually more complicated. Also, since the integrals for A are like those of electrostatics, we may already know them. Finally, we will see that in more advanced theoretical matters (in relativity, in advanced formulations of the laws of mechanics, like the principle of least action to be discussed later, and in quantum mechanics), the vector potential plays an important role.
In fact, Feynman makes the point about the relevance of the vector potential very explicit by boldly stating two laws of quantum mechanics in which the magnetic and electric potentials appear, rather than the magnetic or electric fields. Indeed, it seems an external magnetic or electric field changes probability amplitudes. I’ll just jot down the two laws below, but leave it to you to decide whether or not you want to read the whole argument.
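In Feynman’s formulation, the two laws state how the phase of a probability amplitude changes along a trajectory:

```latex
\text{magnetic change in phase} = \frac{q}{\hbar}\int_{\text{trajectory}} \mathbf{A}\cdot d\mathbf{s}
\qquad\qquad
\text{electric change in phase} = -\frac{q}{\hbar}\int \Phi\,dt
```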
The key point Feynman is making is that Φ and A are just as ‘real’, or ‘unreal’, as E and B when it comes to explaining physical reality. I get the point, but I don’t find it necessary to copy the whole argument here. Perhaps it’s sufficient to just quote Feynman’s introduction to it, which, in my humble opinion, says it all:
“There are many changes in what concepts are important when we go from classical to quantum mechanics. We have already discussed some of them in Volume I. In particular, the force concept gradually fades away, while the concepts of energy and momentum become of paramount importance. You remember that instead of particle motions, one deals with probability amplitudes which vary in space and time. In these amplitudes there are wavelengths related to momenta, and frequencies related to energies. The momenta and energies, which determine the phases of wave functions, are therefore the important quantities in quantum mechanics. Instead of forces, we deal with the way interactions change the wavelength of the waves. The idea of a force becomes quite secondary—if it is there at all. When people talk about nuclear forces, for example, what they usually analyze and work with are the energies of interaction of two nucleons, and not the force between them. Nobody ever differentiates the energy to find out what the force looks like. In this section we want to describe how the vector and scalar potentials enter into quantum mechanics. It is, in fact, just because momentum and energy play a central role in quantum mechanics that A and Φ provide the most direct way of introducing electromagnetic effects into quantum descriptions.”
OK. That’s sufficient really. Onwards!