As part of his presentation of indirect methods for finding the field, Feynman presents an interesting argument on the electrostatic field of a grid. It’s just another *indirect* method to arrive at meaningful conclusions on what a field is supposed to look like, but it’s quite remarkable, and that’s why I am expanding it here. Feynman’s presentation is *extremely* succinct indeed and, hence, I hope the elaboration below will help *you* to understand it somewhat quicker than I did. :-)

The grid is shown below: it’s just a uniformly spaced array of parallel wires in a plane. We are looking at the field above the plane of wires here, and the dotted lines represent equipotential *surfaces* above the grid.

As you can see, for larger distances above the plane, we see a constant electric field, just as though the charge were uniformly spread over a *sheet* of charge, rather than over a grid. However, as we approach the grid, the field begins to deviate from the uniform field.

Let’s analyze it by assuming the wires lie in the xy-plane, running parallel to the y-axis. The distance between the wires is measured along the x-axis, and the distance to the grid is measured along the z-axis, as shown in the illustration above. We assume the wires are infinitely long and, hence, the electric field does *not* depend on y. So the component of **E** in the y-direction is 0, i.e. E_{y} = −∂Φ/∂y = 0. Therefore, ∂^{2}Φ/∂y^{2} = 0 and our Poisson equation *above the wires* (where there are no charges) reduces to the two-dimensional *Laplace* equation ∂^{2}Φ/∂x^{2} + ∂^{2}Φ/∂z^{2} = 0. What’s next?
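Just to make that equation feel a bit less abstract: below is a minimal numerical sketch (my own, not Feynman’s) that solves ∂^{2}Φ/∂x^{2} + ∂^{2}Φ/∂z^{2} = 0 by simple relaxation. The boundary values are made up for illustration only: the bottom edge is held at a fixed potential as a crude stand-in for the charged wires.

```python
# Minimal relaxation (Gauss-Seidel) sketch for d2Phi/dx2 + d2Phi/dz2 = 0.
# Boundary values are invented for illustration; they are not the grid's real ones.
N = 20
phi = [[0.0] * N for _ in range(N)]
for i in range(N):
    phi[0][i] = 1.0  # bottom edge held at potential 1 (a stand-in for the wires)

for _ in range(2000):  # repeatedly replace each interior point by the neighbour average
    for r in range(1, N - 1):
        for c in range(1, N - 1):
            phi[r][c] = 0.25 * (phi[r-1][c] + phi[r+1][c] + phi[r][c-1] + phi[r][c+1])

# The potential decays as we move away from the 'charged' edge:
print(round(phi[1][N // 2], 3), round(phi[N // 2][N // 2], 3))
```

Replacing each interior point by the average of its four neighbours is exactly what a discretized Laplace equation demands, so the iteration converges to an (approximate) solution.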

Let’s look at the field of *two *positive wires first. The plot below comes from the Wolfram Demonstrations Project. I recommend you click the link and play with it: you can vary the charges and the distance, and the tool will redraw the equipotentials and the field lines accordingly. It will give you a better feel for the (a)symmetries involved. The equipotential lines are the gray contours: they are cross-sections of equipotential *surfaces*. The red curves are the field lines, which are always orthogonal to the equipotentials.

The point at the center is really interesting: the straight horizontal and vertical red lines through it are really *limits*. Feynman’s illustration below shows that the point represents an *unstable* equilibrium: the hollow tube prevents the charge from going sideways. If the tube weren’t there, the charge *would* go sideways, of course! So it’s some kind of *saddle point*. *Onward!*

Look at the illustration below and try to imagine what the field looks like by thinking about the value of the potential as you move along one of the two blue lines below: the potential goes down as we move to the right, reaches a minimum in the middle, and then goes up again. Also think about the difference between the lighter and darker blue line: going along the light-blue line, we start at a *lower* potential, and its minimum will also be *lower* than that of the dark-blue line.

So you can start drawing curves. However, I have to warn you: the graphs are *not* so simple. Look at the detail below. The potential along the blue line goes slightly *up* before it decreases, so the graph of the potential may resemble the green curve on the right of the image. I did an *actual* calculation here. :-) If there are only two charges, the formula for the potential is quite simple: Φ = (1/4πε_{0})·(q_{1}/r_{1}) + (1/4πε_{0})·(q_{2}/r_{2}). Briefly forgetting about the (1/4πε_{0}) factor and equating q_{1} and q_{2} to +1, we get Φ = 1/r_{1} + 1/r_{2} = (r_{1} + r_{2})/r_{1}r_{2}. That looks like an easy function, and it is. You should think of it as the equivalent of the 1/r formula, but written as 1/r = r/r^{2}, and with a factor 2 in front because we have two charges. :-)

However, we need to express it as a function of x, keeping z (i.e. the ‘vertical’ coordinate) constant. That’s what I did to get the graphs below. It’s easy to see that 1/r_{1} = (x^{2} + z^{2})^{−1/2}, while 1/r_{2} = [(a−x)^{2} + z^{2}]^{−1/2}. Assuming a = 2 and z = 0.8, the contribution from the first charge is given by the blue curve, the contribution of the second charge is represented by the red curve, and the green curve adds both and, hence, represents the *potential* generated by both charges, i.e. q_{1} at x = 0 and q_{2} at x = a. OK… *Onward!*
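If you want to reproduce these curves yourself, the calculation is a one-liner. A small sketch (same conventions as in the text: two unit charges, the 1/4πε_{0} factor dropped, a = 2 and z = 0.8):

```python
import math

def potential(x, a=2.0, z=0.8):
    """Potential of two unit charges at x = 0 and x = a, evaluated at height z.
    The 1/4*pi*eps0 factor is dropped, as in the text."""
    r1 = math.sqrt(x**2 + z**2)          # distance to the charge at x = 0
    r2 = math.sqrt((a - x)**2 + z**2)    # distance to the charge at x = a
    return 1/r1 + 1/r2

for x in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"x = {x:4.1f}  Phi = {potential(x):.4f}")
```

Evaluating this over a fine range of x values gives the green curve; the two individual 1/r terms give the blue and red curves.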

The point to note is that we have an *extremely* simple situation here – two charges only, or two wires, I should say – but a potential function that is *surely not* some simple sinusoidal function. To drive the point home, I plotted a few more curves below, keeping *a* = 2, but equating *z* with 0.4, 0.7 and 1.7 respectively. The *z* = 1.7 curve shows that, at larger distances, the potential actually increases slightly as we move from left to right along the *z* = 1.7 line. Note the remarkable symmetry of the curves and the equipotential lines: there should be some obvious mathematical explanation for that but, unfortunately, not obvious enough for me to find it, so please let me know if *you* see it! :-)
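As for that symmetry: a quick check suggests it is simply the mirror symmetry of the set-up itself. With equal charges at x = 0 and x = a, swapping x for a − x just swaps r_{1} and r_{2}, which leaves Φ = 1/r_{1} + 1/r_{2} unchanged, so every curve must be symmetric about the midpoint x = a/2. A sketch:

```python
import math

def phi(x, z, a=2.0):
    # Two equal unit charges at x = 0 and x = a (1/4*pi*eps0 dropped, as in the text)
    return 1/math.sqrt(x**2 + z**2) + 1/math.sqrt((a - x)**2 + z**2)

a = 2.0
for z in (0.4, 0.7, 1.7):
    for x in (0.1, 0.6, 0.9):
        # Swapping x for a - x just swaps r1 and r2, so Phi is unchanged:
        assert abs(phi(x, z, a) - phi(a - x, z, a)) < 1e-12
print("Phi(x, z) = Phi(a - x, z) at all tested points")
```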

OK. Let’s get back to our grid. For your convenience, I copied it once more below.

Feynman’s approach to calculating the variations is quite original. He also duly notes that the potential function is surely *not* some simple sinusoidal function. However, he also notes that, when everything is said and done, it *is* some periodic quantity, in one way or another, and, therefore, we should be able to do a *Fourier analysis* and express it as a *sum* of sinusoidal waves. To be precise, we should be able to write Φ(x, z) as a sum of *harmonics*.

[…] I know. […] Now you say: “Oh sh**!” And you’ll just switch off. That’s OK, but why don’t you give it a try? I promise not to be *too* lengthy. :-)

Before we get too much into the weeds, let’s briefly recall how it works for our classical guitar string. That post explained how the wavelengths of the harmonics of a string depend on its length. If we denote the various harmonics by their harmonic number n = 1, 2, 3 etcetera, and the length of the string by L, we have λ_{1} = 2L = (1/1)·2L, λ_{2} = L = (1/2)·2L, λ_{3} = (1/3)·2L,… λ_{n} = (1/n)·2L. In short, the harmonics – i.e. the *components* of our waveform – look like this:

etcetera (1/8, 1/9,…,1/n,… 1/∞)

Beautiful, isn’t it? As I explained in that post, it’s so beautiful it triggered a misplaced fascination with harmonic ratios. It was misplaced because the Pythagorean theory was a bit *too* simple to be true. However, their intuition was right, and they set the stage for guys like Copernicus, Fourier and Feynman, so that was good! :-)
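Those harmonic wavelengths – and the wavenumbers we are about to switch to – are easy to tabulate. A small sketch (the string length L = 0.65 m is just an arbitrary example value, roughly a classical-guitar scale):

```python
import math

L = 0.65  # string length in metres (an arbitrary example value)
for n in range(1, 6):
    wavelength = 2 * L / n           # lambda_n = (1/n)*2L
    k_n = 2 * math.pi / wavelength   # k_n = 2*pi/lambda_n = n*(2*pi/2L) = n*k_1
    print(f"n = {n}:  lambda = {wavelength:.4f} m,  k = {k_n:.2f} rad/m")
```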

Now, as you know, we’ll usually substitute wavelength and frequency by *wavenumber* and *angular frequency* so as to convert everything to something expressed in *radians*, which we can then use as the argument in the sine and/or cosine component waves. [Yes, the Pythagoreans once again! :-)] The wavenumber k is equal to k = 2π/λ, and the angular frequency is ω = 2π·*f* = 2π/T (in case you doubt, you can quickly check that the speed of a wave *c* is equal to the product of the wavelength and its frequency by substituting: *c* = λ·*f* = (2π/k)·(ω/2π) = ω/k, which gives you the *phase* velocity *v*_{p} = *c*). To make a long story short, we wrote k = k_{1} = 2π·1/(2L), k_{2} = 2π·2/(2L) = 2k, k_{3} = 2π·3/(2L) = 3k,… k_{n} = 2π·n/(2L) = n·k to arrive at the grand result, and that’s our wave F(x) expressed as the sum of an infinite number of simple sinusoids:

F(x) = a_{1}cos(kx) + a_{2}cos(2kx) + a_{3}cos(3kx) + … + a_{n}cos(nkx) + … = ∑ a_{n}cos(nkx)

That’s easy enough. The problem is to find those amplitudes a_{1}, a_{2}, a_{3},… of course, but the great French mathematician who gave us the Fourier series also gave us the formulas for that, so we should be fine! Can we use them here? Should we use them here? Let’s see…
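Just to show that those coefficient formulas do what they promise, here’s a small numeric sketch of my own (the test waveform is invented, it is not the grid’s potential): build F(x) from known amplitudes a_{2} = 1 and a_{3} = 0.5, then recover them with the coefficient integral a_{n} = (1/L)·∫ F(x)·cos(n·k·x)·dx taken over one full period 2L.

```python
import math

L = 1.0                    # half-period: F has period 2L, like the guitar-string case
k = 2 * math.pi / (2 * L)  # fundamental wavenumber

def F(x):
    # Test waveform with known harmonics: a_2 = 1.0 and a_3 = 0.5
    return math.cos(2 * k * x) + 0.5 * math.cos(3 * k * x)

def a(n, samples=20000):
    # Fourier coefficient a_n = (1/L) * integral of F(x)*cos(n*k*x) over one period,
    # approximated with the midpoint rule
    dx = 2 * L / samples
    return sum(F((i + 0.5) * dx) * math.cos(n * k * (i + 0.5) * dx)
               for i in range(samples)) * dx / L

for n in range(1, 5):
    print(f"a_{n} = {a(n):.4f}")
```

The integral picks out one harmonic at a time because the cosines are orthogonal over a full period: a(2) comes back as 1, a(3) as 0.5, and the others as 0.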

The *a* in the analysis, i.e. the spacing of the wires, is the *physical* quantity that corresponds to the length of our guitar string in our musical sound problem. In fact, a corresponds to 2L, because guitar strings are fixed at both ends and, hence, the two ends have to be nodes and, therefore, the wavelength of our first harmonic is *twice* the length of the string. *Huh?* Well… Something like that. As you can see from the illustration of the grid, *a*, in contrast to L, does correspond to *one* full wavelength of our periodic function. So we write:

Φ(x) = ∑ a_{n}cos(n·k·x) = ∑ a_{n}cos(2π·n·x/a) (n = 1, 2, 3,…)

Now, that’s the formula for Φ(x) assuming we’re fixing *z*, so it’s Φ(x) at some fixed distance from the grid. Let’s think about those amplitudes a_{n} now. They should not depend on x, because the harmonics themselves (i.e. the cos(2π·n·x/a) components) are all that varies with x. So they have to be some function of *n* and – *most importantly* – **some function of z** also. So we denote them by F_{n}(z) and re-write the equation above as:

Φ(x, z) = ∑ F_{n}(z)·cos(2π·n·x/a) (n = 1, 2, 3,…)

Now, the rest of Feynman’s analysis speaks for itself, so I’ll just shamelessly copy it:

What did he find here? What is he saying, *really*? :-) First note that the derivation above has been done for one term in the Fourier sum only, so we’re talking about a specific harmonic *n* here. The amplitude of *that* harmonic is a function of z which – let me remind you – is the distance from the grid. To be precise, the function is F_{n}(z) = A_{n}·*e*^{−z/z_{0}}. [In case you wonder how Feynman goes from equation (7.43) to (7.44), he’s just solving a second-order linear differential equation here. :-)]
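You can check the result without redoing the calculus: plug one harmonic Φ_{n}(x, z) = *e*^{−z/z_{0}}·cos(2π·n·x/a), with z_{0} = a/2πn, into the Laplace equation numerically. A sketch of my own (the grid spacing a = 1 and the sample points are arbitrary choices):

```python
import math

a_spacing = 1.0
n = 1
z0 = a_spacing / (2 * math.pi * n)   # characteristic distance z_0 = a/(2*pi*n)

def phi_n(x, z):
    # One harmonic: F_n(z)*cos(2*pi*n*x/a) with F_n(z) = exp(-z/z0)
    return math.exp(-z / z0) * math.cos(2 * math.pi * n * x / a_spacing)

def laplacian(f, x, z, h=1e-4):
    # Central-difference estimate of d2f/dx2 + d2f/dz2
    fxx = (f(x + h, z) - 2 * f(x, z) + f(x - h, z)) / h**2
    fzz = (f(x, z + h) - 2 * f(x, z) + f(x, z - h)) / h**2
    return fxx + fzz

print(laplacian(phi_n, 0.3, 0.5))  # numerically ~0: the harmonic satisfies Laplace's equation
```

The x-derivative pulls down a factor −(2πn/a)^{2} and the z-derivative a factor +(1/z_{0})^{2} = +(2πn/a)^{2}, so the two terms cancel exactly, and that is precisely why z_{0} has to be a/2πn.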

Now, you’ve seen the graph of that function a *zillion* times before: it starts at A_{n} for z = 0 and goes to zero as z goes to infinity, as shown below. :-)

Now, that’s the case for *all *F_{n}(z) coefficients of course. As Feynman writes:

“We have found that if there is a Fourier component of the field of harmonic *n*, *that* component will decrease exponentially with a characteristic distance z_{0 }= a/2π*n*. For the first harmonic (*n*=1), the amplitude falls by the factor *e*^{−2π }(i.e. a large decrease) each time we increase *z* by one grid spacing *a*. The other harmonics fall off even more rapidly as we move away from the grid. We see that if we are only a few times the distance *a* away from the grid, the field is very nearly uniform, i.e., the oscillating terms are small. There would, of course, always remain the “zero harmonic” field, i.e. Φ_{0 }= −E_{0}·*z*, to give the uniform field at large *z. *Of course, for the complete solution, the sum needs to be made, and the coefficients A_{n} would need to be adjusted so that the total sum, when differentiated, gives an electric field that would fit the charge density of the grid wires.”
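Those numbers are worth seeing in the flesh: the sketch below tabulates z_{0} = a/2πn and the decay factor *e*^{−2πn} per grid spacing for the first few harmonics (taking a = 1 as the unit of length). Already for n = 1, one grid spacing kills all but about 0.2% of the ripple.

```python
import math

a = 1.0  # grid spacing, taken as the unit of length
for n in (1, 2, 3):
    z0 = a / (2 * math.pi * n)
    # Factor by which harmonic n shrinks for each extra grid spacing: e^(-a/z0) = e^(-2*pi*n)
    factor = math.exp(-a / z0)
    print(f"n = {n}:  z0 = {z0:.4f}*a,  decay per spacing = {factor:.2e}")
```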

*Phew! *Quite something, isn’t it? But that’s it really, and it’s actually *simpler *than the ‘direct’ calculations of the field that I *googled*. Those calculations involve complicated series and logs and what have you, to arrive at the same result: **the field away from a grid of charged wires is very nearly uniform**.

Let me conclude this post by noting Feynman’s explanation of *shielding* by a screen. It’s quite terse:

“The method we have just developed can be used to explain why electrostatic shielding by means of a screen is often just as good as with a solid metal sheet. Except within a distance from the screen a few times the spacing of the screen wires, the fields inside a closed screen are zero. We see why copper screen—lighter and cheaper than copper sheet—is often used to shield sensitive electrical equipment from external disturbing fields.”

Hmm… So how does *that *work? The logic should be similar to the logic I explained when discussing shielding in one of my previous posts. Have a look—if only because it’s a lot easier to understand than the rather convoluted business I presented above. :-) But then I guess it’s all par for the course, isn’t it? :-)