# Electric circuits (1): the circuit elements

OK. No escape. It’s part of physics. I am not going to go into the nitty-gritty of it all (because this is a blog about physics, not about engineering) but it’s good to review the basics, which are, essentially, Kirchhoff’s rules. Just for the record, Gustav Kirchhoff was a German genius who formulated these circuit laws while he was still a student, when he was like 20 years old or so. He did it as a seminar exercise 170 years ago, and then turned it into his doctoral dissertation. Makes me think of that Dire Straits song—That’s the way you do it—Them guys ain’t dumb. 🙂

So this post is, in essence, just an ‘explanation’ of Feynman’s presentation of Kirchhoff’s rules, and I am writing it basically for myself, so as to ensure I am not missing anything. To be frank, Feynman’s use of notation when working with complex numbers is confusing at times and so, yes, I’ll do some ‘re-writing’ here. The nice thing about Feynman’s presentation of electrical circuits is that he sticks to Maxwell’s Laws when describing all ideal circuit elements: he keeps using line integrals of the electric field E around closed paths (that’s what a circuit is, indeed) to describe the so-called passive circuit elements, and he also recapitulates the idea of the electromotive force when discussing the so-called active circuit element, i.e. the generator. That’s nice, because it links it all with what we’ve learned so far, i.e. the fundamentals as expressed in Maxwell’s set of equations. Having said that, I won’t make that link here in this post, because I feel it makes the whole approach rather heavy.

OK. Let’s go for it. Let’s first recall the concept of impedance.

#### The impedance concept

There are three ideal (passive) circuit elements: the resistor, the capacitor and the inductor. Real circuit elements usually combine characteristics of all of them, even if they are designed to work like ideal circuit elements. Collectively, these ideal (passive) circuit elements are referred to as impedances, because… Well… Because they have some impedance. In fact, you should note that, if we reserve the terms ending with -ance for the property of the circuit elements, and those ending on -or for the objects themselves, then we should call them impedors. However, that term does not seem to have caught on.

You already know what impedance is. I explained it before, notably in my post on the intricacies related to self- and mutual inductance. Impedance basically extends the concept of resistance, as we know it from direct current (DC) circuits, to alternating current (AC) circuits. To put it simply, when AC currents are involved – so when the flow of charge periodically reverses direction – then it’s likely that, because of the properties of the circuit, the current signal will lag the voltage signal, and so we’ll have some phase difference telling us by how much. So, resistance is just a simple real number R – it’s the ratio between (1) the voltage that is being applied across the resistor and (2) the current through it, so we write R = V/I – and it’s got a magnitude only, but impedance is a ‘number’ that has both a magnitude as well as a phase, so it’s a complex number, or a vector.

In engineering, such ‘numbers’ with a magnitude as well as a phase are referred to as phasors. A phasor represents voltages, currents and impedances as a phase vector (note the bold italics: they explain how we got the pha-sor term). It’s just a rotating vector really. So a phasor has a varying magnitude (A) and phase (φ), which is determined by (1) some maximum magnitude A0, (2) some angular frequency ω and (3) some initial phase (θ). So we can write the amplitude A as:

A = A(φ) = A0·cos(φ) = A0·cos(ωt + θ)

As usual, Wikipedia has a nice animation for it:

In case you wonder why I am using a cosine rather than a sine function, the answer is that it doesn’t matter: the sine and the cosine are the same function except for a π/2 phase difference: just rotate the animation above by 90 degrees, or think about the formula: sinφ = cos(φ−π/2). 🙂

So A = A0·cos(ωt + θ) is the amplitude. It could be the voltage, or the current, or whatever real variable. The phase vector itself is represented by a complex number, i.e. a two-dimensional number, so to speak, which we can write as all of the following:

**A** = A0·eiφ = A0·cosφ + i·A0·sinφ = A0·cos(ωt+θ) + i·A0·sin(ωt+θ)

= A0·ei(ωt+θ) = A0·eiθ·eiωt = **A0**·eiωt with **A0** = A0·eiθ

That’s just Euler’s formula, and I am afraid I have to refer you to my page on the essentials if you don’t get this. I know what you are thinking: why do we need the vector notation? Why can’t we just be happy with the A = A0·cos(ωt+θ) formula? The truthful answer is: it’s just to simplify calculations: it’s easier to work with exponentials than with cosines or sines. For example, writing ei(ωt + θ) = eiθ·eiωt is easier than writing cos(ωt + θ) = … […] Well? […] Hmm… 🙂

See! You’re stuck already. You’d have to use the cos(α+β) = cosα·cosβ − sinα·sinβ formula: you’d get the same results (just do it for the simple calculation of the impedance below) but it takes a lot more time, and it’s easier to make mistakes. Having said why complex-number notation is great, I also need to warn you. There are a few things you have to watch out for. One of these things is notation. The other is the kind of mathematical operations we can do: it’s usually alright, but we need to watch out with the i2 = –1 thing when multiplying complex numbers. However, I won’t talk about that here because it would only confuse you even more. 🙂
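To see the bookkeeping advantage in action, here is a quick numerical check (all the numbers below are made up for illustration): multiplying complex exponentials gives exactly the same result as grinding through the cos(α+β) formula.

```python
import cmath, math

# Multiplying exponentials, e^(iθ)·e^(iωt) = e^(i(ωt+θ)), does the same
# work as the trigonometric addition formula -- with less effort.
omega, theta, t, A0 = 2.0, 0.3, 0.7, 1.5   # arbitrary illustrative values

# Phasor route: one multiplication of complex exponentials.
phasor = A0 * cmath.exp(1j * theta) * cmath.exp(1j * omega * t)

# Trigonometric route: cos(ωt+θ) = cos(ωt)·cos(θ) − sin(ωt)·sin(θ).
trig = A0 * (math.cos(omega * t) * math.cos(theta)
             - math.sin(omega * t) * math.sin(theta))

print(phasor.real, trig)  # both give A0·cos(ωt+θ)
```

The real part of the phasor is the physical amplitude A0·cos(ωt+θ), exactly as the cosine route computes it.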

Just for the notation, let me note that Feynman would write **A0** as A0 with the little hat or caret symbol (∧) on top of it, so as to indicate the complex coefficient is not a variable. So he writes **A0** as Â0 = A0·eiθ. However, I find that confusing and, hence, I prefer using bold-type for any complex number, variable or not. The disadvantage is that we need to remember that the coefficient in front of the exponential is not a variable: it’s a complex number alright, but not a variable. Indeed, do look at that **A0** = A0·eiθ equality carefully: **A0** is a specific complex number that captures the initial phase θ. So it’s not the magnitude of the phasor itself, i.e. |**A**| = A0. In fact, magnitude, amplitude, phase… We’re using a lot of confusing terminology here, and so that’s why you need to ‘get’ the math.

The impedance is not a variable either. It’s some constant. Having said that, this constant will depend on the angular frequency ω. So… Well… Just think about this as you continue to read. 🙂 So the impedance is some number, just like resistance, but it’s a complex number. We’ll denote it by Z and, using Euler’s formula once again, we’ll write it as:

Z = |Z|eiθ = **V**/**I** = |V|ei(ωt + θV)/|I|ei(ωt + θI) = [|V|/|I|]·ei(θV − θI)

So, as you can see, it is, literally, some complex ratio, just like R = V/I was some real ratio: it is a complex ratio because it has a magnitude and a direction, obviously. Also please do note that, as I mentioned already, the impedance is, in general, some function of the frequency ω, as evidenced by the ωt term in the exponential, but so we’re not looking at ω as a variable: V and I are variables and, as such, they depend on ω, but so you should look at ω as some parameter. I know I should, perhaps, not be so explicit on what’s going on, but I want to make sure you understand.

So what’s going on? The illustration below (credit goes to Wikipedia, once again) explains. It’s a pretty generic view of a very simple AC circuit. So we don’t care what the impedance is: it might be an inductor or a capacitor, or a combination of both, but we don’t care: we just call it an impedance, or an impedor if you want. 🙂 The point is: if we apply an alternating current, then the current and the voltage will both go up and down, but the current signal will lag the voltage signal, and some phase factor θ tells us by how much, so θ will be the phase difference.

Now, we’re dividing one complex number by another in that Z = V/I formula above, and dividing one complex number by another is not all that straightforward, so let me re-write that formula for Z above as:

**V** = **I**Z = **I**∗|Z|eiθ

Now, while that V = IZ formula resembles the V = I·R formula, you should note the bold-face type for V and I, and the ∗ symbol I am using here for multiplication. The bold-face for V and I implies they’re vectors, or complex numbers. As for the ∗ symbol, that’s to make it clear we’re not talking a vector cross product A×B here, but a product of two complex numbers. [It’s obviously not a vector dot product either, because a vector dot product yields a real number, not some other vector.]

Now we write V and I as you’d expect us to write them:

• **V** = |V|ei(ωt + θV) = V0·ei(ωt + θV)
• **I** = |I|ei(ωt + θI) = I0·ei(ωt + θI)

θV and θI are, obviously, the so-called initial phase of the voltage and the current respectively. These ‘initial’ phases are not independent: we’re talking a phase difference really, between the voltage and the current signal, and it’s determined by the properties of the circuit. In fact, that’s the whole point here: the impedance is a property of the circuit and determines how the current signal varies as a function of the voltage signal. In fact, we’ll often choose the t = 0 point such that θV = 0, and so then we need to find θI. […] OK. Let’s get on with it. Writing out all of the factors in the **V** = **I**Z = **I**∗|Z|eiθ equation yields:

**V** = |V|ei(ωt + θV) = **I**Z = |I|ei(ωt + θI)∗|Z|eiθ = |I||Z|ei(ωt + θI + θ)

Now, this equation must hold for all values of t, so we can equate the magnitudes and phases and, hence, the following equalities must hold:

1. |V| = |I||Z| ⇔ |Z| = |V|/|I|
2. ωt + θV = ωt + θI + θ ⇔ θ = θV − θI

Done!
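The little derivation above is easy to verify numerically. The sketch below (with made-up values for V0, I0 and the initial phases) builds the two phasors and checks that the ratio Z = V/I indeed has magnitude |V|/|I| and phase θV − θI:

```python
import cmath

# Illustrative (invented) numbers: V and I phasors with their own phases.
V0, theta_V = 10.0, 0.5   # volts
I0, theta_I = 2.0, -0.8   # amperes
omega, t = 100.0, 0.013   # the common ωt part cancels in the ratio

V = V0 * cmath.exp(1j * (omega * t + theta_V))
I = I0 * cmath.exp(1j * (omega * t + theta_I))
Z = V / I

print(abs(Z))           # |V|/|I| = 10/2 = 5.0
print(cmath.phase(Z))   # θV − θI = 0.5 − (−0.8) = 1.3
```

Note that t drops out entirely, which is exactly why the impedance is a constant (for a given ω) and not a function of time.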

Of course, you’ll complain once again about those complex numbers: voltage and current are something real, aren’t they? So what’s real about these complex numbers? Well… I can just say what I said already. You’re right. I’ve used the complex notation only to simplify the calculus, so it’s only the real part of those complex-valued functions that counts.

OK. We’re done with impedance. We can now discuss the impedors, including resistors (for which we won’t have such lag or phase difference, but the concept of impedance applies nevertheless).

Before I start, however, you should think about what I’ve done above: I explained the concept of impedance, but I didn’t do much with it. The real-life problem will usually be that you get the voltage as a function of time, and then you’ll have to calculate the impedance of a circuit and, then, the current as a function of time. So I just showed the fundamental relations but, in real life, you won’t know what θ and θI could possibly be. Well… Let me correct that statement: we’ll give you formulas for θ as we discuss the various circuit elements and their impedance below, and so then you can use these formulas to calculate θI. 🙂

#### Resistors

Let’s start with what seems to be the easiest thing: a resistor. A real resistor is actually not easy to understand, because it requires us to understand the properties of real materials. Indeed, it may or may not surprise you, but the linear relation between the voltage and the current for real materials is only approximate. Also, the way resistors dissipate energy is not easy to understand. Indeed, unlike inductors and capacitors, i.e. the other two passive components of an electrical circuit, a resistor does not store but dissipates energy, as shown below.

It’s a nice animation (credit for it has to go to Wikipedia once more), as it shows how energy is being used in an electric circuit. Note that the little moving pluses are in line with the convention that a current is defined as the movement of positive charges, so we write I = dQ/dt instead of I = −dQ/dt. That also explains the direction of the field line E, which has been added to show that the charges move with the field that is being generated by the power source (which is not shown here). So, what we have here is that, on one side of the circuit, some generator or voltage source will create an emf pushing the charges, and so the animation shows how some load – i.e. the resistor in this case – will consume their energy, so they lose their push (as shown by the change in color from yellow to black). So power, i.e. energy per unit time, is supplied, and is then consumed.

To increase the current in the circuit above, you need to increase the voltage, but increasing both amounts to increasing the power that’s being consumed in the circuit. Electric power is voltage times current, so P = V·I (or v·i, if I use the small letters that are used in the two animations below). Now, Ohm’s Law (I = V/R) says that, if we’d want to double the current, we’d need to double the voltage, and so we’re quadrupling the power then: P2 = V2·I2 = (2·V1)·(2·I1) = 4·V1·I1 = 2²·P1. So we have a square law for the power, which we get by substituting V for R·I or by substituting I for V/R, so we can write the power P as P = V²/R = I²·R. This square law says exactly the same: doubling the voltage (or the current) doubles the current (or the voltage) along with it, because of Ohm’s Law, and so it quadruples the power.
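A two-line sanity check of that square law (the component values below are arbitrary):

```python
# Square law for resistive power: doubling V doubles I and quadruples P.
R = 50.0          # ohms (made-up value)
V1 = 10.0         # volts
I1 = V1 / R       # Ohm's law
P1 = V1 * I1      # power consumed

V2 = 2 * V1       # double the voltage...
I2 = V2 / R       # ...and Ohm's law doubles the current too
P2 = V2 * I2

print(P2 / P1)    # → 4.0, i.e. P = V²/R = I²·R quadruples
```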

But back to the impedance: Ohm’s Law is the Z = V/I law for resistors, but we can simplify it because we know the voltage across the resistor and the current that’s going through it are in phase. Hence, θV and θI are identical and, therefore, the θ = θV − θI in Z = |Z|eiθ is equal to zero and, hence, Z = |Z|. Now, |Z| = |V|/|I| = V0/I0. So the impedance is just some real number R = V0/I0, which we can also write as:

R = V0/I0 = (V0·ei(ωt + α))/(I0·ei(ωt + α)) = V(t)/I(t), with α = θV = θI

The equation above goes from R = V0/I0 to R = V(t)/I(t) = V/I. That’s not the same thing: the second equation says that, at any point in time, the voltage and the current will be proportional to each other, with R or its reciprocal as the proportionality constant. In any case, we have our formula for Z here:

Z = R = V/I = V0/I0

So that’s simple. Before we move on to the next, let me note that the resistance of a real resistor may depend on its temperature, so in real-life applications one will want to keep its temperature as stable as possible. That’s why real-life resistors have power ratings and recommended operating temperatures. The image below illustrates how so-called heat-sink resistors can be mounted on a heat sink with a simple spring clip so as to ensure the dissipated heat is transported away. These heat-sink resistors are rather small (10 by 15 mm only) but are rated for 35 watt – so that’s quite a lot for such a small thing – if correctly mounted.

As mentioned, the linear relation between the voltage and the current is only approximate, and the observed relation is also there only for frequencies that are not ‘too high’ because, if the frequency becomes very high, the free electrons will start radiating energy away, as they produce electromagnetic radiation. So one always needs to look at the tolerances of real-life resistors, which may be ± 5%, ± 10%, or whatever. In any case… On to the next.

#### Capacitors (condensers)

We talked at length about capacitors (aka condensers) in our post explaining capacitance or, the more widely used term, capacity: the capacity of a capacitor is the observed proportionality between (1) the voltage (V) across and (2) the charge (Q) on the capacitor, so we wrote it as:

C = Q/V

Now, it’s easy to confuse the C here with the C for coulomb, which I’ll also use in a moment, and so… Well… Just don’t! 🙂 The meaning of the symbol is usually obvious from the context.

As for the explanation of this relation, it’s quite simple: a capacitor consists of two separate conductors in space, with positive charge on one, and an equal and opposite (i.e. negative) charge on the other. Now, the logic of the superposition of fields implies that, if we double the charges, we will also double the fields, and so the work one needs to do to carry a unit charge from one conductor to the other is also doubled! So that’s why the potential difference between the conductors is proportional to the charge.

The C = Q/V formula actually measures the ability of the capacitor to store electric charge and, therefore, to store energy, so that’s why the term capacity is really quite appropriate. I’ll let you google a few illustrations like the one below, that shows how a capacitor is actually being charged in a circuit. Usually, some resistance will be there in the circuit, so as to limit the current when it’s connected to the voltage source and, therefore, as you can see, the R times C factor (R·C) determines how fast or how slow the capacitor charges and/or discharges. Also note that the current is equal to the time rate of change of the charge: I = dQ/dt.
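The charging behavior is easy to simulate. The sketch below (with assumed values for R, C and the source voltage) integrates dQ/dt = I = (V_source − Q/C)/R with a crude Euler step, and confirms that, after one time constant τ = R·C, the capacitor has climbed to about 63% (i.e. 1 − e−1) of the source voltage:

```python
import math

# Euler-method sketch of a capacitor charging through a resistor.
# All component values are invented for illustration.
R, C, V_source = 1000.0, 1e-6, 5.0   # 1 kΩ, 1 µF, 5 V
tau = R * C                          # time constant τ = R·C = 1 ms
dt, Q = tau / 10000.0, 0.0           # small time step, capacitor starts empty

t = 0.0
while t < tau:                       # integrate up to t = τ
    I = (V_source - Q / C) / R       # I = dQ/dt, limited by the resistor
    Q += I * dt
    t += dt

V_cap = Q / C
print(V_cap / V_source)              # ≈ 1 − e⁻¹ ≈ 0.632 after one time constant
```

The R·C product really does set the pace: halve R or C and the same 63% level is reached twice as fast.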

In the above-mentioned post, we also gave a few formulas for the capacity of specific types of condensers. For example, for a parallel-plate condenser, the formula was C = ε0A/d. We also mentioned its unit, which is coulomb/volt, obviously, but – in honor of Michael Faraday, who gave us Faraday’s Law, and many other interesting formulas – it’s referred to as the farad: 1 F = 1 C/V. The C here is coulomb, of course. Sorry we have to use C to denote two different things but, as I mentioned, the meaning of the symbol is usually clear from the context.

We also talked about how dielectrics actually work in that post, but we did not talk about the impedance of a capacitor, so let’s do that now. The calculation is pretty straightforward. Its interpretation somewhat less so. But… Well… Let’s go for it.

It’s the current that’s charging the condenser (sorry I keep using both terms interchangeably), and we know that the current is the time rate of change of the charge (I = dQ/dt). Now, you’ll remember that, in general, we’d write a phasor **A** as **A** = **A0**·eiωt with **A0** = A0·eiθ, so **A0** is a complex coefficient incorporating the initial phase, which we wrote as θV and θI for the voltage and for the current respectively. So we’ll represent the voltage and the current now using that notation, so we write: **V** = **V0**·eiωt and **I** = **I0**·eiωt. So let’s now use that C = Q/V by re-writing it as Q = C·V and, because C is some constant, we can write:

I = dQ/dt = d(C·V)/dt = C·dV/dt

Now, what’s dV/dt? Oh… You’ll say: V is the magnitude of **V**, so it’s equal to |**V**| = |**V0**·eiωt| = |**V0**|·|eiωt| = |**V0**| = |V0·eiθ| = |V0|·|eiθ| = V0. So… Well… What? V0 is some constant here! It’s the maximum amplitude of V, so… Well… Its time derivative is zero: dV0/dt = 0.

Yes. Indeed. We did something very wrong here! You really need to watch out with this complex-number notation, and you need to think about what you’re doing. V is not the magnitude of **V** but its (varying) amplitude. So it’s the real voltage V that varies with time: it’s equal to V0·cos(ωt + θV), which is the real part of our phasor **V**. Huh? Yes. Just hang in there for a while. I know it’s difficult and, frankly, Feynman doesn’t help us very much here. Let’s take one step back and so – you will see why I am doing this in a moment – let’s calculate the time derivative of our phasor **V**, instead of the time derivative of our real voltage V. So we calculate d**V**/dt, which is equal to:

d**V**/dt = d(**V0**·eiωt)/dt = **V0**·d(eiωt)/dt = **V0**·(iω)·eiωt = iω·**V0**·eiωt = iω·**V**

Remarkable result, isn’t it? We take the time derivative of our phasor, and the result is the phasor itself multiplied with iω. Well… Yes. It’s a general property of exponentials, but still… Remarkable indeed! We’d get the same with **I**, but we don’t need that for the moment. What we do need to do is go from our I = C·dV/dt relation, which connects the real parts of **I** and **V** one to another, to the **I** = C·d**V**/dt relation, which relates the (complex) phasors. So we write:

I = C·dV/dt ⇔ **I** = C·d**V**/dt

Can we do that? Just like that? We just replace I and V by **I** and **V**? Yes, we can. Why? Well… We know that I is the real part of **I**, so we can write **I** = Re(**I**) + Im(**I**)·i = I + Im(**I**)·i, and then we can write the right-hand side of the equation as C·d**V**/dt = Re(C·d**V**/dt) + Im(C·d**V**/dt)·i. Now, two complex numbers are equal if, and only if, their real and imaginary parts are the same, so… Well… Write it all out, if you want, using Euler’s formula, and you’ll see it all makes sense indeed.
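If you don’t feel like writing it all out with Euler’s formula, you can let a computer check the equivalence. The sketch below (all values invented) computes the current once via the phasor route (multiply by iω, then take the real part) and once via plain calculus on the real voltage V0·cos(ωt + θV); the two routes agree:

```python
import cmath, math

# Check that Re(C·iω·V_phasor) equals C·d/dt of the real voltage.
C, omega, V0, theta_V, t = 1e-6, 500.0, 3.0, 0.4, 0.002  # made-up values

# Phasor route: differentiate by multiplying with iω, then take the real part.
V_phasor = V0 * cmath.exp(1j * (omega * t + theta_V))
I_phasor_route = (C * 1j * omega * V_phasor).real

# Real-calculus route: d/dt [V0·cos(ωt+θV)] = −ω·V0·sin(ωt+θV).
I_real_route = C * (-omega * V0 * math.sin(omega * t + theta_V))

print(I_phasor_route, I_real_route)  # identical
```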

So what do we get? The **I** = C·d**V**/dt equation gives us:

**I** = C·d**V**/dt = C·(iω)·**V**

That implies that **I**/**V** = C·(iω) and, hence, we get – finally! – what we need to get:

Z = **V**/**I** = 1/(iωC)

This is a grand result and, while I am sorry I made you suffer for it, I think I did a good job here because, if you’d check Feynman on it, you’ll see he – or, more probably, his assistants – just skates over this without bothering too much about mathematical rigor. OK. All that’s left now is to interpret this ‘number’ Z = 1/(iωC). It is a purely imaginary number, and it’s a constant indeed, albeit a complex constant. It can be re-written as:

Z = 1/(iωC) = i−1/(ωC) = –i/(ωC) = (1/ωC)·e−i·π/2

[Sorry. I can’t be more explicit here. It’s just one of the wonders of complex numbers: i−1 = –i. Just check one of my posts on complex numbers for more detail.] Now, a –i factor corresponds to a rotation of minus 90 degrees, and so that gives you the true meaning of what’s usually said about a circuit with a capacitor: the voltage across the capacitor will lag the current with a phase difference equal to π/2, as shown below. Of course, as it’s the voltage driving the current, we should say it’s the current that is lagging with a phase difference of 3π/2, rather than stating it the other way around! Indeed, i−1 = –i = –1·i = i2·i = i3, so that amounts to three ‘turns’ of the phase in the counter-clockwise direction, which is the direction in which our ωt angle is ‘turning’.
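Here is a small numerical confirmation of the equivalent ways of writing the capacitor’s impedance (ω and C below are arbitrary); note the minus sign in the exponent, because –i = e^(−i·π/2):

```python
import cmath, math

# Three equivalent forms of the capacitor impedance: 1/(iωC), −i/(ωC),
# and (1/ωC)·e^(−iπ/2). Component values are made up.
omega, C = 1000.0, 1e-6

Z1 = 1 / (1j * omega * C)
Z2 = -1j / (omega * C)
Z3 = (1 / (omega * C)) * cmath.exp(-1j * math.pi / 2)

print(Z1, Z2, Z3)        # all equal: a pure negative-imaginary number
print(cmath.phase(Z1))   # −π/2: the 90-degree phase shift of the capacitor
```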

It is a remarkable result, though. The illustration above assumes the maximum amplitude of the voltage and the current are the same, so |Z| = |V|/|I| = 1, but what if they are not the same? What are the real bits then? I can hear you, indeed: “To hell with the bold-face letters: what’s V and I? What’s the real thing?”

Well… V and I are the real parts of **V** = |V|ei(ωt+θV) = V0·ei(ωt+θV) and of **I** = |I|ei(ωt+θI) = I0·ei(ωt+θV−θ) = I0·ei(ωt−θ) = I0·ei(ωt+π/2) respectively so, assuming θV = 0 (as mentioned above, that’s just a matter of choosing a convenient t = 0 point), we get:

• V = V0·cos(ωt)
• I = I0·cos(ωt + π/2)

So the π/2 phase difference is there (you need to watch out with the signs, of course: θ = −π/2, and so it’s the current that seems to lead here) but the V0/I0 ratio doesn’t have to be one, so the real voltage and current could look like something below, where the maximum amplitude of the current is only half of the maximum amplitude of the voltage.

So let’s analyze this quickly: the V0/I0 ratio is equal to |Z| = |V|/|I| = V0/I0 = 1/ωC = (1/ω)(1/C) (note that it’s not equal to V/I = V(t)/I(t), which is a ratio that doesn’t make sense because I(t) goes through zero as the current switches direction). So what? Well… It means the ratio is inversely proportional to both the frequency ω as well as the capacity C, as shown below. Think about this: if ω goes to zero, V0/I0 goes to ∞, which means that, for a given voltage, the current must go to zero. That makes sense, because we’re talking DC current when ω → 0, and the capacitor charges itself and then that’s it: no more current. Now, if C goes to zero, so we’re talking capacitors with hardly any capacity, we’ll also get tiny currents. Conversely, for large C, we’ll get huge currents, as the capacitor can take pretty much any charge you throw at it, so that makes for small V0/I0 ratios. The most interesting thing to consider is ω going to infinity, as the V0/I0 ratio is also quite small then. What happens? The capacitor doesn’t get the time to charge, and so it’s always in this state where it has large currents flowing in and out of it, as it can’t build up the voltage that would counter the electromotive force that’s being supplied by the voltage source.

OK. That’s it. Let’s discuss the last (passive) element.

#### Inductors

We’ve spoiled the party a bit with that illustration above, as it gives the phase difference for an inductor already:

Z = iωL = ωL·ei·π/2, with L the inductance of the coil

So, again assuming that θV = 0, we can calculate I as:

**I** = |I|ei(ωt+θI) = I0·ei(ωt+θV−θ) = I0·ei(ωt−θ) = I0·ei(ωt−π/2)

Of course, you’ll want to relate this, once again, to the real voltage and the real current, so let’s write the real parts of our phasors:

• V = V0·cos(ωt)
• I = I0·cos(ωt − π/2)

Just to make sure you’re not falling asleep as you’re reading, I’ve made another graph of how things could look like. So now it’s the current signal that’s lagging the voltage signal, with a phase difference equal to θ = π/2.

Also, to be fully complete, I should show you how the V0/I0 ratio now varies with L and ω. Indeed, here also we can write that |Z| = |V|/|I| = V0/I0, but so here we find that V0/I0 = ωL, so we have a simple linear proportionality here! For example, for a given voltage V0, we’ll have smaller currents as ω increases, so that’s the opposite of what happens with our ideal capacitors. I’ll let you think about that… 🙂

Now how do we get that Z = iωL formula? In my post on inductance, I explained what an inductor is: a coil of wire, basically. Its defining characteristic is that a changing current will cause a changing magnetic field in it and, hence, some change in the flux of the magnetic field. Now, Faraday’s Law tells us that that will cause some circulation of the electric field in the coil, which amounts to an induced potential difference, which is referred to as the electromotive force (emf). Now, it turns out that the induced emf is proportional to the rate of change of the current. So we’ve got another constant of proportionality here, so it’s like how we defined resistance, or capacitance. So, in many ways, the inductance is just another proportionality coefficient. If we denote it by L – the symbol is said to honor the Russian physicist Heinrich Lenz, whom you know from Lenz’s Law – then we define it as:

L = −Ɛ/(dI/dt)

The dI/dt factor is, obviously, the time rate of change of the current, and the negative sign indicates that the emf opposes the change in current, so it will tend to cause an opposing current. However, the power of our voltage source will ensure the current does effectively change, so it will counter the ‘back emf’ that’s being generated by the inductor. To be precise, the voltage across the terminals of our inductor, which we denote by V, will be equal and opposite to Ɛ, so we write:

V = −Ɛ = L·(dI/dt)

Now, this very much resembles the I = C·dV/dt relation we had for capacitors, and it’s completely analogous indeed: we just need to switch the I and V, and C and L symbols. So we write:

V = L·dI/dt ⇔ **V** = L·d**I**/dt

Now, d**I**/dt is a time derivative just like d**V**/dt was. We calculate it as:

d**I**/dt = d(**I0**·eiωt)/dt = **I0**·d(eiωt)/dt = **I0**·(iω)·eiωt = iω·**I0**·eiωt = iω·**I**

So we get what we want and have to get:

**V** = L·d**I**/dt = iωL·**I**

Now, Z = **V**/**I**, so Z = iωL indeed!
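As a sanity check, we can approximate the time derivative of the current phasor with a finite difference and verify that L·(d**I**/dt)/**I** comes out as iωL (the inductance, frequency and current below are invented):

```python
import cmath

# Finite-difference check of V = L·dI/dt for the phasor I = I0·e^(iωt):
# the derivative equals iω·I, so Z = V/I = iωL.
L, omega, I0 = 0.01, 1000.0, 2.0    # 10 mH and made-up ω, I0
t, h = 0.003, 1e-9                  # evaluation time and small step

I_t = I0 * cmath.exp(1j * omega * t)
dI_dt = (I0 * cmath.exp(1j * omega * (t + h)) - I_t) / h  # numerical derivative
Z = (L * dI_dt) / I_t

print(Z)                 # ≈ iωL = 10j: purely imaginary, positive reactance
```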

#### Summary of conclusions

Let’s summarize what we found:

1. For a resistor, we have Z(resistor) = ZR = R = V/I = V0/I0
2. For a capacitor, we have Z(capacitor) = ZC = 1/(iωC) = –i/(ωC)
3. For an inductor, we have Z(inductor) = ZL = iωL

Note that the impedance of capacitors decreases as frequency increases, while for inductors, it’s the other way around. We explained that by making you think of the currents: for a capacitor and a given voltage, we’ll have large currents for high frequencies and, hence, a small V0/I0 ratio. Can you think of what happens with an inductor? It’s not so easy, so I’ll refer you to the addendum below for some more explanation.

Let me also note that, as you can see, the impedance of (ideal) inductors and capacitors is a pure imaginary number, so that’s a complex number which has no real part. In engineering, the imaginary part of the impedance is referred to as the reactance, so engineers will say that ideal capacitors and inductors have a purely imaginary reactive impedance.

However, in real life, the impedance will usually have both a real as well as an imaginary part, so it will be some kind of mix, so to speak. The real part is referred to as the ‘resistance’ R, and the ‘imaginary’ part is referred to as the ‘reactance’ X, so we write the impedance as Z = R + i·X.
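As a preview of how such mixes arise (the component values below are invented, and the fact that impedances in series simply add is something the next post on Kirchhoff’s rules will justify), a series resistor-inductor-capacitor chain gives Z = R + i·(ωL − 1/ωC): a real part R and a reactance X:

```python
import cmath, math

# A series R-L-C chain: the resistor contributes the real part,
# the inductor and capacitor contribute the (opposing) reactances.
R, L, C = 100.0, 0.05, 2e-6          # made-up component values
omega = 2 * math.pi * 400.0          # 400 Hz, also arbitrary

Z = R + 1j * omega * L + 1 / (1j * omega * C)  # series impedances add

print(Z.real)                  # the resistance R = 100.0
print(Z.imag)                  # the reactance X = ωL − 1/(ωC)
print(abs(Z), cmath.phase(Z))  # magnitude and phase of the mix
```

At the frequency where ωL = 1/(ωC), the two reactances cancel and only R remains: that is the resonance condition.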

But here I have to end my post on circuit elements. It’s become quite long, so I’ll discuss Kirchhoff’s rules in my next post.

#### Addendum: Why is V = − Ɛ?

Inductors are not easy to understand—intuitively, that is. That’s why I spent so much time writing on them in my other post on them, to which I should be referring you here. But let me recapitulate the key points. The key idea is that we’re pumping energy into an inductor when applying a current and, as you know, the time rate of change of the energy is the power: P = dW/dt, which is voltage times current: P = dW/dt = V·I. The illustration below shows what happens when an alternating current is applied to the circuit with the inductor. So the assumption is that the current goes in one and then in the other direction, so I > 0, and then I < 0, etcetera. We’re also assuming some nice sinusoidal curve for the current here (i.e. the blue curve), and so we get what we get for U (i.e. the red curve), which is the energy that’s stored in the inductor really, as it tries to resist the changing current: the energy goes up and down between zero and some maximum amplitude that’s determined by the maximum current.

So, yes, building up current requires energy from some external source, which is used to overcome the ‘back emf’ in the inductor, and that energy is stored in the inductor itself. [If you still wonder why it’s stored in the inductor, think about the other question: where else would it be stored?] How is it stored? Look at the graph and think: it’s stored in the magnetic field that the current sets up in the coil. That explains why the energy is zero when the current is zero, and why the energy maxes out when the current maxes out. So, yes, it all makes sense! 🙂
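You can check the energy balance numerically: integrating the power P = V·I = L·I·(dI/dt) while the current ramps up from zero should give the stored energy U = ½·L·I² (the inductance and current below are arbitrary):

```python
# Accumulate P·dt = L·I·(dI/dt)·dt as the current ramps linearly to I_max.
L, I_max, steps = 0.02, 3.0, 100000  # made-up inductance and current
dt = 1.0 / steps                     # ramp duration of 1 s, in small steps

U, I_prev = 0.0, 0.0
for k in range(1, steps + 1):
    I = I_max * k / steps            # current ramps from 0 to I_max
    dI_dt = (I - I_prev) / dt
    U += L * I * dI_dt * dt          # energy delivered against the back emf
    I_prev = I

print(U, 0.5 * L * I_max**2)         # both ≈ ½·L·I_max²
```

Note that the ramp’s shape and duration drop out: only the final current matters, which is exactly what U = ½·L·I² says.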

Let me give another example. The graph below assumes the current builds up to some maximum. As it reaches its maximum, the stored energy will also max out. This example assumes direct current, so it’s a DC circuit: the current builds up, but then stabilizes at some maximum that we can find by applying Ohm’s Law to the resistance of the circuit: I = V/R. Resistance? Weren’t we talking about an ideal inductor? We were. But if there’s no other resistance in the circuit, we’d have a short-circuit, so the assumption is that we do have some resistance in the circuit and, therefore, we should also think of some energy loss to heat from the current in the resistance. If not, well… Your power source will obviously soon reach its limits. 🙂
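The build-up itself is easy to simulate. The sketch below (with assumed values for the voltage, resistance and inductance) integrates V = I·R + L·dI/dt with a crude Euler step and shows the current settling at the Ohm’s-Law maximum V/R, with time constant L/R:

```python
# Euler-method sketch of DC current building up in an R-L circuit.
# All values are invented for illustration.
V, R, L = 10.0, 5.0, 0.1
tau = L / R                          # time constant L/R = 20 ms
dt, I, t = tau / 10000.0, 0.0, 0.0

while t < 5 * tau:                   # after a few time constants...
    dI_dt = (V - I * R) / L          # the source offsets the back emf
    I += dI_dt * dt
    t += dt

print(I, V / R)                      # ...I has essentially reached V/R = 2 A
```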

So what’s going on then? We have some changing current in the coil but, obviously, some kind of inertia also: the coil itself opposes the change in current through the ‘back emf’. Now, it requires energy, or power, to overcome the inertia, so that’s the power that comes from our voltage source: it will offset the ‘back emf’, so we may effectively think of a little circuit with an inductor and a voltage source, as shown below.

But why do we write V = − Ɛ? Our voltage source can have any voltage, can’t it? Yes. Sure. But the coil will always provide an emf that’s exactly the opposite of this voltage. Think of it: we have some voltage that’s being applied across the terminals of the inductor, and so we’ll have some current. A current that’s changing. And it’s that changing current that will generate an emf equal to Ɛ = –L·(dI/dt). So don’t think of Ɛ as some constant: it’s the self-inductance coefficient L that’s constant, but I (and, hence, dI/dt) and V are variable.

The point is: we cannot have any potential difference in a perfect conductor, which is what the terminals are: any potential difference, i.e. any electric field really, would cause huge currents. In other words, the voltage V and the emf Ɛ have to cancel each other out, all of the time. If not, we’d have huge currents in the wires re-establishing the V = −Ɛ equality.

Let me use Feynman’s argument here. Perhaps that will work better. 🙂 Our ideal inductor is shown below: it’s shielded by some metal box so as to ensure it does not interact with the rest of the circuit. So we have some current I, which we assume to be an AC current, and we know some voltage is needed to cause that current, so that’s the potential difference V between the terminals.

The total circulation of E – around the whole circuit – can be written as the sum of two parts:

Now, we know circulation of E can only be caused by some changing magnetic field, which is what’s going on in the inductor:
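The two formulas that were removed here presumably looked something like this (my reconstruction, following Feynman’s argument, with Γ the closed path from terminal a through the coil to terminal b and back outside):

```latex
\oint_\Gamma \mathbf{E}\cdot d\mathbf{s}
= \underbrace{\int_a^b \mathbf{E}\cdot d\mathbf{s}}_{\text{through the coil}}
+ \underbrace{\int_b^a \mathbf{E}\cdot d\mathbf{s}}_{\text{outside, via the terminals}}
\qquad\text{and}\qquad
\oint_\Gamma \mathbf{E}\cdot d\mathbf{s} = -\,\frac{d}{dt}\int_S \mathbf{B}\cdot\mathbf{n}\,da
```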

So this change in the magnetic flux is what is causing the ‘back emf’, and so the integral on the left is, effectively, equal to Ɛ – not minus Ɛ but +Ɛ. Now, the second integral is equal to V, because that’s the voltage V between the two terminals a and b. So the whole integral is equal to 0 = Ɛ + V and, therefore, we have that:

V = − Ɛ = L·dI/dt
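To make the DC build-up of the first example concrete, here’s a small numerical sketch. The component values (12 V, 6 Ω, 2 H) are mine, not the post’s; the formula is just the standard series RL solution I(t) = (V/R)·(1 − e^(−Rt/L)).

```python
import math

# Assumed, illustrative values: a 12 V source, 6 ohm resistance, a 2 H coil.
V, R, L = 12.0, 6.0, 2.0

def current(t):
    """Current in a series RL circuit, t seconds after switching on."""
    return (V / R) * (1.0 - math.exp(-R * t / L))

def back_emf(t):
    """The 'back emf' of the coil: E = -L*dI/dt = -V*exp(-R*t/L)."""
    return -V * math.exp(-R * t / L)

# The current starts at zero and stabilizes at I = V/R = 2 A,
# while the voltage across the coil (= -back emf) dies out.
print(current(0.0), current(10.0))   # 0 A at the start, ~2 A much later
print(-back_emf(0.0))                # 12 V across the coil at t = 0
```

Note how, at every instant, the voltage across the coil’s terminals is exactly minus the back emf, which is the V = −Ɛ equality at work.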

Some content on this page was disabled on June 16, 2020 as a result of a DMCA takedown notice from The California Institute of Technology. You can learn more about the DMCA here:


# Capacitors

Pre-script (dated 26 June 2020): This post got mutilated by the removal of some material by the dark force. You should be able to follow the main story-line, however. If anything, the lack of illustrations might actually help you to think things through for yourself.

Original post:

This post briefly explores the properties of capacitors. Why? Well… Just because they’re an element in electric circuits, and so we should try to fully understand how they function so we can understand how electric circuits work. Indeed, we’ll look at some interesting DC and AC circuits in the very near future. 🙂

Feynman introduces condensers − now referred to as capacitors – right from the start, as he explains Maxwell’s fourth equation, which is written as c2∇×B = ∂E/∂t + j/ε0 in differential form, but is easier to read when integrating over a surface S bounded by a curve C:
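For reference, the integral form that this sentence announces presumably read (my reconstruction of the removed formula):

```latex
c^2 \oint_C \mathbf{B}\cdot d\mathbf{s}
= \frac{d}{dt}\int_S \mathbf{E}\cdot \mathbf{n}\,da
+ \frac{1}{\epsilon_0}\int_S \mathbf{j}\cdot \mathbf{n}\,da
```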

The ∂E/∂t term implies that changing electric fields produce magnetic effects (i.e. some circulation of B, i.e. the c2∇×B on the left-hand side). We need this term because, without it, there could be no currents in circuits that are not complete loops, like the circuit below, which is just a circuit with a capacitor made of two flat plates. The capacitor is charged by a current that flows toward one plate and away from the other. It looks messy because of the complicated drawing: we have a curve C around one of the wires defining two surfaces: S1 is a surface that just fills the loop and, hence, crosses the wire, while S2 is a bowl-shaped surface which passes between the plates of the capacitor (so it does not cross the wire).

If we look at C and S1 only, then the circulation of B around C is explained by the current through the wire, so that’s the j/ε0 term in Maxwell’s equation, which is probably how you understood magnetism in high school. However, no current goes through the S2 surface, so if we look at C and S2 only, we need the ∂E/∂t term to explain the magnetic field. Indeed, as Feynman points out, changing the location of an imaginary surface should not change a real magnetic field! 🙂

Let’s look at those charged sheets. For a single sheet of charge, we found two opposite fields of magnitude E = (1/2)·σ/ε0. Now, it is easy to see that we can superimpose the solutions for two parallel sheets with equal and opposite charge densities +σ and −σ, so we get:

E between the sheets = σ/ε0, and E outside = 0

Now, actual capacitors are not made of some infinitely thin sheet of charge: they are made of some conductor and, hence, we get that shielding effect and we’re talking surface charge densities +σ and −σ, so the actual picture is more like the one below. Having said that, the formula above is still correct: E is σ/ε0 between the plates, and zero everywhere else (except at the edge, but I’ll talk about that later).
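A quick numerical sanity check of the superposition argument; the charge density σ = 1 μC/m² is an assumed, illustrative value:

```python
# Each infinite sheet contributes a field of magnitude sigma/(2*eps0),
# pointing away from a positive sheet and toward a negative one.
eps0 = 8.854e-12   # permittivity of free space, in F/m
sigma = 1.0e-6     # assumed surface charge density, in C/m^2

E_single_sheet = sigma / (2 * eps0)

# Between the plates the two contributions add up; outside they cancel.
E_between = 2 * E_single_sheet            # = sigma/eps0
E_outside = E_single_sheet - E_single_sheet   # = 0

print(E_between)   # ~1.13e5 N/C
print(E_outside)   # 0.0
```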

We’re now ready to tackle the first property of a capacitor, and that is its capacity. In fact, the correct term is capacitance, but that sounds rather strange, doesn’t it?

#### The capacity of a capacitor

We know the two plates are both equipotentials but with different potentials, obviously! If we denote these two potentials as Φ1 and Φ2 respectively, we can define their difference Φ1 − Φ2 as the voltage between the two plates. Its unit is the same as the unit for potential which, as you may or may not remember, is potential energy per unit charge, so that’s newton·meter/coulomb. [In honor of the guy who invented the first battery, 1 N·m/C is usually referred to as one volt, which – quite annoyingly – is also abbreviated as V, even if the voltage and the volt are two very different things: the volt is the unit of voltage.]

Now, it’s easy to see that the voltage, or potential difference, is the amount of work that’s required to carry one unit charge from one plate to the other. To be precise, because the coulomb is a huge unit − it’s equivalent to the combined charge of some 6.241×1018 protons − we should say that the voltage is the work per unit charge required to carry a small charge from one plate to the other. Hence, if d is the distance between the two plates (as shown in the illustration above), we can write:
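The removed formula presumably combined V = E·d with E = σ/ε0 and Q = σ·A, so something like:

```latex
V = E\,d = \frac{\sigma\,d}{\epsilon_0} = \frac{d}{\epsilon_0 A}\,Q
```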

Q is the total charge on each plate (so it’s positive on one, and negative on the other), A is the area of each plate, and d is the separation between the two plates. What the equation says is that the voltage is proportional to the charge, and the constant of proportionality is d over ε0A. Now, the proportionality between V and Q is there for any two conductors in space (provided we have a plus charge on one, and a minus charge on the other, and so we assume there are no other charges around). Why? It’s just the logic of the superposition of fields: we double the charges, so we double the fields, and so the work done in carrying a unit charge from one point to the other is also doubled! So that’s why the potential difference between any two points is proportional to the charges.

Now, the constant of proportionality is called the capacity or capacitance of the system. In fact, it’s defined as C = Q/V. [Again, it’s a bit of a nuisance the symbol (C) is the same as the symbol that is used for the unit of charge, but don’t worry about it.] To put it simply, the capacitance is the ability of a body to store electric charge. For our parallel-plate condenser, it is equal to C =  ε0A/d. Its unit is coulomb/volt, obviously, but – again in honor of some other guy – it’s referred to as the farad: 1 F = 1 C/V.
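As a quick illustration of the C = ε0·A/d formula (the plate size and spacing are assumed values, not from the post):

```python
eps0 = 8.854e-12   # permittivity of free space, in F/m

def parallel_plate_capacitance(area, separation):
    """Capacity of an ideal parallel-plate condenser: C = eps0*A/d."""
    return eps0 * area / separation

# Two 10 cm x 10 cm plates, 1 mm apart: C is of the order of 10^-10 F,
# which shows that a farad is a huge unit.
C = parallel_plate_capacitance(0.1 * 0.1, 1e-3)
print(C)   # ~8.85e-11 F, i.e. roughly 89 picofarad
```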

To build a fairly high-capacity condenser, one could put waxed paper between sheets of aluminium and roll it up. Sealed in plastic, that made a typical radio-type condenser. The principle used today is still the same. In order to reduce the risk of breakdown (which occurs when the field strength becomes so large that it pulls electrons from the dielectric between the plates, thus causing conduction), higher capacity is generally better, so the voltage developed across the condenser will be smaller. Condensers used to be fairly big, but modern capacitors are actually as small as other computer card components. It’s all interesting stuff, but I won’t elaborate on it here, because I’d rather focus on the physics and the math behind the engineering in this blog. 🙂

Onward! Let’s move to the next thing. Before we do so, however, let me quickly give you the formula for the capacity of a charged sphere (for a parallel-plate capacitor, it’s C = ε0A/d, as noted above): C = 4πε0a. You’ll wonder: where’s the ‘other’ conductor here? Well… When this formula is used, it assumes some imaginary sphere of infinite radius with opposite charge −Q.
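The C = 4πε0·a formula for the sphere is just as easy to evaluate; the radius below (that of the Earth) is my example, not the post’s:

```python
import math

eps0 = 8.854e-12   # permittivity of free space, in F/m

def sphere_capacitance(radius):
    """Capacity of an isolated charged sphere: C = 4*pi*eps0*a."""
    return 4 * math.pi * eps0 * radius

# Even a sphere the size of the Earth has a capacity below one millifarad.
print(sphere_capacitance(6.371e6))   # ~7.1e-4 F
```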

#### The energy of a capacitor

I talked about the energy of fields in various places, most notably my posts on fields and charges. The idea behind is quite simple: if there’s some distribution of charges in space, then we always have some energy in the system, because a certain amount of work was required to bring the charges together. [For the concept of energy itself, please see my post on energy and potential.] Remember that simple formula, and the equally simple illustration:

Also remember what we wrote above: the voltage is the work per unit charge required to carry a small charge from one plate to the other. Now, when charging a conductor, what’s happening is that charge gets transferred from one plate to another indeed, and the work required to transfer a small charge dQ is, obviously, equal to V·dQ. Hence, the change in energy is dU = V·dQ. Now, because V = Q/C, we get dU = (Q/C)·dQ, and integrating this from zero charge to some final charge Q, we get:

U = (1/2)·Q2/C = (1/2)·C·V2

Note how the capacity C, or its inverse 1/C, appears as a constant of proportionality in both equations. It’s the charge, or the voltage, that’s the variable really, and the formulas say the energy is proportional to the square of the charge, or the voltage. Finally, also note that we immediately get the energy of a charged sphere by substituting 4πε0a for C (see the capacity formula in the previous section):
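A one-line consistency check of the two energy formulas (the 1 μF and 5 μC values are assumed for illustration): U = Q²/(2C) and U = C·V²/2 must agree whenever V = Q/C.

```python
C = 1e-6   # an assumed 1 microfarad capacity
Q = 5e-6   # an assumed charge of 5 microcoulomb, so V = Q/C = 5 volt

V = Q / C
U_from_charge = 0.5 * Q**2 / C
U_from_voltage = 0.5 * C * V**2

print(V, U_from_charge, U_from_voltage)   # 5 V, and 1.25e-05 J twice
```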

Now, Feynman applies this energy formula to an interesting range of practical problems, but I’ll refer you to him for that: just click on the link and check it out. 🙂

OK… Next thing. The next thing is to look at the dielectric material inside capacitors.

#### Dielectrics

You know the dielectric inside a capacitor increases its capacity. In case you wonder what I am talking about: the dielectric is the waxed paper inside of that old-fashioned radio-type condenser, or the oxide layer on the metal foil used in more recent designs. However, before analyzing dielectrics, let’s first look at what happens when putting another conductor in-between the plates of our parallel-plate condenser, as shown below.

As a matter of fact, the neutral conductor will also increase the capacitance of our condenser. Now how does that work? It’s because of the induced charges. As I explained in my post on how shielding works, the induced charges reduce the field inside of the conductor to zero. So there is no field inside the (neutral) conductor. The field in the rest of the space is still what it was: σ/ε0, so that’s the surface density of charge (σ) divided by ε0. However, the distance over which we have to integrate to get the potential difference (i.e. the voltage V) is reduced: it’s no longer d but d minus b, as there’s no work involved in moving a charge across a zero field. Hence, instead of writing V = E·d = σ·d/ε0, we now write V = σ·(d−b)/ε0. Hence, the capacity C = Q/V = ε0A/d is now equal to C = Q/V = ε0A/(d−b), which we prefer to write as:

Now, because 0 < 1 − b/d < 1, we have a factor (1 − b/d)−1 that is greater than 1. So our capacitor will have greater capacity which, remembering our C = Q/V and U = (1/2)·C·V2 formulas, implies (a) that it will store more charge at the same potential difference (i.e. voltage) and, hence, (b) that it will also store more energy at the same voltage.
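A numeric illustration of the (1 − b/d)⁻¹ factor, with assumed dimensions (1 mm gap, 0.5 mm slab):

```python
eps0 = 8.854e-12   # permittivity of free space, in F/m

def capacity_with_slab(area, d, b):
    """A neutral conducting slab of thickness b between the plates
    shortens the effective gap from d to d - b: C = eps0*A/(d - b)."""
    return eps0 * area / (d - b)

A, d, b = 0.01, 1e-3, 0.5e-3   # assumed plate area, gap and slab thickness
C_empty = eps0 * A / d
C_with_slab = capacity_with_slab(A, d, b)

# The ratio is (1 - b/d)**-1: here the capacity doubles.
print(C_with_slab / C_empty)   # 2.0
```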

Having said that, it’s easy to see that, if there’s air in-between, the risk of the capacitor breaking down will be much more significant. Hence, the use of conducting material to increase the capacitance of a capacitor is not recommended. [The question of how a breakdown actually occurs in a vacuum is an interesting one: the vacuum is expected to undergo electrical breakdown at or near the so-called Schwinger limit. If you want to know more about it, you can read the Wikipedia article on this.]

So what happens when we put a dielectric in-between? It’s illustrated below. The field is reduced but it is not zero, so the positive charge on the surface of the dielectric (look at the gaussian surface S shown by the broken lines) is less than the negative charge on the conductor: in the illustration below, it’s a 1 to 2 ratio.

But what’s happening really? What’s the reality behind it? Good question. The illustration above is just a mathematical explanation. It doesn’t tell us anything − nothing at all, really − about the physics of the situation. As Feynman writes:

“The experimental fact is that if we put a piece of insulating material like lucite or glass between the plates, we find that the capacitance is larger. That means, of course, that the voltage is lower for the same charge. But the voltage difference is the integral of the electric field across the capacitor; so we must conclude that inside the capacitor, the electric field is reduced even though the charges on the plates remain unchanged. Now how can that be? Gauss’ Law tells us that the flux of the electric field is directly related to the enclosed charge. Consider the gaussian surface S shown by broken lines. Since the electric field is reduced with the dielectric present, we conclude that the net charge inside the surface must be lower than it would be without the material. There is only one possible conclusion, and that is that there must be positive charges on the surface of the dielectric. Since the field is reduced but is not zero, we would expect this positive charge to be smaller than the negative charge on the conductor. So the phenomena can be explained if we could understand in some way that when a dielectric material is placed in an electric field there is positive charge induced on one surface and negative charge induced on the other.”

Now that’s a mathematical model indeed, based on the formula for the work involved in transferring charge from one plate to the other:

W = ∫F·ds = ∫qE·ds = q·∫E·ds = q·V

If your physics classes in high school were any good, you’ve probably seen the illustration above. Having said that, the physical model behind is more complicated, and so let’s have a look at that now.

The key to the whole analysis is the assumption that, inside a dielectric, we have lots of little atomic or molecular dipoles. Feynman presents an atomic model (shown below) but we could also think of highly polar molecules, like water, for instance. [Note, however, that, with water, we’d have a high risk of electrical breakdown once again.]

The micro-model doesn’t matter very much. The whole analysis hinges on the concept of a dipole moment per unit volume. We’ve introduced the concept of the dipole moment tout court in a previous post, but let me remind you: the dipole moment is the product of the charge q and the distance between two equal but opposite charges q+ and q−.

Now, because we’re using the d symbol for the distance between our plates, we’ll use δ for the distance between the two charges. Also note that we usually write the dipole moment as a vector so we keep track of its direction and can use it in vector equations. To make a long story short: p = q·δ and, using boldface for vectors, **p** = q·**δ**. [Please do note that **δ** is a vector going from the negative to the positive charge, otherwise you won’t understand a thing of what follows.]

As mentioned above, we can have atomic or molecular or whatever other type of dipoles, but what we’re interested in is the dipole moment per unit volume, which we write as:

P = Nqδ, with N the number of dipoles per unit volume.

For rather obvious reasons, P is also often referred to as the polarization vector. […] OK. We’re all set now. We should distinguish two possibilities:

1. P is uniform, i.e. constant, across our sheet of material.
2. P is not uniform, i.e. P varies across the dielectric.

So let’s do the first case first.

#### 1. Uniform P

This assumption gives us the mathematical model of the dielectric almost immediately. Indeed, when everything is said and done, what’s going on here is that the positive/negative charges inside the dielectric have just moved in/out over that distance δ, so at the surface, they have also moved in/out over the very same distance. So the image is effectively the image below, which is equivalent to that mathematical model of a dielectric we presented above.

Of course, no analysis is complete without formulas, so let’s see what we need and what we get.

The first thing we need is the surface density of the polarization charge induced on the surface, which was denoted by σpol, as opposed to σfree, which is the surface density on the plates of our capacitor (the subscript ‘free’ refers to the fact that the electrons are supposed to be able to move freely, which is not the case in our dielectric). Now, if A is the area of our surface slabs, and if, for each of the dipoles, we have that q charge, then the illustration above tells us that the total charge in the tiny negative surface slab will be equal to Q = A·δ·q·N. Hence, the surface charge density σpol = Q/A = A·δ·q·N/A = N·δ·q. But N·δ·q is also the definition of P! Hence, σpol = P. [Note that σpol is positive on one side, and negative on the other, of course!]

Now that we have σpol, we can use our E = σ/ε0 formula and add the fields from the dielectric and the capacitor plates respectively. Just think about that gaussian surface S, for example. The field there, taking into account that σpol and σfree have opposite signs, is equal to:
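The removed formula presumably read (reconstructing from the identity that follows in the text):

```latex
E = \frac{\sigma_{\text{free}} - \sigma_{\text{pol}}}{\epsilon_0}
```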

Using our σpol = P identity, we can also write this as E = (σfree−P)/ε0. But what’s P? Well… It’s a property of the material obviously, but then it’s also related to the electric field, of course! For larger E, we can reasonably assume that δ will be larger too (assuming some grid of atoms or molecules, we should obviously not assume a change in N or q) and, hence, dP/dE is supposed to be positive. In fact, it turns out that the relation between E and P is pretty linear, and so we can define some constant of proportionality and write P ≈ k·E. Moreover, because the E and P vectors have the same direction, we can actually write **P** ≈ k·**E**. Now, for historic reasons, we’ll write our k as k = ε0·χ, so we’re singling out our ε0 constant once more and – as usual – we add some gravitas to the analysis by using one of those Greek letters (χ is chi). So we have P = ε0·χ·E, and our equation above becomes:

Now, remembering that V = E·d and that the total charge on our capacitor is equal to Q = σfree·A, we get the formula which you may or may not know from your high school physics classes:

So… As Feynman puts it: “We have explained the observed facts. When a parallel-plate capacitor is filled with a dielectric, the capacitance is increased by the factor 1+χ.” The table below gives the values for various materials. As you can see, water’d be a great dielectric… if it weren’t so conductive. 🙂
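To see the 1+χ factor at work numerically (the plate dimensions and the susceptibility value are assumed, order-of-magnitude numbers, not taken from Feynman’s table):

```python
eps0 = 8.854e-12   # permittivity of free space, in F/m

def capacity_with_dielectric(area, d, chi):
    """Filling the gap with a dielectric of susceptibility chi
    multiplies the vacuum capacity: C = (1 + chi)*eps0*A/d."""
    return (1 + chi) * eps0 * area / d

A, d = 0.01, 1e-3   # assumed plate area and separation
chi_glass = 4.0     # an assumed, order-of-magnitude value for glass

C_vacuum = eps0 * A / d
print(round(capacity_with_dielectric(A, d, chi_glass) / C_vacuum, 9))   # 5.0
```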

As for the assumption of linearity between E and P, there’s stuff on the Web on non-linear relationships too, but you can google that yourself. 🙂 Let’s now analyze the second case.

#### 2. Non-uniform P

The analysis for non-uniform polarization is more general, and includes uniform polarization as a special case. To get going with it, Feynman uses an illustration (reproduced below) which is not so easy to interpret. Take your time to study it. The d vector connects, once again, two equal but opposite charges (it’s the same separation we denoted by δ above). The P vector points in the same direction as the d vector, obviously, but has a different magnitude, because P is equal to P = N·q·d. We also have the normal unit vector n here and an angle θ between the normal and P. Finally, the broken lines represent a tiny imaginary surface. To be precise, it represents, once again, an infinitesimal surface, or a surface element, as Feynman terms it.

Just take your time and think about it. If P lies in the surface itself, then θ = π/2 and no charge moves across it. If n and P point in the same direction, then θ = 0 and the swept volume becomes a tiny box of height d. Feynman uses the illustration above to point out that the charge moved across any surface element is proportional to the component of P that is perpendicular to the surface. Hence, remembering what the vector dot product stands for, and remembering that both σpol as well as P are expressed per unit area, we can write:

σpol = P·n = |P|·|n|·cosθ = P·cosθ

So P·n is the normal component of P, i.e. the component of P that’s perpendicular to our infinitesimal surface, and this component gives us the charge that moves across a surface element. [I know… The analysis is everything but easy here… But just hang in there and try to get through it.]
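The dot product formula in action; the polarization magnitude below is an assumed value:

```python
import math

def sigma_pol(P, theta):
    """Surface polarization charge density on a surface element whose
    normal makes an angle theta with P: sigma_pol = P*cos(theta)."""
    return P * math.cos(theta)

P = 2.0e-6   # an assumed polarization magnitude, in C/m^2

print(sigma_pol(P, 0.0))                       # P itself: n and P aligned
print(abs(sigma_pol(P, math.pi / 2)) < 1e-12)  # True: no charge crosses
```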

Now, while the illustration above, and the formula, show us how some charge moves across the infinitesimal surface to create some surface polarization, it is obvious that it should not result in a net surface charge, because there are equal and opposite contributions from the dielectric on the two sides of the surface. However, having said that, the displacements of the charges do result in some tiny volume charge density, as illustrated below.

Now, I must admit Feynman does not make it easy to intuitively understand what’s going on because the various P vectors are chosen rather randomly, but you should be able to get the idea. P is not uniform indeed. Therefore, the electric field across our dielectric causes the P vectors to have different magnitudes and/or directions. Now, as mentioned above, to get the total charge that is being displaced out of any volume bound by some surface S, we should look at the normal component of P over the surface S. To be precise, to get the total charge that is being displaced out of the volume V, we should integrate the outward normal component of P over the surface S. Of course, an equal excess charge of the opposite sign will be left behind. So, denoting the net charge inside V by ΔQpol, we write:

Now, you may or may not remember Gauss’ Theorem, which is related but not to be confused with Gauss’ Law (for more details, check one of my previous posts on vector analysis), according to which we can write:
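The removed formula presumably applied Gauss’ Theorem to P, so something like:

```latex
\oint_S \mathbf{P}\cdot \mathbf{n}\,da = \int_V \boldsymbol{\nabla}\cdot \mathbf{P}\,dV
\qquad\Longrightarrow\qquad
\Delta Q_{\text{pol}} = -\oint_S \mathbf{P}\cdot \mathbf{n}\,da = -\int_V \boldsymbol{\nabla}\cdot \mathbf{P}\,dV
```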

[I know… You’re getting tired, but we’re almost there.] We can also look at the net charge ΔQpol inside V as the integral of a volume charge density ρpol over the volume V. So we write:

Again, the integral above may not appear to be very intuitive, but it actually is: we have a formula for the density of the displaced charge, and now we integrate it over the volume, so we get the total charge displaced out of V. Again, just let it sink in for a while, and you’ll see it all makes sense. In any case, the equalities above imply that:

and, therefore, that

ρpol = −∇·P

You’ll say: so what? Well… It’s a nice result, really. Feynman summarizes it as follows:

“If there is a nonuniform polarization, its divergence gives the net density of charge appearing in the material. We emphasize that this is a perfectly real charge density; we call it “polarization charge” only to remind ourselves how it got there.”

Well… That says it all, I guess. To make sure you understand what’s written here: please note, once again, that the net charge over the whole of the dielectric is and remains zero, obviously!
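A little one-dimensional sketch of that last point (the polarization profile P(x) = sin(πx/L) is an arbitrary assumption of mine): if P vanishes at both ends of the dielectric, then ρpol = −dP/dx integrates to zero, i.e. no net charge appears.

```python
import math

N = 1000    # number of grid cells
L = 1.0     # length of the (one-dimensional) dielectric
dx = L / N

def P(x):
    """An assumed non-uniform polarization, zero at x = 0 and x = L."""
    return math.sin(math.pi * x / L)

# rho_pol = -dP/dx; integrate it over the whole dielectric using
# central differences evaluated at the cell midpoints.
total_charge = 0.0
for i in range(N):
    x = (i + 0.5) * dx
    dPdx = (P(x + 0.5 * dx) - P(x - 0.5 * dx)) / dx
    total_charge -= dPdx * dx

print(abs(total_charge) < 1e-9)   # True: zero up to floating-point noise
```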

The only question you may have is whether non-uniform polarization is actually relevant. It is. You can google it and you’re likely to find a lot of sites relating to multi-layered transducers and piezoelectric materials. 🙂 But, you’re right, that’s perhaps too advanced to talk about here.

Having said that, what I write above may look like too much nitty-gritty, but it isn’t: the formulas are pretty basic, and you need them if you want to advance in physics. In fact, Feynman uses these simple formulas in two more Lectures (Chapter 10 and 11 in Volume II, to be precise) to do some more analyses of real physics. However, as this blog is not meant to be a substitute for his Lectures, I’ll refer to him for further reading. At the very least, you have the basics here, and I hope it was interesting enough to induce you to look at the mentioned Lectures yourself. 🙂

Some content on this page was disabled on June 20, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology. You can learn more about the DMCA here: