# Fields and charges (II)

Pre-script (dated 26 June 2020): This post has become less relevant (even irrelevant, perhaps) because my views on all things quantum-mechanical have evolved significantly as a result of my progression towards a more complete realist (classical) interpretation of quantum physics. In addition, some of the material was removed by a dark force (that also created problems with the layout, I see now). In any case, we recommend you read our recent papers. I keep blog posts like these mainly because I want to keep track of where I came from. I might review them one day, but I currently don’t have the time or energy for it. 🙂

Original post:

My previous post was, perhaps, too full of formulas, without offering much reflection. Let me try to correct that here by tying up a few loose ends. The first loose end is about units. Indeed, I haven’t been very clear about that and so let me be somewhat more precise on that now.

Note: In case you’re not interested in units, you can skip the first part of this post. However, please do look at the section on the electric constant ε0 and, most importantly, the section on natural units (especially Planck units), as I will touch upon the topic of gauge coupling parameters there and, hence, on quantum mechanics. Also, the third and last part, on the theoretical contradictions inherent in the idea of point charges, may be of interest to you.

The field energy integrals

When we wrote down that u = ε0E2/2 formula for the energy density of an electric field (see my previous post on fields and charges for more details), we noted that the 1/2 factor was there to avoid double-counting. Indeed, those volume integrals we use to calculate the energy over all space (i.e. U = ∫(u)dV) count the energy that’s associated with a pair of charges (or, to be precise, charge elements) twice and, hence, they have a 1/2 factor in front. Indeed, as Feynman notes, there is no convenient way, unfortunately, of writing an integral that keeps track of the pairs so that each pair is counted just once. In fact, I’ll have to come back to that assumption of there being ‘pairs’ of charges later, as that’s another loose end in the theory.

Now, we also said that the ε0 factor in the second integral (i.e. the one with the vector dot product E•E = |E||E|cos(0) = E2) is there to make the units come out alright. Now, when I say that, what does it mean really? I’ll explain. Let me first make a few obvious remarks:

1. Densities are always measured per unit volume, so that’s the cubic meter (m3). That’s, obviously, an astronomical unit at the atomic or molecular scale.
2. For historical reasons, the conventional unit of charge is not the so-called elementary charge +e (i.e. the charge of a proton), but the coulomb. Hence, the charge density ρ is expressed in coulomb per cubic meter (C/m3). The coulomb is a rather astronomical unit too, at the atomic or molecular scale at least: 1 e ≈ 1.6022×10−19 C. [I am rounding here to four digits after the decimal point.]
3. Energy is in joule (J) and that’s, once again, a rather astronomical unit at the lower end of the scales. Indeed, theoretical physicists prefer to use the electronvolt (eV), which is the energy gained (or lost) when an electron (so that’s a charge of −e, i.e. minus e) moves across a potential difference of one volt. But we’ll stick to the joule for now, not the eV, because the joule is the SI unit that’s used when defining most electrical units, such as the ampere, the watt and… Yes. The volt. Let’s start with that one.

The volt

The volt unit (V) measures both potential (energy) as well as potential difference (in both cases, we mean electric potential only, of course). Now, from all that you’ve read so far, it should be obvious that potential (energy) can only be measured with respect to some reference point. In physics, the reference point is infinity, which is so far away from all charges that there is no influence there. Hence, any charge we’d bring there (i.e. at infinity) will just stay where it is and not be attracted or repelled by anything. We say the potential there is zero: Φ(∞) = 0. The choice of that reference point allows us, then, to define positive or negative potential: the potential near positive charges will be positive and, vice versa, the potential near negative charges will be negative. Likewise, the potential difference between the positive and negative terminal of a battery will be positive.

So you should just note that we measure both potential as well as potential difference in volt and, hence, let’s now answer the question of what a volt really is. The answer is quite straightforward: the potential at some point r = (x, y, z) measures the work done when bringing one unit charge (i.e. one coulomb) from infinity to that point. Hence, it’s only natural that we define one volt as one joule per unit charge:

1 volt = 1 joule/coulomb (1 V = 1 J/C).

Also note the following:

1. One joule is the energy transferred (or work done) when applying a force of one newton over a distance of one meter, so one volt can also be measured in newton·meter per coulomb: 1 V = 1 J/C = 1 N·m/C.
2. One joule can also be written as 1 J = 1 V·C.

It’s quite easy to see why that energy = volt·coulomb product makes sense: higher voltage will be associated with higher energy, and the same goes for higher charge. Indeed, the so-called ‘static’ on our body is usually associated with potential differences of thousands of volts (I am not kidding), but the charges involved are extremely small, because the ability of our body to store electric charge (i.e. its capacitance, aka capacity) is minimal. Hence, the shock involved in the discharge is usually quite small: it is measured in milli-joules (mJ), indeed.

The remark on ‘static’ brings me to another unit which I should mention in passing: the farad. It measures the capacitance (formerly known as the capacity) of a capacitor (formerly known as a condenser). A condenser consists, quite simply, of two separated conductors: it’s usually illustrated as consisting of two plates or of thin foils (e.g. aluminum foil) separated by an insulating film (e.g. waxed paper), but one can also discuss the capacity of a single body, like our human body, or a charged sphere. In both cases, however, the idea is the same: we have a ‘positive’ charge on one side (+q), and a ‘negative’ charge on the other (−q). In case of a single object, we imagine the ‘other’ charge to be some other large object (the Earth, for instance, but it can also be a car or whatever object that could potentially absorb the charge on our body) or, in case of the charged sphere, we could imagine some other sphere of much larger radius. The farad will then measure the capacity of one or both conductors to store charge.

Now, you may think we don’t need another unit here if that’s the definition: we could just express the capacity of a condenser in terms of its maximum ‘load’, couldn’t we? So that’s so many coulomb before the thing breaks down, when the waxed paper fails to separate the two opposite charges on the aluminium foil, for example. No. It’s not like that. It’s true we cannot continue to increase the charge without consequences. However, what we want to measure with the farad is another relationship. Because of the opposite charges on both sides, there will be a potential difference, i.e. a voltage difference. Indeed, a capacitor is like a little battery in many ways: it will have two terminals. Now, it is fairly easy to show that the potential difference (i.e. the voltage) between the two plates will be proportional to the charge. Think of it as follows: if we double the charges, we’re doubling the fields, right? So then we need to do twice the amount of work to carry the unit charge (against the field) from one plate to the other. Now, because the distance is the same, that means the potential difference must be twice what it was.

Now, while we have a simple proportionality here between the voltage and the charge, the coefficient of proportionality will depend on the type of conductors, their shape, the distance and the type of insulator (aka dielectric) between them, and so on. Now, what’s being measured in farad is that coefficient of proportionality, which we’ll denote by CQ (the proportionality coefficient for the charge), CV (the proportionality coefficient for the voltage) or, because we should make a choice between the two, quite simply, as C. Indeed, we can either write (1) Q = CQV or, alternatively, V = CVQ, with CQ = 1/CV. As Feynman notes, “someone originally wrote the equation of proportionality as Q = CV”, so that’s what it will be: the capacitance (aka capacity) of a capacitor (aka condenser) is the ratio of the electric charge Q (on each conductor) to the potential difference V between the two conductors. So we know that’s a constant typical of the type of condenser we’re talking about: the capacitance is the constant of proportionality defining the linear relationship between the charge and the voltage, and so we can write:

C = Q/V

Now, the charge is measured in coulomb, and the voltage is measured in volt, so the unit in which we will measure C is coulomb per volt (C/V), which is also known as the farad (F):

1 farad = 1 coulomb/volt (1 F = 1 C/V)

[Note the confusing use of the same symbol C for both the unit of charge (coulomb) as well as for the proportionality coefficient! I am sorry about that, but that’s the convention!]

To be precise, I should add that the proportionality is generally there, but there are exceptions. More specifically, the way the charge builds up (and the way the field builds up, at the edges of the capacitor, for instance) may cause the capacitance to vary a little bit as it is being charged (or discharged). In that case, capacitance will be defined in terms of incremental changes: C = dQ/dV.
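To make the orders of magnitude concrete, here’s a quick numerical sketch in Python. The body capacitance (100 pF) and voltage (5000 V) below are rough, illustrative assumptions, and the stored-energy formula U = CV2/2 is the standard result for a capacitor:

```python
# Illustration of Q = C*V and the energy stored in a capacitance.
# The human-body values below (100 pF, 5000 V) are rough, illustrative
# assumptions, not measured data.

C_body = 100e-12   # capacitance in farad (100 pF, assumed)
V_static = 5000.0  # potential difference in volt (assumed)

Q = C_body * V_static           # charge in coulomb: Q = C*V
U = 0.5 * C_body * V_static**2  # stored energy in joule: U = C*V^2/2

print(Q)  # 5e-07 C: a tiny charge
print(U)  # 0.00125 J, i.e. ~1.25 mJ: a small shock indeed
```

So thousands of volts, but only about a millijoule of energy, which is consistent with the remark on ‘static’ above.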

Let me conclude this section by giving you two formulas, which are easily proved, but I will just give you the result:

1. The capacity of a parallel-plate condenser is C = ε0A/d. In this formula, we have, once again, that ubiquitous electric constant ε0 (think of it as just another coefficient of proportionality), and then A, i.e. the area of the plates, and d, i.e. the separation between the two plates.
2. The capacity of a charged sphere of radius r (so we’re talking the capacity of a single conductor here) is C = 4πε0r. This may remind you of the formula for the surface of a sphere (A = 4πr2), but note we’re not squaring the radius. It’s just a linear relationship with r.
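A minimal sketch of these two formulas in Python; the plate dimensions and sphere radius are arbitrary illustrative values:

```python
from math import pi

eps0 = 8.8542e-12  # electric constant, in C/(V*m)

def parallel_plate_capacitance(A, d):
    """C = eps0*A/d for plate area A (m^2) and separation d (m)."""
    return eps0 * A / d

def sphere_capacitance(r):
    """C = 4*pi*eps0*r for a charged sphere of radius r (m)."""
    return 4 * pi * eps0 * r

# Illustrative numbers (assumed): two 10 cm x 10 cm plates, 1 mm apart,
# and a sphere with a 10 cm radius.
print(parallel_plate_capacitance(0.01, 0.001))  # ~8.85e-11 F, i.e. ~89 pF
print(sphere_capacitance(0.1))                  # ~1.11e-11 F, i.e. ~11 pF
```

Note how small these numbers are: the farad is a huge unit, which is why practical capacitances are quoted in pico- or microfarad.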

I am not giving you these two formulas to show off or fill the pages, but because they’re so ubiquitous and you’ll need them. In fact, I’ll need the second formula in this post when talking about the other ‘loose end’ that I want to discuss.

Other electrical units

From your high school physics classes, you know the ampere and the watt, of course:

1. The ampere is the unit of current, so it measures the quantity of charge moving or circulating per second. Hence, one ampere is one coulomb per second: 1 A = 1 C/s.
2. The watt measures power. Power is the rate of energy conversion or transfer with respect to time. One watt is one joule per second: 1 W = 1 J/s = 1 N·m/s. Also note that we can write power as the product of current and voltage: 1 W = (1 A)·(1 V) = (1 C/s)·(1 J/C) = 1 J/s.
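A quick numerical illustration of the P = I·V relation; the appliance values below are assumed, purely for illustration:

```python
# Power as the product of current and voltage: P = I*V.
# Illustrative household example (assumed values): a 230 V appliance
# drawing 10 A.
V = 230.0  # volt = joule/coulomb
I = 10.0   # ampere = coulomb/second
P = I * V  # watt = joule/second, since (C/s)*(J/C) = J/s
print(P)   # 2300.0 W

# Energy converted in one hour, in joule: power times time.
E = P * 3600.0
print(E)   # 8280000.0 J (i.e. 2.3 kWh)
```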

Now, because electromagnetism is such a well-developed theory and, more importantly, because it has so many engineering and household applications, there are many other units out there, such as:

• The ohm (Ω): that’s the unit of electrical resistance. Let me quickly define it: the ohm is defined as the resistance between two points of a conductor when a (constant) potential difference (V) of one volt, applied to these points, produces a current (I) of one ampere. So resistance (R) is another proportionality coefficient: R = V/I, and 1 ohm (Ω) = 1 volt/ampere (V/A). [Again, note the (potential) confusion caused by the use of the same symbol (V) for voltage (i.e. the difference in potential) as well as its unit (volt).] Now, note that it’s often useful to write the relationship as V = R·I, so that gives the potential difference as the product of the resistance and the current.
• The weber (Wb) and the tesla (T): that’s the unit of magnetic flux (i.e. the strength of the magnetic field) and magnetic flux density (i.e. one tesla = one weber per square meter) respectively. So these have to do with the field vector B, rather than E, and so we won’t talk about them here.
• The henry (H): that’s the unit of electromagnetic inductance. It’s also linked to the magnetic effect. Indeed, from Maxwell’s equations, we know that a changing electric current will cause the magnetic field to change. Now, a changing magnetic field causes circulation of E. Hence, we can make the unit charge go around in some loop (we’re talking circulation of E indeed, not flux). The related energy, or the work that’s done by a unit of charge as it travels (once) around that loop, is (quite confusingly!) referred to as electromotive force (emf). [The term is quite confusing because we’re not talking force but energy, i.e. work, and, as you know by now, energy is force times distance, so energy and force are related but not the same.] To ensure you know what we’re talking about, let me note that emf is measured in volts, so that’s in joule per coulomb: 1 V = 1 J/C. Back to the henry now. If the rate of change of current in a circuit (e.g. the armature winding of an electric motor) is one ampere per second, and the resulting electromotive force (remember: emf is energy per coulomb) is one volt, then the inductance of the circuit is one henry. Hence, 1 H = 1 V/(1 A/s) = 1 V·s/A.
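The definition of the henry can also be sketched numerically. The relation L = emf/(dI/dt) below just restates the definition given above, and the 0.5 H and 4 A/s figures are assumed for illustration:

```python
def inductance(emf, dI_dt):
    """L = emf / (dI/dt): volt per (ampere/second) gives henry."""
    return emf / dI_dt

# The defining case from the text: a current change of 1 A/s producing
# an emf of 1 V gives an inductance of one henry.
print(inductance(1.0, 1.0))   # 1.0 H

# Conversely, a 0.5 H winding with the current changing at 4 A/s
# (assumed numbers) produces an emf of magnitude L*(dI/dt) = 2 V.
L, dI_dt = 0.5, 4.0
print(L * dI_dt)              # 2.0 V
```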

The concept of impedance

You’ve probably heard about the so-called impedance of a circuit. That’s a complex concept, literally, because it’s a complex-valued ratio. I should refer you to the Web for more details, but let me try to summarize it because, while it’s complex, that doesn’t mean it’s complicated. 🙂 In fact, I think it’s rather easy to grasp after all you’ve gone through already. 🙂 So let’s give it a try.

When we have a simple direct current (DC), then we have a very straightforward definition of resistance (R), as mentioned above: it’s a simple ratio between the voltage (as measured in volt) and the current (as measured in ampere). Now, with alternating current (AC) circuits, it becomes more complicated, and so then it’s the concept of impedance that kicks in. Just like resistance, impedance also sort of measures the ‘opposition’ that a circuit presents to a current when a voltage is applied, but we have a complex ratio here, literally: it’s a ratio with a magnitude and a direction, or a phase as it’s usually referred to. Hence, one will often write the impedance (denoted by Z) using Euler’s formula:

Z = |Z|eiθ

Now, if you don’t know anything about complex numbers, you should just skip all of what follows and go straight to the next section. However, if you do know what a complex number is (it’s an ‘arrow’, basically, and if θ is a variable, then it’s a rotating arrow, or a ‘stopwatch hand’, as Feynman calls it in his more popular Lectures on QED), then you may want to carry on reading.

The illustration below (credit goes to Wikipedia, once again) is, probably, the most generic view of an AC circuit that one can jot down. If we apply an alternating current, both the current as well as the voltage will go up and down. However, the current signal will lag the voltage signal, and the phase factor θ tells us by how much. Hence, using complex-number notation, we write:

V = I∗Z = I∗|Z|eiθ

Now, while that resembles the V = R·I formula I mentioned when discussing resistance, you should note the bold-face type for V and I, and the ∗ symbol I am using here for multiplication. First the ∗ symbol: that’s a convention Feynman adopts in the above-mentioned popular account of quantum mechanics. I like it, because it makes it very clear we’re not talking a vector cross product A×B here, but a product of two complex numbers. Now, that’s also why I write V and I in bold-face: they have a phase too and, hence, we can write them as:

• V = |V|ei(ωt + θV)
• I = |I|ei(ωt + θI)

This works out as follows:

V = I∗Z = |I|ei(ωt + θI)∗|Z|eiθ = |I||Z|ei(ωt + θI + θ) = |V|ei(ωt + θV)

Indeed, because the equation must hold for all t, we can equate the magnitudes and phases and, hence, we get: |V| = |I||Z| and θV = θI + θ. But voltage and current are something real, aren’t they? Not some complex numbers? You’re right. The complex notation is used mainly to simplify the calculus, but it’s only the real part of those complex-valued functions that counts. [In any case, because we limit ourselves to complex exponentials here, the imaginary part (which is the sine, as opposed to the real part, which is the cosine) is the same as the real part, but with a lag of its own (π/2 or 90 degrees, to be precise). Indeed, when writing Euler’s formula out (eiθ = cos(θ) + i·sin(θ)), you should always remember that the sine and cosine function are basically the same function: they differ only in the phase, as is evident from the trigonometric identity sin(θ+π/2) = cos(θ).]
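The magnitude-and-phase bookkeeping above can be checked with Python’s built-in complex numbers; the current amplitude, phases and impedance below are arbitrary illustrative values:

```python
import cmath

# A sketch of the V = I*Z bookkeeping with complex numbers.
# Assumed illustrative values: |I| = 2 A, theta_I = 0.3 rad, and an
# impedance of magnitude 50 ohm with a phase of 0.5 rad.
omega, t = 100.0, 0.01
I = 2.0 * cmath.exp(1j * (omega * t + 0.3))
Z = 50.0 * cmath.exp(1j * 0.5)

V = I * Z  # multiplying complex numbers multiplies magnitudes and adds phases

print(abs(V))                           # ~100.0 = |I|*|Z|
print(cmath.phase(V) - cmath.phase(I))  # ~0.5 = the impedance phase theta
# Only the real part is physical:
print(V.real)                           # the actual (instantaneous) voltage
```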

Now, that should be more than enough in terms of an introduction to the units used in electromagnetic theory. Hence, let’s move on.

The electric constant ε0

Let’s now look at that energy density formula once again. When looking at that u = ε0E2/2 formula, you may think that its unit should be the square of the unit in which we measure field strength. How do we measure field strength? It’s defined as the force on a unit charge (E = F/q), so it should be newton per coulomb (N/C). Because the coulomb can also be expressed in newton·meter/volt (1 V = 1 J/C = 1 N·m/C and, hence, 1 C = 1 N·m/V), we can express field strength not only in newton/coulomb but also in volt per meter: 1 N/C = 1 N·V/(N·m) = 1 V/m. So how do we get from N2/C2 and/or V2/m2 to J/m3?

Well… Let me first note there’s no issue in terms of units with that ρΦ formula in the first integral for U: [ρ]·[Φ] = (C/m3)·V = ((N·m/V)/m3)·V = (N·m)/m3 = J/m3. No problem whatsoever. It’s only that second expression for U, with the u = ε0E2/2 in the integrand, that triggers the question. Here, we just need to accept that we need that ε0 factor to make the units come out alright. Indeed, just like other physical constants (such as c, G, or h, for example), it has a dimension: its unit is either C2/(N·m2) or, what amounts to the same, C/(V·m). So the units come out alright indeed if, and only if, we multiply the N2/C2 and/or V2/m2 units with the dimension of ε0:

1. (N2/C2)·(C2/(N·m2)) = N/m2 = (N·m)·(1/m3) = J/m3
2. (V2/m2)·(C/(V·m)) = V·C/m3 = (V·(N·m/V))/m3 = (N·m)/m3 = J/m3

Done!
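As a quick numerical illustration of that J/m3 result (the field strength of 100 V/m is an assumed value):

```python
eps0 = 8.8542e-12    # electric constant, in C/(V*m)

E = 100.0            # field strength in V/m (assumed illustrative value)
u = eps0 * E**2 / 2  # energy density in J/m^3:
                     # (C/(V*m)) * (V/m)^2 = (C*V)/m^3 = J/m^3
print(u)             # ~4.43e-8 J/m^3: a field of everyday strength carries
                     # very little energy per unit volume
```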

So that’s the units. But the electric constant also has a numerical value:

ε0 = 8.854187817…×10−12 C/(V·m) ≈ 8.8542×10−12 C/(V·m)

This numerical value of ε0 is as important as its unit to ensure both expressions for U yield the same result. Indeed, as you may or may not remember from the second of my two posts on vector calculus, if we have a curl-free field C (that means ∇×C = 0 everywhere, which is the case when talking electrostatics only, as we are doing here), then we can always find some scalar field ψ such that C = ∇ψ. But here we can write ε0E = −ε0∇Φ, and so it’s not just the minus sign that distinguishes this expression from the C = ∇ψ expression, but also the ε0 factor in front.

It’s just like the vector equation for heat flow: h = −κ∇T. Indeed, we also have a constant of proportionality here, which is referred to as the thermal conductivity. Likewise, the electric constant ε0 is also referred to as the permittivity of the vacuum (or of free space), for similar reasons obviously!

Natural units

You may wonder whether we can’t find some better units, so we don’t need the rather horrendous 8.8542×10−12 C/(V·m) factor (I am rounding to four digits after the decimal point). The answer is: yes, it’s possible. In fact, there are several systems in which the electric constant (and the magnetic constant, which we’ll introduce later) reduce to 1. The best-known are the so-called Gaussian and Lorentz-Heaviside units respectively.

Gauss defined the unit of charge in what is now referred to as the statcoulomb (statC), which is also referred to as the franklin (Fr) and/or the electrostatic unit of charge (esu), but I’ll refer you to the Wikipedia article on it in case you’d want to find out more about it. You should just note the definition of this unit is problematic in other ways. Indeed, it’s not so easy to try to define ‘natural units’ in physics, because there are quite a few ‘fundamental’ relations and/or laws in physics and, hence, equating this or that constant to one usually has implications for other constants. In addition, one should note that many choices that made sense as ‘natural’ units in the 19th century seem to be arbitrary now. For example:

1. Why would we select the charge of the electron or the proton as the unit charge (+1 or −1) if we now assume that protons (and neutrons) consist of quarks, which have charges of +2/3 or −1/3?
2. What unit would we choose as the unit for mass, knowing that, despite all of the simplification that took place as a result of the generalized acceptance of the quark model, we’re still stuck with quite a few elementary particles whose mass would be a ‘candidate’ for the unit mass? Do we choose the electron, the u quark, or the d quark?

Therefore, the approach to ‘natural units’ has not been to redefine mass or charge or temperature, but the physical constants themselves. Obvious candidates are, of course, c and ħ, i.e. the speed of light and Planck’s constant. [You may wonder why physicists would select ħ, rather than h, as a ‘natural’ unit, but I’ll let you think about that. The answer is not so difficult.] That can be done without too much difficulty indeed, and so one can equate some more physical constants with one. The next candidate is the so-called Boltzmann constant (kB). While this constant is not so well known, it does pop up in a great many equations, including those that led Planck to propose his quantum of action, i.e. h (see my post on Planck’s constant). When we do that, i.e. when we equate c, ħ and kB with one (c = ħ = kB = 1), we still have a great many choices, so we need to impose further constraints. The next is to equate the gravitational constant with one, so then we have c = ħ = kB = G = 1.

Now, it turns out that the ‘solution’ of this ‘set’ of four equations (c = ħ = kB = G = 1) does, effectively, lead to ‘new’ values for most of our SI base units, most notably length, time, mass and temperature. These ‘new’ units are referred to as Planck units. You can look up their values yourself, and I’ll let you appreciate the ‘naturalness’ of the new units yourself. They are rather weird. The Planck length and time are usually referred to as the smallest possible measurable units of length and time and, hence, they are related to the so-called limits of quantum theory. Likewise, the Planck temperature is a related limit in quantum theory: it’s the largest possible measurable unit of temperature. To be frank, it’s hard to imagine what the scale of the Planck length, time and temperature really means. In contrast, the scale of the Planck mass is something we actually can imagine (it is said to correspond to the mass of an eyebrow hair, or a flea egg) but, again, its physical significance is not so obvious: it is Nature’s maximum allowed mass for point-like particles, or the mass capable of holding a single elementary charge. That triggers the question: do point-like charges really exist? I’ll come back to that question. But first I’ll conclude this little digression on units by introducing the so-called fine-structure constant, of which you’ve surely heard before.
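If you’d rather compute than look up, here is a sketch that recovers the Planck units from the SI values of c, ħ, G and kB. The formulas below are the standard ones obtained by solving c = ħ = kB = G = 1; the constants are rounded:

```python
from math import sqrt

# Planck units from c = hbar = G = kB = 1, computed back in SI units.
# Rounded values of the constants:
c    = 2.998e8      # speed of light, m/s
hbar = 1.0546e-34   # reduced Planck constant, J*s
G    = 6.674e-11    # gravitational constant, N*m^2/kg^2
kB   = 1.381e-23    # Boltzmann constant, J/K

l_P = sqrt(hbar * G / c**3)       # Planck length, ~1.6e-35 m
t_P = sqrt(hbar * G / c**5)       # Planck time,   ~5.4e-44 s
m_P = sqrt(hbar * c / G)          # Planck mass,   ~2.2e-8 kg
T_P = sqrt(hbar * c**5 / G) / kB  # Planck temperature, ~1.4e32 K

print(l_P, t_P, m_P, T_P)
```

The printed values should make the ‘weirdness’ obvious: absurdly small lengths and times, an absurdly large temperature, and a mass that is, by contrast, almost tangible.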

The fine-structure constant

I wrote that the ‘set’ of equations c = ħ = kB = G = 1 gave us Planck units for most of our SI base units. It turns out that these four equations do not lead to a ‘natural’ unit for electric charge. We need to equate a fifth constant with one to get that. That fifth constant is Coulomb’s constant (often denoted as ke) and, yes, it’s the constant that appears in Coulomb’s Law indeed, as well as in some other pretty fundamental equations in electromagnetics, such as the field caused by a point charge q: E = q/4πε0r2. Hence, ke = 1/4πε0. So if we equate ke with one, then ε0 will, obviously, be equal to 1/4π.

To make a long story short, adding this fifth equation to our set of four also gives us a Planck charge, and I’ll give you its value: it’s about 1.8755×10−18 C. As I mentioned that the elementary charge is 1 e ≈ 1.6022×10−19 C, it’s easy to see that the Planck charge corresponds to some 11.7 times the charge of the proton. In fact, let’s be somewhat more precise and round, once again, to four digits after the decimal point: the qP/e ratio is about 11.7062. Conversely, we can also say that the elementary charge as expressed in Planck units is about 1/11.7062 ≈ 0.08542455. In fact, we’ll use that ratio in a moment in some other calculation, so please jot it down.
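Here’s a sketch of that calculation, using the standard expression qP = √(4πε0ħc) for the Planck charge (which is what the ke = 1 condition gives you):

```python
from math import sqrt, pi

# Planck charge q_P = sqrt(4*pi*eps0*hbar*c), and its ratio to the
# elementary charge. Rounded values of the constants:
eps0 = 8.8542e-12   # C/(V*m)
hbar = 1.0546e-34   # J*s
c    = 2.998e8      # m/s
e    = 1.6022e-19   # C

q_P = sqrt(4 * pi * eps0 * hbar * c)
print(q_P)       # ~1.8755e-18 C
print(q_P / e)   # ~11.706
print(e / q_P)   # ~0.08542: the elementary charge in Planck units
```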

0.08542455? That’s a bit of a weird number, isn’t it? You’re right. And trying to write it in terms of the charge of a u or d quark doesn’t make it any better. Also, note that the first four significant digits (8542) correspond to the first four digits after the decimal point of our ε0 constant. So what’s the physical significance here? Some other limit of quantum theory?

Frankly, I did not find anything on that, but the obvious thing to do is to relate it to what is referred to as the fine-structure constant, which is denoted by α. This physical constant is dimensionless, and can be defined in various ways, but all of them are some kind of ratio of a bunch of these physical constants we’ve been talking about:

α = e2/4πε0ħc = μ0e2c/4πħ = kee2/ħc = μ0c/2RK = remec/ħ

The only constants you have not seen before are μ0, RK and, perhaps, re as well as me. However, these can be defined as a function of the constants that you did see before:

1. The μ0 constant is the so-called magnetic constant. It’s something similar to ε0, and it’s referred to as the magnetic permeability of the vacuum. So it’s just like the (electric) permittivity of the vacuum (i.e. the electric constant ε0), and the only reason why you haven’t heard of this before is because we haven’t discussed magnetic fields so far. In any case, you know that the electric and magnetic force are part and parcel of the same phenomenon (i.e. the electromagnetic interaction between charged particles) and, hence, they are closely related. To be precise, μ0 = 1/ε0c2. That shows the first and second expression for α are, effectively, fully equivalent.
2. Now, from the definition of keĀ = 1/4ĻĪµ0, it’s easy to see how those two expressions are, in turn, equivalent with the third expression for Ī±.
3. The RK constant is the so-called von Klitzing constant, but don’t worry about it: it’s, quite simply, equal to RK = h/e2. Hence, substituting (and don’t forget that h = 2πħ) will demonstrate the equivalence of the fourth expression for α.
4. Finally, the re factor is the classical electron radius, which is usually written as a function of me, i.e. the electron mass: re = e2/4πε0mec2. This very same equation implies that reme = e2/4πε0c2. So… Yes. It’s all the same really.

Let’s calculate its (rounded) value in the old units first, using the third expression:

• The e2 constant is (roughly) equal to (1.6022×10−19 C)2 = 2.5670×10−38 C2. Coulomb’s constant ke = 1/4πε0 is about 8.9876×109 N·m2/C2. Hence, the numerator e2ke ≈ 23.0715×10−29 N·m2.
• The (rounded) denominator is ħc = (1.05457×10−34 N·m·s)×(2.998×108 m/s) = 3.162×10−26 N·m2.
• Hence, we get α = kee2/ħc ≈ 7.297×10−3 = 0.007297.

Note that this number is, effectively, dimensionless. Now, the interesting thing is that if we calculate α using Planck units, we get an e2 factor that is (roughly) equal to (0.08542455)2 ≈ 0.007297! Now, because all of the other constants are equal to 1 in Planck’s system of units, that’s equal to α itself. So… Yes! The two values for α are one and the same in the two systems of units and, of course, as you might have guessed, the fine-structure constant is effectively dimensionless because it does not depend on our units of measurement. So what does it correspond to?
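We can check this numerically; the sketch below computes α from the SI constants (the third expression) and, separately, as the square of the elementary charge expressed in Planck units:

```python
from math import pi, sqrt

# The fine-structure constant computed two ways: once from the SI
# constants, and once as the square of the elementary charge in
# Planck units. Rounded values of the constants:
eps0 = 8.8542e-12   # C/(V*m)
hbar = 1.0546e-34   # J*s
c    = 2.998e8      # m/s
e    = 1.6022e-19   # C

k_e = 1 / (4 * pi * eps0)             # Coulomb's constant, ~8.99e9 N*m^2/C^2
alpha_si = k_e * e**2 / (hbar * c)    # ~0.007297

q_P = sqrt(4 * pi * eps0 * hbar * c)  # Planck charge
alpha_planck = (e / q_P)**2           # ~0.08542^2 ~= 0.007297

print(alpha_si, alpha_planck)         # the same dimensionless number
print(1 / alpha_si)                   # ~137.0
```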

Now that would take me a very long time to explain, but let me try to summarize what it’s all about. In my post on quantum electrodynamics (QED), i.e. the theory of light and matter basically and, most importantly, how they interact, I wrote about the three basic events in that theory, and how they are associated with a probability amplitude, so that’s a complex number, or an ‘arrow’, as Feynman puts it: something with (a) a magnitude and (b) a direction. We had to take the absolute square of these amplitudes in order to calculate the probability (i.e. some real number between 0 and 1) of the event actually happening. These three basic events or actions were:

1. A photon travels from point A to B. To keep things simple and stupid, Feynman denoted this amplitude by P(A to B), and please note that the P stands for photon, not for probability. I should also note that we have an easy formula for P(A to B): it depends on the so-called space-time interval between the two points A and B, i.e. I = Δr2 − Δt2 = (x2−x1)2+(y2−y1)2+(z2−z1)2 − (t2−t1)2. Hence, the space-time interval takes both the distance in space as well as the ‘distance’ in time into account.
2. An electron travels from point A to B: this was denoted by E(A to B) because… Well… You guessed it: the E of electron. The formula for E(A to B) was much more complicated, but the two key elements in the formula were some complex number j (see below), and some other (real) number n.
3. Finally, an electron could emit or absorb a photon, and the amplitude associated with this event was denoted by j, for junction.
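The space-time interval in the first item can be made concrete; here is a sketch of the formula, in natural units where c = 1 (the event coordinates are arbitrary illustrative values):

```python
# The space-time interval from the P(A to B) discussion, in natural
# units where c = 1 (so time is measured in the same units as distance).
def interval(A, B):
    """I = dx^2 + dy^2 + dz^2 - dt^2 for events A, B = (x, y, z, t)."""
    dx, dy, dz, dt = (b - a for a, b in zip(A, B))
    return dx**2 + dy**2 + dz**2 - dt**2

# Illustrative events (assumed coordinates):
print(interval((0, 0, 0, 0), (3, 0, 0, 5)))  # 9 - 25 = -16: time-like
print(interval((0, 0, 0, 0), (5, 0, 0, 3)))  # 25 - 9 = 16: space-like
print(interval((0, 0, 0, 0), (4, 0, 0, 4)))  # 0: light-like (a photon path)
```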

Now, that junction number j is about −0.1. To be somewhat more precise, I should say it’s about −0.08542455.

ā0.08542455? That’sĀ a bit of a weird number, isn’t it? Hey ! Didn’t we see this number somewhere else?Ā We did, but before you scroll up, let’s first interpret this number.Ā ItĀ looks like an ordinary (real) number, but itās an amplitude alright, so you should interpret it as an arrow. Hence, it can be ‘combined’ (i.e. ‘added’ or ‘multiplied’) with other arrows. More in particular, when you multiply it with another arrow, it amounts to a shrink to a bit less than one-tenth (because its magnitude is about 0.085 = 8.5%), and half a turn (the minus sign amounts to a rotation of 180Ā°).Ā Now, in that post of mine, I wrote that I wouldn’t entertain you on the difficulties of calculating this number but… Well… We did see this number before indeed. Just scroll up to check it. We’ve got a veryĀ remarkable result here:

j ≈ −0.08542455 = −√0.007297 = −√α = −e expressed in Planck units

So we find that our junction number j or, as it’s better known, our coupling constant in quantum electrodynamics (aka the gauge coupling parameter g) is equal to the (negative) square root of that fine-structure constant which, in turn, is equal to the charge of the electron expressed in the Planck unit for electric charge. Now that is a very deep and fundamental result which no one seems to be able to ‘explain’, in an ‘intuitive’ way at least.

I should immediately add that, while we can't explain it, intuitively, it does make sense. A lot of sense actually. Photons carry the electromagnetic force, and the electromagnetic field is caused by stationary and moving electric charges, so one would expect to find some relation between that junction number j, describing the amplitude to emit or absorb a photon, and the electric charge itself, but… An equality? Really?

Well… Yes. That’s what it is, and I look forward to trying to understand all of this better. For now, however, I should proceed with what I set out to do, and that is to tie up a few loose ends. This was one, and so let’s move to the next, which is about the assumption of point charges.

Note: More popular accounts of quantum theory say α itself is 'the' coupling constant, rather than its (negative) square root −√α = j = −e (expressed in Planck units). That's correct: g or j is, technically speaking, the (gauge) coupling parameter, not the coupling constant. But that's a little technical detail which shouldn't bother you. The result is still what it is: very remarkable! I should also note that it's often the value of the reciprocal (1/α) that is specified, i.e. 1/0.007297 ≈ 137.036. But so now you know what this number actually stands for. 🙂
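You can verify these numbers in a couple of lines of Python (I plug in the CODATA value of α; the post rounds it to 0.007297):

```python
import math

alpha = 0.0072973525693     # fine-structure constant (CODATA value)
j = -math.sqrt(alpha)       # the junction amplitude / gauge coupling parameter

print(j)          # ≈ −0.0854245
print(1 / alpha)  # ≈ 137.036
```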

Do point charges exist?

Feynman's Lectures on electrostatics are interesting, among other things, because, besides highlighting the precision and successes of the theory, he also doesn't hesitate to point out the contradictions. He notes, for example, that "the idea of locating energy in the field is inconsistent with the assumption of the existence of point charges."

Huh?

Yes. Let's explore the point. We do assume point charges in classical physics indeed. The electric field caused by a point charge is, quite simply:

E = q/4πε0r2

Hence, the energy density u is ε0E2/2 = q2/32π2ε0r4. Now, we have that volume integral U = (ε0/2)∫E·EdV = ∫(ε0E2/2)dV. As Feynman notes, nothing prevents us from taking a spherical shell for the volume element dV, instead of an infinitesimal cube. This spherical shell would have the charge q at its center, an inner radius equal to r, an infinitesimal thickness dr, and, finally, a surface area 4πr2 (that's just the general formula for the surface area of a sphere, which I also noted above). Hence, its (infinitesimally small) volume is 4πr2dr, and our integral becomes:

U = ∫(q2/32π2ε0r4)·4πr2·dr = (q2/8πε0)·∫dr/r2

To calculate this integral, we need to take the limit of −q2/8πε0r for (a) r tending to zero (r→0) and for (b) r tending to infinity (r→∞). The limit for r = ∞ is zero. That's OK and consistent with the choice of our reference point for calculating the potential of a field. However, the limit for r = 0 is infinity! Hence, that U = (ε0/2)∫E·EdV integral basically says there's an infinite amount of energy in the field of a point charge! How is that possible? It cannot be true, obviously.
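We can make that divergence tangible with a quick numerical check. The sketch below (my own variable names; I take q to be one elementary charge) evaluates the field energy outside an inner cut-off radius a, i.e. (q2/8πε0)·(1/a), and lets a shrink:

```python
import math

eps0 = 8.8541878128e-12   # electric constant, in F/m
q = 1.602176634e-19       # elementary charge, in C

def field_energy(a):
    """Field energy (in joules) outside radius a, for a point charge q:
    the integral of ε0E²/2 from r = a to infinity, i.e. (q²/8πε0)·(1/a)."""
    return q**2 / (8 * math.pi * eps0 * a)

# The energy grows without bound as the inner radius shrinks to zero:
for a in (1e-10, 1e-15, 1e-20):
    print(a, field_energy(a))
```

Every factor of ten we shave off the inner radius multiplies the field energy by ten, so the limit a → 0 is indeed infinite.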

So… Where did we go wrong?

Your first reaction may well be that this very particular approach (i.e. replacing our infinitesimal cubes by infinitesimal shells) to calculating our integral is fishy and, hence, not allowed. Maybe you're right. Maybe not. It's interesting to note that we run into similar problems when calculating the energy of a charged sphere. Indeed, we mentioned the formula for the capacity of a charged sphere: C = 4πε0r. Now, there's a similarly easy formula for the energy of a charged sphere. Let's look at how we charge a condenser:

• We know that the potential difference between two plates of a condenser represents the work we have to do, per unit charge, to transfer a charge (Q) from one plate to the other. Hence, we can write V = ΔU/ΔQ.
• We will, of course, want to do a differential analysis. Hence, we'll transfer charges incrementally, one infinitesimal little charge dQ at a time, and re-write V as V = dU/dQ or, what amounts to the same: dU = V·dQ.
• Now, we've defined the capacitance of a condenser as C = Q/V. [Again, don't be confused: C stands for capacity here, measured in coulomb per volt, not for the coulomb unit.] Hence, we can re-write dU as dU = Q·dQ/C.
• Now we have to integrate dU going from zero charge to the final charge Q. Just do a little bit of effort here and try it. You should get the following result: U = Q2/2C. [We could re-write this as U = (C2V2)/2C = C·V2/2, which is a form that may be more useful in some other context but not here.]
• Using that C = 4πε0r formula, we get our grand result. The energy of a charged sphere is:

U = Q2/8πε0r

From that formula, it's obvious that, if the radius of our sphere goes to zero, its energy should also go to infinity! So it seems we can't really pack a finite charge Q in one single point. Indeed, to do that, our formula says we need an infinite amount of energy. So what's going on here?
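The little integration in the bullet list above is also easy to check numerically. A sketch (all the numbers are my own illustrative choices): transfer the charge in n small increments dQ, add up Q·dQ/C, and compare with the closed form Q2/2C = Q2/8πε0r:

```python
import math

eps0 = 8.8541878128e-12
r = 0.01                      # a sphere of 1 cm radius (illustrative)
C = 4 * math.pi * eps0 * r    # its capacity
Q = 1e-9                      # final charge: 1 nC (illustrative)

# Sum dU = Q·dQ/C over n small charge increments:
n = 100_000
dQ = Q / n
U_num = sum((k * dQ) * dQ / C for k in range(n))

# Closed form: U = Q²/2C = Q²/8πε0r
U_exact = Q**2 / (8 * math.pi * eps0 * r)

print(U_num, U_exact)  # agree to about 1 part in n
```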

Nothing much. You should, first of all, remember how we got that integral: see my previous post for the full derivation indeed. It's not that difficult. We first assumed we had pairs of charges qi and qj for which we calculated the total electrostatic energy U as the sum of the energies of all possible pairs of charges:

U = (1/2)·ΣiΣj qiqj/4πε0rij (summing over all i and j, with i ≠ j)

And, then, we looked at a continuous distribution of charge. However, in essence, we still did the same: we counted the energy of interaction between infinitesimal charges situated at two different points (referred to as point 1 and 2 respectively), with a 1/2 factor in front so as to ensure we didn't double-count (there's no way to write an integral that keeps track of the pairs so that each pair is counted only once):

U = (1/2)∫∫ρ(1)ρ(2)dV1dV2/4πε0r12

Now, we reduced this double integral by a clever substitution to something that looked a bit better:

U = (1/2)∫ρφdV

Finally, some more mathematical tricks gave us that U = (ε0/2)∫E·EdV integral.
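That 1/2 factor is worth a quick sanity check. In the sketch below (made-up charges and positions), half the double sum over all ordered pairs i ≠ j equals the plain sum over unique pairs:

```python
import itertools
import math
import random

random.seed(1)
eps0 = 8.8541878128e-12

# Five made-up charges: (charge in C, position in m)
charges = [(random.uniform(-1e-9, 1e-9),
            (random.random(), random.random(), random.random()))
           for _ in range(5)]

def pair_energy(a, b):
    """Interaction energy q_a·q_b/4πε0·r_ab of two point charges."""
    (qa, ra), (qb, rb) = a, b
    return qa * qb / (4 * math.pi * eps0 * math.dist(ra, rb))

# Sum over unique pairs (each pair counted once):
U_pairs = sum(pair_energy(a, b) for a, b in itertools.combinations(charges, 2))

# Half the double sum over all ordered pairs (each pair counted twice):
U_double = 0.5 * sum(pair_energy(charges[i], charges[j])
                     for i in range(5) for j in range(5) if i != j)

print(U_pairs, U_double)  # identical, up to rounding
```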

In essence, what's wrong in that integral above is that it actually includes the energy that's needed to assemble the finite point charge q itself from an infinite number of infinitesimal parts. Now that energy is infinitely large. We just can't do it: the energy required to construct a point charge is ∞.

Now that explains the physical significance of that Planck mass! We said Nature has some kind of maximum allowable mass for point-like particles, or the mass capable of holding a single elementary charge. What's going on is, as we try to pile more charge on top of the charge that's already there, we add energy. Now, energy has an equivalent mass. Indeed, the Planck charge (qP ≈ 1.8755×10⁻¹⁸ C), the Planck length (lP = 1.616×10⁻³⁵ m), the Planck energy (1.956×10⁹ J), and the Planck mass (2.1765×10⁻⁸ kg) are all related. Now things start making sense. Indeed, we said that the Planck mass is tiny but, still, it's something we can imagine, like a flea's egg or the mass of an eyebrow hair. The associated energy is E = mc2, so that's (2.1765×10⁻⁸ kg)·(2.998×10⁸ m/s)2 ≈ 19.56×10⁸ kg·m2/s2 = 1.956×10⁹ joule indeed.

Now, how much energy is that? Well… That's about 2 giga-joule, obviously, but so what's that in daily life? It's about the energy you would get when burning 40 liters of fuel. It's also likely to amount, more or less, to your home electricity consumption over a month. So it's sizable, and so we're packing all that energy into a Planck volume (lP3 ≈ 4×10⁻¹⁰⁵ m3). If we'd manage that, we'd be able to create tiny black holes, because that's what that little Planck volume would become if we'd pack so much energy in it. So… Well… Here I just have to refer you to more learned writers than I am. As Wikipedia notes dryly: "The physical significance of the Planck length is a topic of theoretical research. Since the Planck length is so many orders of magnitude smaller than any current instrument could possibly measure, there is no way of examining it directly."
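All the Planck units quoted above follow from ħ, c, G and ε0. A short sketch reproducing the numbers in the text (the constants are CODATA values; the formulas are the standard definitions):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J·s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m³/(kg·s²)
eps0 = 8.8541878128e-12  # electric constant, F/m

m_P = math.sqrt(hbar * c / G)                   # Planck mass   ≈ 2.176×10⁻⁸ kg
E_P = m_P * c**2                                # Planck energy ≈ 1.956×10⁹ J
l_P = math.sqrt(hbar * G / c**3)                # Planck length ≈ 1.616×10⁻³⁵ m
q_P = math.sqrt(4 * math.pi * eps0 * hbar * c)  # Planck charge ≈ 1.876×10⁻¹⁸ C

print(m_P, E_P, l_P, q_P)
print(l_P**3)  # the Planck volume, ≈ 4×10⁻¹⁰⁵ m³
```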

So… Well… That's it for now. The point to note is that we would not have any theoretical problems if we'd assume our 'point charge' is actually not a point charge but some small distribution of charge itself. You'll say: Great! Problem solved!

Well… For now, yes. But Feynman rightly notes that assuming that our elementary charges do take up some space results in other difficulties of explanation. As we know, these difficulties are solved in quantum mechanics, but so we're not supposed to know that when doing these classical analyses. 🙂

Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology.