Strings in classical and quantum physics

This post is not about string theory. The goal of this post is much more limited: it's to give you a better understanding of why the metaphor of the string is so appealing. Let's recapitulate the basics by seeing how the string is used in classical as well as in quantum physics.

In my posts on music and math, and on music and physics, I described how a simple string always vibrates in various modes at the same time: every tone is a mixture of an infinite number of elementary waves. These elementary waves, which are referred to as harmonics (or as (normal) modes, indeed), are perfectly sinusoidal, and their amplitudes determine their relative contribution to the composite waveform. So we can always write the waveform F(t) as the following sum:

F(t) = a1sin(ωt) + a2sin(2ωt) + a3sin(3ωt) + … + ansin(nωt) + …

[If this is your first reading of my post, and the formula scares you off, please try again. I am writing most of my posts with teenage kids in mind, and especially this one. So I will not use anything else than simple arithmetic in this post: no integrals, no complex numbers, no logarithms. Just a bit of geometry. That's all. So, yes, you should go through the trouble of trying to understand this formula. The only thing that you may have some trouble with is ω, i.e. angular frequency: it's the frequency expressed in radians per time unit, rather than oscillations per second, so ω = 2π·f = 2π/T, with f the frequency as you know it (i.e. oscillations per second) and T the period of the wave.]
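By the way, if you want to see what this sum actually does, it's easy to play with it yourself. The little Python sketch below builds a composite waveform out of its first five harmonics. The amplitudes and the 440 Hz fundamental are just made-up values for the sake of the illustration – they're not the properties of any actual string.

```python
import numpy as np

omega = 2 * np.pi * 440.0           # fundamental angular frequency (a 440 Hz tone)
a = [1.0, 0.5, 0.33, 0.25, 0.2]     # illustrative amplitudes a1, a2, a3, a4, a5

t = np.linspace(0, 3 / 440.0, 1000)  # three periods of the fundamental
# F(t) = a1·sin(ωt) + a2·sin(2ωt) + a3·sin(3ωt) + ...
F = sum(a_n * np.sin((n + 1) * omega * t) for n, a_n in enumerate(a))
print(F[:5])  # a few samples of the composite waveform
```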

I also noted that the wavelength of these component waves (λ) is determined by the length of the string (L), and by its length only: λ1 = 2L, λ2 = L, λ3 = (2/3)·L. So these wavelengths do not depend on the material of the string, or its tension. At any point in time (so keeping t constant, rather than x, as we did in the equation above), the component waves look like this:

[Illustration: harmonic partials on a string]

etcetera (1/8, 1/9,…,1/n,… 1/∞)

That the wavelengths of the harmonics of any actual string only depend on its length is an amazing result in light of the complexities behind it: a simple wound guitar string, for example, is not simple at all (just click the link here for a quick introduction to guitar string construction). Simple piano wire isn't simple either: it's made of high-carbon steel, i.e. a very complex metallic alloy. In fact, you should never think any material is simple: even the simplest molecular structures are very complicated things. Hence, it's quite amazing that all these systems are actually linear systems and that, despite the underlying complexity, those wavelength ratios form a simple harmonic series, i.e. a simple reciprocal function y = 1/x, as illustrated below.

[Illustration: the harmonic series and the integral test]

A simple harmonic series? Hmm… I can't resist noting that the harmonic series is, in fact, a mathematical beast. While its terms approach zero as x (or n) increases, the series itself is divergent. So it's not like 1 + 1/2 + 1/4 + 1/8 + … + 1/2^n + …, which adds up to 2. Divergent series don't add up to any specific number. Even Leonhard Euler – the most famous mathematician of all time, perhaps – struggled with this. In fact, as late as 1826, another famous mathematician, Niels Henrik Abel (in light of the fact that he died at age 26 (!), his legacy is truly amazing), exclaimed that a series like this was "an invention of the devil", and that it should not be used in any mathematical proof. But then God intervened through Abel's contemporary Augustin-Louis Cauchy 🙂 who finally cracked the nut by rigorously defining the mathematical concept of both convergent and divergent series, and by equally rigorously determining their possibilities and limits in mathematical proofs. In fact, while medieval mathematicians had already grasped the essentials of modern calculus and, hence, had already given some kind of solution to Zeno's paradox of motion, Cauchy's work is the full and final solution to it. But I am getting distracted, so let me get back to the main story.

More remarkable than the wavelength series itself is its implication for the respective energy levels of all these modes. The material of the string, its diameter, its tension, etc. will determine the speed with which the wave travels up and down the string. [Yes, that's what it does: you may think the string oscillates up and down, and it does, but the waveform itself travels along the string. In fact, as I explained in my previous post, we've got two waves traveling simultaneously: one going one way and the other going the other.] For a specific string, that speed (i.e. the wave velocity) is some constant, which we'll denote by c. Now, c is, obviously, the product of the wavelength (i.e. the distance that the wave travels during one oscillation) and its frequency (i.e. the number of oscillations per time unit), so c = λ·f. Hence, f = c/λ and, therefore, f1 = (1/2)·c/L, f2 = (2/2)·c/L, f3 = (3/2)·c/L, etcetera. More in general, we write fn = (n/2)·c/L. In short, the frequencies are equally spaced. To be precise, they are all (1/2)·c/L apart.
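A quick numerical check of that fn = (n/2)·c/L formula may help. The values below are assumptions for the sake of the example – a 0.65 m string with a wave velocity of 286 m/s, chosen so that the fundamental comes out at a nice 220 Hz – not measured data:

```python
c = 286.0   # assumed wave velocity on the string (m/s)
L = 0.65    # assumed string length (m)

for n in range(1, 6):
    f_n = (n / 2) * c / L   # f_n = (n/2)·c/L
    print(f"mode {n}: f = {f_n:.0f} Hz")
# Output: 220, 440, 660, 880, 1100 Hz - equally spaced, (1/2)·c/L = 220 Hz apart.
```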

Now, the energy of a wave is directly proportional to its frequency, always, in classical as well as in quantum mechanics. For example, for photons, we have the Planck-Einstein relation: E = h·f = ħ·ω. So that relation states that the energy is proportional to the (light) frequency of the photon, with h (i.e. the Planck constant) as the constant of proportionality. [Note that ħ is not some different constant. It's just the 'angular equivalent' of h, so we have to use ħ = h/2π when frequencies are expressed as angular frequencies, i.e. in radians per second rather than in hertz.] Because of that proportionality, the energy levels of our simple string are also equally spaced and, hence, inserting another proportionality constant, which I'll denote by a instead of h (because it's some other constant, obviously), we can write:

En = a·fn = (n/2)·a·c/L

Now, if we denote the fundamental frequency f1 = (1/2)·c/L, quite simply, by f (and, likewise, its angular frequency as ω), then we can re-write this as:

En = n·a·f = n·ā·ω (ā = a/2π)

This formula is exactly the same as the formula used in quantum mechanics when describing atoms as atomic oscillators, and when explaining why and how they radiate light (think of the blackbody radiation problem, for example), as illustrated below: En = n·ħ·ω = n·h·f. The only difference between the formulas is the proportionality constant: instead of a, we have Planck's constant here: h, or ħ when the frequency is expressed as an angular frequency.

[Illustration: quantum energy levels]

This grand result – that the energy levels associated with the various states or modes of a system are equally spaced – is the defining property of the harmonic oscillator, and it is what connects classical and quantum physics in a very deep and fundamental way. [I should note, in passing, that physicists usually reserve the term equipartition theorem for something else – the theorem on how thermal energy gets distributed over a system's degrees of freedom – so it's the equal spacing itself that you should remember here.]

In fact, because they're nothing but proportionality constants, the value of both a and h depends on our units. If we'd use so-called natural units, i.e. equating ħ to 1, the energy formula becomes En = n·ω and, hence, our unit of energy and our unit of frequency become one and the same. In fact, we can, of course, also re-define our time unit such that the fundamental frequency ω is one, i.e. one oscillation per (re-defined) time unit, so then we have the following remarkable formula:

En = n

Just think about it for a moment: what I am writing here is E0 = 0, E1 = 1, E2 = 2, E3 = 3, E4 = 4, etcetera. Isn’t that amazing? I am describing the structure of a system here – be it an atom emitting or absorbing photons, or a macro-thing like a guitar string – in terms of its basic components (i.e. its modes), and it’s as simple as counting: 0, 1, 2, 3, 4, etc.

You may think I am not describing anything real here, but I am. We cannot do whatever we want to do: some stuff is grounded in reality, and in reality only—not in the math. Indeed, the fundamental frequency of our guitar string – which we used as our energy unit – is a property of the string, so that's real: it's not just some mathematical construct: it depends on the string's length (which determines its wavelength), and it also depends on the propagation speed of the wave, which depends on other basic properties of the string, such as its material, its diameter, and its tension. Likewise, the fundamental frequency of our atomic oscillator is a property of the atomic oscillator or, to use a much grander term, a property of the Universe. That's why h is a fundamental physical constant. So it's not like π or e. [When reading physics as a freshman, it's always useful to clearly distinguish physical constants (like Avogadro's number, for example) from mathematical constants (like Euler's number).]

The theme that emerges here is what I've been saying a couple of times already: it's all about structure, and the structure is amazingly simple. It's really that equal-spacing result only: all you need to know is that the energy levels of the modes of a system – any system really: an atom, a molecular system, a string, or the Universe itself – are equally spaced, and that the spacing between the various energy levels depends on the fundamental frequency of the system. Moreover, if we use natural units, and also re-define our time unit so the fundamental frequency is equal to 1 (so the frequencies of the other modes are 2, 3, 4, etc.), then the energy levels are just 0, 1, 2, 3, 4, etc. So, yes, God kept things extremely simple. 🙂

In order not to cause too much confusion, I should add that you should read what I am writing very carefully: I am talking about the modes of a system. The system itself can have any energy level, of course, so there is no discreteness at the level of the system. I am not saying that we don't have a continuum there. We do. What I am saying is that its energy level can always be written as a (potentially infinite) sum of the energies of its components, i.e. its fundamental modes, and those energy levels are discrete. In quantum-mechanical systems, their spacing is h·f, so that's the product of Planck's constant and the fundamental frequency. For our guitar string, the spacing is a·f (or, using angular frequency, ā·ω: it's the same amount). But that's it really. That's the structure of the Universe. 🙂

Let me conclude by saying something more about a. What information does it capture? Well… All of the specificities of the string (like its material or its tension) determine the fundamental frequency f and, hence, the energy levels of the basic modes of our string. So a has nothing to do with the particularities of our string, of our system in general. However, we can, of course, pluck our string very softly or, conversely, give it a big jolt. So our a coefficient is not related to the string as such, but to the total energy of our string. In other words, a is related to those amplitudes  a1, a2, etc in our F(t) = a1sin(ωt) + a2sin(2ωt) + a3sin(3ωt) + … + ansin(nωt) + … wave equation.

How exactly? Well… Based on the fact that the total energy of our wave is equal to the sum of the energies of all of its components, I could give you some formula. However, that formula does use an integral. It’s an easy integral: energy is proportional to the square of the amplitude, and so we’re integrating the square of the wave function over the length of the string. But then I said I would not have any integral in this post, and so I’ll stick to that. In any case, even without the formula, you know enough now. For example, one of the things you should be able to reflect on is the relation between a and h. It’s got to do with structure, of course. 🙂 But I’ll let you think about that yourself.

[…] Let me help you. Think of the meaning of Planck's constant h. Let's suppose we'd have some elementary 'wavicle', like that elementary 'string' that string theorists are trying to define: the smallest 'thing' possible. It would have some energy, i.e. some frequency. Perhaps it's just one full oscillation. Just enough to define some wavelength and, hence, some frequency indeed. Then that thing would define the smallest time unit that makes sense: it would be the time corresponding to one oscillation. In turn, because of the E = h·f relation, it would define the smallest energy unit that makes sense. So, yes, h is the quantum (or fundamental unit) of energy. It's very small indeed (h = 6.626070040(81)×10^–34 J·s, so the first significant digit appears only after 33 zeroes behind the decimal point) but that's because we're living at the macro-scale and, hence, we're measuring stuff in huge units: the joule (J) for energy, and the second (s) for time. In natural units, h would be one. [To be precise, physicists prefer to equate ħ, rather than h, to one when talking natural units. That's because angular frequency is more 'natural' as well when discussing oscillations.]

What's the conclusion? Well… Our a will be some integer multiple of h. Some incredibly large multiple, of course, but a multiple nevertheless. 🙂

Post scriptum: I didn't say anything about strings in this post or, let me qualify, about those elementary 'strings' that string theorists try to define. Do they exist? Feynman was quite skeptical about it. He was happy with the so-called Standard Model of physics, and he would have been very happy to know that the existence of the Higgs field has been confirmed experimentally (that discovery is what prompted my blog!), because that confirms the Standard Model. The Standard Model distinguishes two types of wavicles: fermions and bosons. Fermions are matter particles, such as quarks and electrons. Bosons are force carriers, like photons and gluons. I don't know anything about string theory, but my gut instinct tells me there must be more than just one mathematical description of reality. It's the principle of duality: concepts, theorems or mathematical structures can be translated into other concepts, theorems or structures. But… Well… We're not talking equivalent descriptions here: string theory is a different theory, it seems. For a brief but totally incomprehensible overview (for novices at least), click on the following link, provided by the C.N. Yang Institute for Theoretical Physics. If anything, it shows I've got a lot more to study as I am inching forward on the difficult Road to Reality. 🙂

Euler’s formula revisited

This post intends to take some of the magic out of Euler's formula. In fact, I started doing that in my previous post but I think that, in this post, I've done a better job at organizing the chain of thought. [Just to make sure: with 'Euler's formula', I mean e^(ix) = cos(x) + i·sin(x). Euler produced a lot of formulas, indeed, but this one is, for math, what E = mc² is for physics. :-)]

The grand idea is to start with an initial linear approximation for the value of the complex exponential e^(is) near s = 0 (to be precise, we'll use the e^(iε) ≈ 1 + iε formula) and then show how the 'magic' of i – through the i² = –1 factor – gives us the sine and cosine functions. What we are going to do, basically, is to construct the sine and cosine functions algebraically.

Let us, as a starting point – just to get us focused – graph (i) the real exponential function e^x, i.e. the blue graph, and (ii) the real and imaginary part of the complex exponential function e^(ix) = cos(x) + i·sin(x), i.e. the red and green graphs—the cosine and sine function.

[Graph: e^x, cos(x) and sin(x)]

From these graphs, it's clear that e^x and e^(ix) are two very different beasts.

1. e^x is just a real-valued function of x, so it 'maps' the real number x to some other real number y = e^x. That y value 'rockets' away, thereby demonstrating the power of exponential growth. There's nothing really 'special' about e^x. Indeed, writing e^x instead of 10^x obviously looks better when you're doing a blog on math or physics but, frankly, there's no real reason to use that strange number e ≈ 2.718 when all you need is just a standard real exponential. In fact, if you're a high school student and you want to attract attention with some paper involving something that grows or shrinks, I'd recommend the use of π^x. 🙂

2. e^(ix) is something very different. It's a complex-valued function of x and it's not about exponential growth (though it obviously is about exponentiation, i.e. repeated multiplication): y = e^(ix) does not 'explode'. On the contrary: y is just a periodic 'thing' with two components: a sine and a cosine. [Note that we could also change the base, to 10, for example: then we'd write 10^(ix). We'd also get something periodic, but let's not get lost before we even start.]

Two different beasts, indeed. How can the addition of one tiny symbol – the little i in e^(ix) – make such a big difference?

The two beasts have one thing in common: the value of the function near x = 0 can be approximated by the same linear formula:

[Formula: the linear approximation f(x) ≈ f(0) + f′(0)·x, which gives e^x ≈ 1 + x]

In case you wonder where this comes from, it's basically the definition of the derivative of a function, as illustrated below.

[Illustration: the derivative as the slope of the tangent line at a point]

This is nothing special. It's a so-called first-order approximation of a function. The point to note is that we have a similar-looking formula for the complex-valued e^(ix) function. Indeed, its derivative is d(e^(ix))/dx = i·e^(ix) and, when we evaluate that derivative at x = 0, we get i·e^0 = i. So… Yes, the grand result is that we can effectively write:

e^(iε) ≈ 1 + iε for small ε

Of course, 1 + iε is also a different 'beast' than 1 + ε. Indeed, 1 + ε is just a continuation of our usual walk along the real axis, but 1 + iε points in a different direction (see below). This post will show you where it's headed.

[Illustration: 1 + ε versus 1 + iε in the complex plane]

Let's first work with the real exponential e^x again, and think about a value for ε. We could take any value, of course, like 0.1 or some fraction 1/n. We'll use a fraction—for reasons that will become clear in a moment. So the question now is: what value should we use for n in that 1/n fraction? Well… Because we are going to use this approximation as the initial value in a series of calculations—be patient: I'll explain in a moment—we'd like to have a sufficiently small fraction, so our subsequent calculations based on that initial value are not too far off. But what's sufficiently small? Is it 1/10, or 1/100,000, or 1/10^100? What gives us 'good enough' results? In fact, how do we define 'good enough'?

Good question! In order to try to define what's 'good enough', I'll turn the whole thing on its head. In the table below, I calculate backwards from e^1 = e by taking successive square roots of e. Huh? What? Patience, please! Just go along with me for a while. First, I calculate e^(1/2), so our fraction ε, which I'll just write as x, is equal to 1/2 here, and the approximation for e^(1/2) is 1 + 1/2 = 1.5. That's off. By how much? Well… The actual value of e^(1/2) is about 1.648721 (see the table below, which I copied from Excel, or use a calculator or spreadsheet yourself). Now, 1.648721 is 1.5 + 0.148721, so our approximation (1.5) is about 9% off (as compared to the actual value). Not all that much, but let's see how we can improve. Let's take the square root once again: (e^(1/2))^(1/2) = e^(1/4), so x = 1/4. And then I do that again, so I get e^(1/8), and so on and so on. All the way down to x = 1/1024 = 1/2^10, so that's ten iterations. Our approximation 1 + x (see the fifth/last column in the table below) is then equal to 1 + 1/1024 = 1 + 0.0009765625, which we rounded to 1.000977 in the table.

[Table: successive square roots of e, from e^(1/2) down to e^(1/1024), with the 1 + x approximation]

The actual value of e^(1/1024) is also about 1.000977, as you can see in the third column of the table. Not exactly, of course, but… Well… The accuracy of our approximation here is six digits behind the decimal point, so that's equivalent to one part in a million. That's not bad, but is it 'good enough'? Hmm… Let's think about it, but let's first calculate some other things. The fourth column in the table above calculates the slope of that AB line in the illustration above: its value converges to one, as we would expect, because that's the slope of the tangent line at x = 0. [So that's the value of the derivative of e^x at x = 0. Just check it: d(e^x)/dx = e^x, obviously, and e^0 = 1.] Note that our 1 + x approximation also converges to 1—as it should!
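If you don't want to copy the table into Excel yourself, a few lines of Python reproduce it. This is just a sketch of the same square-root iteration, nothing more:

```python
import math

value = math.e   # start from e^1 = e
x = 1.0
for step in range(10):
    value = math.sqrt(value)   # e^x -> e^(x/2)
    x = x / 2
    print(f"x = 1/{int(1 / x)}: e^x = {value:.6f}, 1 + x = {1 + x:.6f}")
# The last line gives x = 1/1024: e^x ≈ 1.000977 versus 1 + x = 1.000977.
```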

So… Well… Let's now just assume we're happy with that approximation that's accurate to one part in a million, so let's just continue to work with this fraction 1/1024 for x. Hence, we will write that e^(1/1024) ≈ 1 + 1/1024, and now we will use that value also for the complex exponential. Huh? What? Why? Just hang in here for a while. Be patient. 🙂 So we'll just add the i again and, using that e^(iε) ≈ 1 + iε expression, we write:

e^(i/1024) ≈ 1 + i/1024

It’s quite obvious that 1 + i/1024 is a complex number: its real part is 1, and its imaginary part is 1/1024 = 0.0009765625.
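Again, you can quickly check how good this complex version of the approximation is. Here's a two-line sanity check (a sketch, using Python's standard cmath module):

```python
import cmath

eps = 1 / 1024
exact = cmath.exp(1j * eps)   # the 'true' value of e^(i/1024)
approx = 1 + 1j * eps         # our approximation 1 + i/1024
print(exact)                  # (0.9999995232...+0.0009765623...j)
print(abs(exact - approx))    # the error: about 5 parts in ten million
```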

Let's now work our way up again by using that complex number 1 + i/1024 = 1 + i·0.0009765625 to calculate e^(i/512), e^(i/256), e^(i/128), etcetera. All the way back up to x = 1, i.e. e^i. I'll just use a different symbol for x: in the table below, I'll substitute s for x, because I'll refer to the real part of our complex numbers as 'x' from time to time (even if I write a and b in the table below), and so I can't use the symbol x to denote the fraction. [I could have started with s, but then… Well… Real numbers are usually denoted by x, and so it was easier to start that way.] In any case…

The thing to note is how I calculate those values e^(i/512), e^(i/256), e^(i/128), etcetera. I am doing it by squaring, i.e. I just multiply the (complex) number by itself. To be very explicit, note that e^(i/512) = (e^(i/1024))² = e^(i·2/1024) = (e^(i/1024))·(e^(i/1024)). So all that I am doing in the table below is multiply the complex number that I have with itself, and then I have a new result, and then I square that once again, and then again, and again, and again, etcetera. In other words, when going back up, I am just taking the square of a (complex) number. Of course, you know how to multiply a number with itself but, because we're talking complex numbers here, we should actually write it out:

(a + i·b)² = a² – b² + i·2ab = a² – b² + 2ab·i

[It would be good to always separate the imaginary unit i from real numbers like a, b, or ab, but then I am lazy, and so I hope you'll always recognize that i is the imaginary unit.] In any case… When we're going back up (by squaring), the real part of the next number (i.e. the 'x' in x + iy) is a² – b², and the imaginary part (the 'y') is 2ab. So that's what's shown below—in the fourth and fifth column, that is.

[Table: squaring our way back up from e^(i/1024) to e^i]

Look at what happens. The x goes to zero and then becomes negative, and the y increases to one. Now, we went down from e^(1/n) = e^(1/1) = e to e^(1/n) = e^(1/1024), but we could have started with e^2, or e^(4/n), or whatever. Hence, I should actually continue the calculations above so you can see what happens when s goes to 2, and then to 3, and then to 4, and so on and so on. What you'd see is that the value of the real and imaginary part of this complex exponential goes up and down between –1 and +1. You'd see both are periodic functions, like the sine and cosine functions, which I added in the last two columns of the table above. Now compare those a and b values (i.e. the second and third column) with the cosine and sine values (i.e. the last two columns). […] Do you see it? Do you see how close they are? Only a few parts in a million for the smaller fractions, indeed.

You need to let this sink in for a while. And I'd recommend you make a spreadsheet yourself, so you really 'get' what's going on here. That's all there is to the so-called 'magic' of Euler's formula. That simple (a + ib)² = a² – b² + 2ab·i formula shows us why (and how) the real and imaginary part oscillate between –1 and +1, just like the cosine and sine function. In fact, the values are so close that it's easy to understand what follows. They are the same—in the limit, of course.
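If a spreadsheet feels like too much work, the whole 'going back up' exercise is a four-line loop (a sketch; the starting value is our 1 + i/1024 approximation):

```python
import cmath

z = 1 + 1j / 1024        # our approximation for e^(i/1024)
for _ in range(10):
    z = z * z            # squaring doubles the fraction: e^(i·s) -> e^(i·2s)
print(z)                 # ≈ 0.54057 + 0.84188j: the 'algebraic' cos(1) and sin(1)
print(cmath.exp(1j))     # ≈ 0.540302 + 0.841471j: the actual cos(1) + i·sin(1)
```

The two results differ in the fourth digit only: that's the little error we carry with us from the start, which the next paragraphs discuss.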

Indeed, these values a² – b² and 2ab, i.e. the real and imaginary part of the next complex number in our series, are what Feynman refers to as the algebraic cosine and sine functions, because we calculate them algebraically, as (a + ib)² = a² – b² + 2ab·i. These algebraic cosine and sine values are close to the real cosine and sine values, especially for small fractions s. Of course, there is a discrepancy because – when everything is said and done – we do carry a little error with us from the start, because we stopped at 1/n = 1/1024 before going back up.

There's actually a much more obvious way to appreciate the error: we know that e^(i/1024) should be some point on the unit circle itself. Therefore, we should not equate a with 1 if we have some value b > 0. Or – what amounts to saying the same – if b is slightly bigger than 0, then a should be slightly smaller than 1. So the e^(iε) ≈ 1 + iε formula is an approximation only. It cannot be exact for positive values of ε. It's only exact when ε = 0.

So we're off—but not far off, as you can see. In addition, you should note that the error becomes bigger and bigger for larger s. For example, in the line for s = 1, we calculated the values of the algebraic cosine and sine for s = 2 (see the a² – b² and 2ab columns) as –0.416553 and 0.910186, but the actual values are cos(2) = –0.416146 and sin(2) = 0.909297, which shows our algebraic cosine and sine functions are gradually losing accuracy indeed (we're off like one part in a thousand here, instead of one part in a million). That's what we'd expect, of course, as we're compounding the errors as we move 'back up'.

The graph below plots the values of the table.

[Graph: the algebraic cosine and sine data points from the table above]

This graph also shows that, as we're doubling our fraction s all the time, the data points are being spaced out more and more. This 'spacing out' gets a lot worse when further increasing s: from s = 1 (that's the 'highest' point in the graph above), we'd go to s = 2, and then to s = 4, s = 8, etcetera. Now, these values are not shown above but you can imagine where they are: for s = 2, we're somewhere in the second quadrant, for s = 4, we're in the third, etcetera. So that does not make for a smooth graph. We need points in-between. So let's 'fix' this problem by taking just one value for s out of the table (s = 1/4, for example) and we'll continue to use that value as a multiplier.

That's what's done in the table below. It looks somewhat daunting at first but it's simple really. First, we multiply the value we got for e^(i/4) with itself once again, so that gives us a real and an imaginary part for e^(i/2) (we had that already in the table above and you can check: we get the same here). We then take that value (i.e. e^(i/2)) not to multiply it with itself but with e^(i/4) once again. Of course, because the complex numbers are not the same, we cannot use the (a + ib)² = a² – b² + 2ab·i rule any more. We must now use the more general rule for multiplying different complex numbers: (a + ib)(c + id) = (ac – bd) + i(ad + bc). So that's why I have an a, b, c and d column in this table: a and b are the components of the first number, and c and d of the second (i.e. e^(i/4) = 0.969031 + 0.247434i).

[Table: stepping around the unit circle in increments of s = 1/4, by repeated multiplication with e^(i/4)]

In the table above, I let s range from zero (0) to seven (7) in steps of 0.25 (= 1/4). Once again, I've added the real cosine and sine values for these angles (they are, of course, expressed in radians), because that's what s is here: an angle, aka the phase of the complex number. So you can compare.
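Here's the same stepping procedure as a few lines of Python (a sketch; the step value is the approximate e^(i/4) from the table, so it carries the same small error):

```python
import cmath

step = 0.969031 + 0.247434j   # our (approximate) value for e^(i/4)
z = 1 + 0j                    # s = 0: e^0 = 1
for k in range(1, 9):
    z = z * step              # each multiplication adds 1/4 to the angle s
    s = k / 4
    print(f"s = {s}: algebraic = {z:.6f}, actual = {cmath.exp(1j * s):.6f}")
```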

The table confirms, once again, that we're slowly losing accuracy (we're now 3 to 4 parts in a thousand off), but very slowly only indeed: we'd need to do many 'loops' around the center before we could actually see the difference on a graph. Hey! Let's do a graph. [Excel is such a great tool, isn't it?] Here we are: the thick black line describing a circle on the graph below connects the actual cosine and sine values associated with an angle of 1/4, 1/2, 3/4, etcetera, all the way up to 7 (7 is about 2.2π, so we're some 40 degrees past our original point after the 'loop'), while the little '+' marks are the data points for the algebraic cosine and sine. They match perfectly because our eye cannot see the little discrepancy.

[Graph: the full circle—actual cosine and sine values versus the algebraic data points]

So… That’s it. End of story.

What?

Yes. That's it. End of story. I've done what I promised to do. I constructed the sine and cosine functions algebraically. No compass. 🙂 Just plain arithmetic, including one extra rule only: i² = –1. That's it.

So I hope I succeeded. The goal was to take some of the magic out of Euler's formula by showing how that e^(iε) ≈ 1 + iε approximation and the definition of i² = –1 give us the cosine and sine functions as we move around the unit circle, starting from the unity point on the real axis, as shown in that little graph:

[Illustration: e^(iε) ≈ 1 + iε near the unity point on the unit circle]

Of course, the ε we were working with was much smaller than the size of the arrow suggests (it was equal to 1/1024 ≈ 0.000977 to be precise) but that’s just to show how differentials work. 🙂 Pretty good, isn’t it? 🙂

Post scriptum:

I. If anything, all this post did was to demonstrate multiplication of complex numbers. Indeed, when everything is said and done, exponentiation is repeated multiplication–both for real as well as for complex exponents. The only difference is–well… Complex exponents give us these oscillating things, because a complex exponent effectively throws a sine and cosine function in.

Now, we can do all kinds of things with that. In this post, we constructed a circle without a compass. Now, that's not as good as squaring the circle 🙂 but, still, it would have awed Pythagoras. Below, I construct a spiral doing the same kind of math: I start off with a complex number again but now it's somewhat more off the unit circle (1 + 0.247434i). In fact, I took the same sine value as the one we had for e^(i/4), but I replaced the cosine value (0.969031) with 1 exactly. In other words, my ε is a lot bigger here.

Then I multiply that complex number 1 + 0.247434i with itself to get the next number (0.938776 + 0.494868i), and then I multiply that result once again with my first number (1 + 0.247434i), just like we did when we were constructing the circle. And then it goes on and on and on. So the only difference is the initial value: that's a bit more off the unit circle. [When we constructed the circle, our initial value was also a bit off, but much less. Here we go for a much larger difference.]

[Table: the spiral calculation]

[Graph: the spiral]

So you can see what happens: multiplying complex numbers amounts to adding angles and multiplying magnitudes: α·e^(iβ) · γ·e^(iδ) = α·γ·e^(i(β+δ)). So, because we started off with a complex number with magnitude slightly bigger than 1 (you calculate it using Pythagoras' theorem: it's about 1.03, so 3% off, as opposed to less than one part in a million for the 1 + 0.000977i number), the next point is, of course, slightly off the unit circle too, and somewhat more than 3% actually. And so that goes on and on and on, and the 'somewhat more' becomes bigger and bigger in the process.
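You can watch the magnitude drift in just a few lines (a sketch of the same iteration):

```python
z0 = 1 + 0.247434j   # our starting point, slightly off the unit circle
z = z0
for k in range(1, 13):
    z = z * z0       # each multiplication rotates and stretches by |z0| ≈ 1.03
    print(f"step {k}: |z| = {abs(z):.4f}")
# The magnitude grows by about 3% per step, so the points spiral outwards.
```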

Constructing a graph like this one is like doing the kind of silly stuff I did when programming little games on our Commodore 64 in the 1980s, so I shouldn't dwell too much on this. In fact, now that I think of it: I should have started near –i, then my spiral would have resembled an e. 🙂 And, yes – for the family reading this – this is also like the favorite hobby of our dad: calculating a better value for π. 🙂

However… The only thing I should note, perhaps, is that this kind of iterative process resembles – to some extent – the kind of process that iterated function systems (IFSs) use to create fractals. So… Well… It's just nice, I guess. [OK. That's just an excuse. Sorry.]

II. The other thing that I demonstrated in this post may seem to be trivial but I’ll emphasize it here because it helped me (not sure about you though) to understand the essence of real exponentials much better than I did before. So, what is it?

Well… It’s that rather remarkable fact that calculating (real) irrational powers amounts to doing some infinite iteration. What do I mean with that?

Well… Remember that we kept on taking the square root of e, so we calculated e^(1/2), and then (e^(1/2))^(1/2) = e^(1/4), and then (e^(1/4))^(1/2) = e^(1/8), and then we went on: e^(1/16), e^(1/32), e^(1/64), all the way down to e^(1/1024), where we stopped. That was 10 iterations only. However, it was clear we could go on and on and on, to find that limit we know so well: e^(1/Δ) tends to 1 (not to zero (0), and not to e either!) for Δ → ∞.

Now, e = e^1 is an exponential itself, and so we can switch to another base, base 10 for example, using the general a^s = (b^k)^s = b^(k·s) = b^t formula, with k = log_b(a). Let's do base 10: we get e^1 = [10^(log10(e))]^1 = 10^(0.434294…), etcetera. Now, because e is an irrational number, log10(e) is irrational too, so we indeed have an infinite number of decimals behind the decimal point in 0.434294…, etcetera. In fact, e is not only irrational but transcendental: we can't calculate it algebraically, i.e. as the root of some polynomial with rational coefficients. Most irrational numbers are like that, by the way, so don't think that being 'transcendental' is very special. In any case… That's a finer point that doesn't matter much here. You get the idea, I hope. It's the following:

  1. When we have a rational power a^(m/n), it helps to think of it as a product of m factors a^(1/n) (and surely if we would want to calculate a^(m/n) without using a calculator, which, I admit, is not very fashionable anymore, and so nobody ever does that: too bad, because the manual work involved does help to better understand things). Let's write it down: a^(m/n) = a^(m·(1/n)) = (a^(1/n))^m = a^(1/n)·a^(1/n)·a^(1/n)·… (m times). That's simple indeed: exponentiation is repeated multiplication. [Of course, if m is negative, then we just write a^(m/n) as 1/(a^(–m/n)), but that doesn't change the general idea of exponentiation.]
  2. However, it is much more difficult to see why, and how, exponentiation with irrational powers amounts to repeated multiplication too. The rather lengthy exposé above shows… Well, perhaps not why, but surely how. [And in math, if we can show how, that usually amounts to showing why also, doesn't it? :-)] Indeed, when we think of a^r (i.e. an irrational power of some (real) number a), we can think of it as a product of an infinite number of factors a^(r/Δ). Indeed, we can write a^r as:

a^r = a^(r·(1/Δ + 1/Δ + 1/Δ + 1/Δ + …)) = a^(r/Δ)·a^(r/Δ)·a^(r/Δ)·a^(r/Δ)·…

Not convinced? Let's work an example: 10^π = [e^(ln10)]^π = e^(ln10·π) = e^(7.233784…). Of course, if you take your calculator, you'll find something like 1385.455731, both for 10^π and e^(7.233784…) (hopefully!), but that's not the point here. We've shown that e is an infinite product e^(1/Δ)·e^(1/Δ)·e^(1/Δ)·e^(1/Δ)·… = e^(1/Δ + 1/Δ + 1/Δ + 1/Δ + …) = e^(Δ·(1/Δ)) = e^1, with Δ some infinitely large (but integer) number. In our example, we stopped the calculation at Δ = 1024, but you see the logic: we could have gone on forever. Therefore, we can write e^(7.233784…) as:

e^(7.233784…) = e^(7.233784…·(1/Δ + 1/Δ + 1/Δ + 1/Δ + …)) = e^(7.233784…/Δ)·e^(7.233784…/Δ)·e^(7.233784…/Δ)·…

Still not convinced? Let's revert back to base 10. We can write the factors e^(7.233784…/Δ) as e^((ln10·π)/Δ) = [e^(ln10)]^(π/Δ) = 10^(π/Δ). So our original power 10^π is equal to: 10^π = 10^(π/Δ)·10^(π/Δ)·10^(π/Δ)·10^(π/Δ)·10^(π/Δ)·10^(π/Δ)·… = 10^(π·(Δ/Δ)) and, of course, 10^(1/Δ) also tends to 1 as Δ goes to infinity (not to zero, and not to 10 either). 🙂 So, yes, we can do this for any real number a and for any r really.
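A numerical version of the argument, for the skeptics (just a sketch, with Δ = 1024 instead of infinity):

```python
import math

delta = 1024
factor = 10 ** (math.pi / delta)   # one of the Δ equal factors 10^(π/Δ)
print(factor ** delta)             # the product of all Δ factors: ≈ 1385.455731
print(10 ** math.pi)               # 10^π directly: ≈ 1385.455731
print(10 ** (1 / delta))           # ≈ 1.002251: 10^(1/Δ) tends to 1 as Δ grows
```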

Again, this may look very trivial to the trained mathematical eye but, as a novice in Mathematical Wonderland, I felt I had to go through this to truly understand irrational powers. So it may or may not help you, depending on where you are in MW.

[Proving that the sum 1/Δ + 1/Δ + … + 1/Δ (Δ terms in total) equals 1 as Δ goes to ∞ should not be necessary, I hope? 🙂 But, just in case you wonder how the formulas for rational and irrational powers could possibly be related, we can just write a^(m/n) = a^((m/n)·(1/Δ + 1/Δ + 1/Δ + 1/Δ + …)) = a^(m/(n·Δ))·a^(m/(n·Δ))·a^(m/(n·Δ))·… = (a^(1/Δ + 1/Δ + 1/Δ + 1/Δ + …))^(m/n) = a^(m/n), as we would expect. :-)]

III. So how does that a^r = a^(r/Δ)·a^(r/Δ)·a^(r/Δ)·a^(r/Δ)·… formula work for complex exponentials? We just add the i, so we write a^(ir), but we know what effect that has: we have a different beast now. A complex-valued function of r, or… Well… If we keep the exponent fixed, then it's a complex-valued function of a! Indeed, do remember we have a choice here (and two inverse functions as well!).

However, note that we can write a^(ir) in two slightly different ways. We have two interpretations here really:

A. The first interpretation is the easiest one: we write a^(ir) as a^(ir) = (a^r)^i = (a^(r/Δ + r/Δ + r/Δ + r/Δ + …))^i.

So we have a real power here, a^r, and so that's some real number, and then we raise it to the power i to create that new beast: a complex-valued function with two components, one imaginary and one real. And then we know how to relate these to the sine and cosine function: we just change the base to e and then we're done.

In fact, now that we're here, let's go all the way and do it. As mentioned in my previous post – it follows out of that a^s = (e^k)^s = e^(k·s) = e^t formula, with k = ln(a) – the only effect of a change of base is a change of scale of the horizontal axis: the graph of a^s is fully identical to the graph of e^t indeed: we just need to substitute s by t = k·s = ln(a)·s. That's all. So we actually have our 'Euler formula for a^(is)' here. For example, for base 10, we have 10^(is) = cos[ln(10)·s] + i·sin[ln(10)·s].
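You can check that base-10 formula numerically too (a sketch; Python happily computes complex powers of real numbers):

```python
import math

s = 0.75
lhs = 10 ** (1j * s)   # 10^(is), computed directly as a complex power
rhs = complex(math.cos(math.log(10) * s), math.sin(math.log(10) * s))
print(lhs)             # ≈ -0.15551 + 0.98783j
print(rhs)             # the same: cos(ln(10)·s) + i·sin(ln(10)·s)
```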

But let's not get lost in the nitty-gritty here. The idea here is that we let i 'act' on a^r, so to say. And then, of course, we can write a^r as we want, but that doesn't change the essence of what we're dealing with.

B. The second interpretation is somewhat more tricky: we write a^(ir) as a^(ir) = a^(ir/Δ)·a^(ir/Δ)·a^(ir/Δ)·a^(ir/Δ)·…

So that's a product of an (infinite) number of complex factors a^(ir/Δ). Now, that is a very different interpretation than the one above, even if the mathematical result when putting real numbers in for a and r will – obviously – have to be the same. If the result is the same, then what am I saying really? Well… Nothing much, I guess. Just that the interpretation of exponentiation as repeated multiplication makes sense for complex exponentials as well:

  • For rational r = m/n, we'll have a finite number of complex factors: a^(i·m/n) = a^(i/n)·a^(i/n)·a^(i/n)·a^(i/n)·… (m times).
  • For irrational r, we'll have an infinite number of complex factors: a^(ir) = a^(ir/Δ)·a^(ir/Δ)·a^(ir/Δ)·a^(ir/Δ)·… etcetera.

So the difference with the first interpretation is that, instead of looking at a^(ir) as a real number a^r that's being raised to the complex power i, we're looking at a^(ir) as a complex number a^i that's being raised to the real power r. As said, the mathematical result when putting real numbers in for a and r will – obviously – have to be the same. [Otherwise we'd be in serious trouble, of course: math is math. We can't have the same thing being associated with two different results.] But, as said, we can effectively interpret a^(ir) in two ways.

[…]

What I am doing here, of course, is contemplating all kinds of mathematical operations – including exponentiation – on the complex space, rather than on the real space. So the first step is to raise a complex number to a real power (as opposed to raising a real number to a complex power). The next step will be to raise a complex number to a complex power. So then we're talking complex-valued functions of complex variables.

Now, that's what complex analysis is all about, and I've written very extensively about that in my October-November 2013 posts. So I would encourage you to re-read those, now that you've got, hopefully, a bit more of an 'intuitive' understanding of complex numbers with the background given in this and my previous post.

Complex analysis involves mapping (i.e. mapping from one complex space to another) and that, in turn, involves the concept of so-called analytic and/or holomorphic functions. Understanding those advanced concepts is, in turn, essential to understanding the kind of things that Penrose is writing about in Chapters 9 to 12 of his Road to Reality. […] I'll probably re-visit these chapters myself in the coming weeks, as I realize I might understand them somewhat better now. If I could get through these, I'd be at page 250 or so, so that's only one quarter of the total volume. Just an indication of how long that Road to Reality really is. 🙂

And then I am still not sure if it really leads to ‘reality’ because, when everything is said and done, those new theories (supersymmetry, M-theory, or string theory in general) are quite speculative, aren’t they? 🙂

End of the Road to Reality?

Or the end of theoretical physics?

In my previous post, I mentioned the Goliath of science and engineering: the Large Hadron Collider (LHC), built by the European Organization for Nuclear Research (CERN) under the Franco-Swiss border near Geneva. I actually started uploading some pictures, but then I realized I should write a separate post about it. So here we go.

The first image (see below) shows the LHC tunnel, while the other shows (a part of) one of the two large general-purpose particle detectors that are part of this Large Hadron Collider. A detector is the thing that's used to look at those collisions. This is actually the smaller of the two general-purpose detectors: it's the so-called CMS detector (the other one is the ATLAS detector), and it's 'only' 21.6 meter long and 15 meter in diameter – and it weighs about 12,500 tons. But so it did detect a Higgs particle – just like the ATLAS detector. [That's actually not 100% sure, but it was sure enough for the Nobel Prize committee – so I guess that should be good enough for us common mortals. :-)]

[Images: the LHC tunnel, and the CMS detector]

[Image: a collision event in the CMS detector]

The picture above shows one of these collisions in the CMS detector. It's not the one with the trace of the Higgs particle though. In fact, I have not found any image that actually shows the Higgs particle: the closest thing to such an image are some impressionistic images on the ATLAS site. See http://atlas.ch/news/2013/higgs-into-fermions.html.

In case you wonder what's being scattered here… Well… All kinds of things – but so the original collision is usually between protons (so these are hydrogen nuclei, i.e. H+ ions), although the LHC can produce other nucleon beams as well (collectively referred to as hadrons). These protons have energy levels of 4 TeV (tera-electronvolt: 1 TeV = 1,000 GeV = 1 trillion eV = 1×10^12 eV).

Now, let's think about scale once again. Remember (from that same previous post) that we calculated a wavelength of 0.33 nanometer (1 nm = 1×10^–9 m, so that's a billionth of a meter) for an electron. Well, this LHC is actually exploring the sub-femtometer (fm) frontier. One femtometer (fm) is 1×10^–15 m, so that's another million times smaller. Yes: so we are talking a millionth of a billionth of a meter. The size of a proton is an estimated 1.7 femtometer indeed and, as you surely know, a proton is a point-like thing occupying a very tiny space, so it's not like an electron 'cloud' swirling around: it's much smaller. In fact, quarks – three of them make up a proton (or a neutron) – are usually thought of as being just a little bit less than half that size – so that's about 0.7 fm.

It may also help you to use the value I mentioned for high-energy electrons when I was discussing the LEP (the Large Electron-Positron Collider, which preceded the LHC) – so that was 104.5 GeV – and calculate the associated de Broglie wavelength using E = h·f and λ = v/f. The velocity v is close to c and, hence, if we plug everything in, we get a value close to 1.2×10^–17 m, so that's about a hundredth of a femtometer: LEP was already probing the sub-femtometer scale indeed. [If you don't want to calculate anything, then just note we're going from eV to giga-eV energy levels here, and so our wavelength decreases accordingly. Also remember (from the previous posts) that we calculated a wavelength of 0.33×10^–9 m and an associated energy level of 70 eV for a slow-moving electron – i.e. one going at 2,200 km per second 'only', i.e. less than 1% of the speed of light.] Also note that, at these energy levels, it doesn't matter whether or not we include the rest mass of the electron: 0.511 MeV is nothing as compared to the GeV realm. In short, we are talking very, very tiny stuff here.
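Here's that little calculation, if you want to check it yourself (a sketch; constants rounded):

```python
h = 4.135667e-15    # Planck's constant in eV·s
c = 299792458.0     # speed of light in m/s
E = 104.5e9         # LEP beam energy in eV

f = E / h           # E = h·f
lam = c / f         # λ = v/f, with v ≈ c
print(lam)          # ≈ 1.19e-17 m: about a hundredth of a femtometer
```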

But so that’s the LEP scale. I wrote that the LHC is probing things at the sub-femtometer scale. So how much sub-something is that? Well… Quite a lot: the LHC is looking at stuff at a scale that’s more than a thousand times smaller. Indeed, if collision experiments in the giga-electronvolt (GeV) energy range correspond to probing stuff at the femtometer scale, then tera-electronvolt (TeV) energy levels correspond to probing stuff that’s, once again, another thousand times smaller, so we’re looking at distances of less than a thousandth of a millionth of a billionth of a meter. Now, you can try to ‘imagine’ that, but you can’t really.

So what do we actually 'see' then? Well… Nothing much, one could say: all we can 'see' are traces of point-like 'things' being scattered, which then disintegrate or just vanish from the scene – as shown in the image above. In fact, as mentioned above, we do not even have such a clear-cut 'trace' of a Higgs particle: we've got a 'kinda signal' only. So that's it? Yes. But then these images are beautiful, aren't they? If only to remind ourselves that particle physics is about more than just a bunch of formulas. It's about… Well… The essence of reality: its intrinsic nature, so to say. So… Well…

Let me be skeptical. So we know all of that now, don't we? The so-called Standard Model has been confirmed by experiment. We now know how Nature works, don't we? We observe light (or, to be precise, radiation: most notably that cosmic background radiation that reaches us from everywhere) that originated nearly 14 billion years ago (to be precise: 380,000 years after the Big Bang – but what's 380,000 years on this scale?) and so we can 'see' things that are 14 billion light-years away. In fact, things that were 14 billion light-years away: indeed, because of the expansion of the universe, they are further away now, and so that's why the so-called observable universe is actually larger. So we can 'see' everything we need to 'see' at the cosmic distance scale, and now we can also 'see' all of the particles that make up matter, i.e. quarks and electrons mainly (we also have some other so-called leptons, like neutrinos and muons), and also all of the particles that make up anti-matter of course (i.e. antiquarks, positrons, etcetera). As importantly – or even more so – we can also 'see' all of the 'particles' carrying the forces governing the interactions between the 'matter particles' – which are collectively referred to as fermions, as opposed to the 'force carrying' particles, which are collectively referred to as bosons (see my previous post on Bose and Fermi). Let me quickly list them – just to make sure we're on the same page:

  1. Photons for the electromagnetic force.
  2. Gluons for the so-called strong force, which explains why positively charged protons ‘stick’ together in nuclei – in spite of their electric charge, which should push them away from each other. [You might think it’s the neutrons that ‘glue’ them together but so, no, it’s the gluons.]
  3. W+, W–, and Z bosons for the so-called 'weak' interactions (aka Fermi's interaction), which explain how one type of quark can change into another, thereby explaining phenomena such as beta decay. [For example, carbon-14 will – through beta decay – spontaneously decay into nitrogen-14. Indeed, carbon-12 is the stable isotope, while carbon-14 has a half-life of 5,730 ± 40 years 'only' 🙂 and, hence, measuring how much carbon-14 is left in some organic substance allows us to date it (that's what (radio)carbon-dating is about). As for the name, a beta particle can refer to an electron or a positron, so we can have β– decay (e.g. the above-mentioned carbon-14 decay) as well as β+ decay (e.g. magnesium-23 into sodium-23). There's also alpha and gamma decay but that involves different things. In any case… Let me end this digression within the digression.]
  4. Finally, the existence of the Higgs particle – and, hence, of the associated Higgs field – had been predicted since 1964 already, but it was only experimentally confirmed (i.e. we saw it, in the LHC) last year, so Peter Higgs – and a few others of course – got their well-deserved Nobel Prize only 50 years later. The Higgs field gives fermions, and also the W+, W–, and Z bosons, their mass (but not photons and gluons, and so that's why the weak force has such a short range – as compared to the electromagnetic and strong forces).

So there we are. We know it all. Sort of. Of course, there are many questions left – so it is said. For example, the Higgs particle does not actually explain the gravitational force, so it's not the (theoretical) graviton, and so we do not have a quantum field theory for the gravitational force. [Just Google it and you'll see why: there are theoretical as well as practical (experimental) reasons for that.] Secondly, while we do have a quantum field theory for all of the forces (or 'interactions', as physicists prefer to call them), there are a lot of constants in them (many more than just that Planck constant I introduced in my posts!) that seem to be 'unrelated and arbitrary.' I am obviously just quoting Wikipedia here – but it's true.

Just look at it: three 'generations' of matter with various strange properties, four force fields (and some 'gauge theory' to provide some uniformity), bosons that have mass (the W+, W–, and Z bosons, and then the Higgs particle itself) but then photons and gluons don't… It just doesn't look good, and then Feynman himself wrote, just a few years before his death (QED, 1985, p. 128), that the math behind calculating some of these constants (the coupling constant j for instance, or the rest mass n of an electron), which he actually invented (it makes use of a mathematical approximation method called perturbation theory) and for which he got a Nobel Prize, is a "dippy process" and that "having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent". He adds: "It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization ["the shell game that we play to find n and j", as he calls it] is not mathematically legitimate." And so he writes this about quantum electrodynamics, not about "the rest of physics" (that's quantum chromodynamics (QCD) – the theory of the strong interactions – and quantum flavordynamics (QFD) – the theory of the weak interactions) which, he adds, "has not been checked anywhere near as well as electrodynamics."

Wow! That's a pretty damning statement, isn't it? In short, all of the celebrations around the experimental confirmation of the Higgs particle cannot hide the fact that it all looks a bit messy. There are other questions as well – most of which I don't understand, so I won't mention them. To make a long story short, physicists and mathematicians alike seem to think there must be some 'more fundamental' theory behind it all. But – Hey! – you can't have it all, can you? And, of course, all these theoretical physicists and mathematicians out there do need to justify their academic budget, don't they? And so all that talk about a Grand Unification Theory (GUT) is probably just what it is: talk. Isn't it? Maybe.

The key question is probably easy to formulate: what's beyond this scale of a thousandth of a proton diameter (0.001×10^–15 m) – a thousandth of a millionth of a billionth of a meter, that is? Well… Let's first note that this so-called 'beyond' is a 'universe' which mankind (or let's just say 'we') will never see. Never ever. Why? Because there is no way to go substantially beyond the 4 TeV energy levels that were reached last year – at great cost – in the world's largest particle collider (the LHC). Indeed, the LHC is widely regarded not only as "the most complex and ambitious scientific project ever accomplished by humanity" (I am quoting a CERN scientist here) but – with a cost of more than 7.5 billion euro – also as one of the most expensive ones. Indeed, taking into account inflation and all that, it was like the Manhattan project indeed (although scientists loathe that comparison). So we should not have any illusions: there will be no new super-duper LHC any time soon, and surely not during our lifetime: the current LHC is the super-duper thing!

Indeed, when I write 'substantially' above, I really mean substantially. Just to put things in perspective: the LHC is currently being upgraded to produce 7 TeV beams (it was shut down for this upgrade, and it should come back on stream in 2015). That sounds like an awful lot (from 4 to 7 is +75%), and it is: it amounts to packing the kinetic energy of seven flying mosquitos (instead of four previously :-)) into each and every particle that makes up the beam. But that's not substantial, in the sense that it is very much below the so-called GUT energy scale, which is the energy level above which, it is believed (by all those GUT theorists at least), the electromagnetic force, the weak force and the strong force will all be part and parcel of one and the same unified force. Don't ask me why (I'll know when I've finished reading Penrose, I hope) but that's what it is (if I should believe what I am reading currently, that is). In any case, the thing to remember is that the GUT energy levels are in the 10^16 GeV range, so that's – sorry for all these numbers – some ten trillion TeV. That amounts to pumping about 1.6 million joule into each of those tiny point-like particles that make up our beam. So… No. Don't even try to dream about it. It won't happen. That's science fiction – with the emphasis on fiction. [Also don't dream about a trillion flying mosquitos packed into one proton-sized super-mosquito either. :-)]
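Just to show where that number comes from (a one-line conversion, using 1 eV ≈ 1.6×10^–19 J):

```python
eV = 1.602176634e-19      # one electronvolt, in joule
E_GUT = 1e16 * 1e9 * eV   # 10^16 GeV expressed in joule
print(E_GUT)              # ≈ 1.6e6 J: about 1.6 million joule per particle
```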

So what?

Well… I don't know. Physicists refer to the zone beyond the above-mentioned scale (so things smaller than 0.001×10^–15 m) as the Great Desert. That's a very appropriate name, I think – for more than one reason. And so it's this 'desert' that Roger Penrose is actually trying to explore in his Road to Reality. As for me, well… I must admit I have great trouble following Penrose on this road. I've actually started to doubt that Penrose's Road leads to Reality. Maybe it takes us away from it. Huh? Well… I mean… Perhaps the road just stops at that 0.001×10^–15 m frontier?

In fact, that's a view which one of the early physicists specialized in high-energy physics, Raoul Gatto, referred to as the zeroth scenario. I am actually not quoting Gatto here, but another theoretical physicist: Gerard 't Hooft, another Nobel Prize winner (you may know him better because he's a rather fervent Mars One supporter, but here I am referring to his popular 1996 book In Search of the Ultimate Building Blocks). In any case, Gatto, and most other physicists, including 't Hooft (despite the fact that 't Hooft got his Nobel Prize for his contribution to gauge theory – which, together with Feynman's application of perturbation theory to QED, is actually the backbone of the Standard Model), firmly reject this zeroth scenario. 't Hooft himself thinks superstring theory (i.e. supersymmetric string theory – which has now been folded into M-theory or – back to the original term – just string theory: the terminology is quite confusing) holds the key to exploring this desert.

But who knows? In fact, we can't know – because of the above-mentioned practical problem of experimental confirmation. So I am likely to stay on this side of the frontier for quite a while – if only because there's still so much to see here and, of course, also because I am just at the beginning of this road. 🙂 And then I also realize I'll need to understand gauge theory and all that to continue on this road – which is likely to take me another six months or so (if not more) and then, only then, I might try to look at those little strings, even if we'll never see them because… Well… Their theoretical diameter is the so-called Planck length. So what? Well… That's equal to 1.6×10^–35 m. So what? Well… Nothing. It's just that 1.6×10^–35 m is about one part in 10^17 of that sub-femtometer scale. I don't even want to write this in trillionths of trillionths of trillionths etcetera because I feel that's just not making any sense. And perhaps it doesn't. One thing is for sure: that 'desert' that GUT theorists want us to cross is not just 'Great': it's ENORMOUS!

Richard Feynman – another Nobel Prize scientist whom I obviously respect a lot – surely thought trying to cross a desert like that amounts to certain death. Indeed, he's supposed to have said the following about string theorists, about a year or two before he died (way too young): "I don't like that they're not calculating anything. I don't like that they don't check their ideas. I don't like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say, 'Well, it might be true.' For example, the theory requires ten dimensions. Well, maybe there's a way of wrapping up six of the dimensions. Yes, that's all possible mathematically, but why not seven? When they write their equation, the equation should decide how many of these things get wrapped up, not the desire to agree with experiment. In other words, there's no reason whatsoever in superstring theory that it isn't eight out of the ten dimensions that get wrapped up and that the result is only two dimensions, which would be completely in disagreement with experience. So the fact that it might disagree with experience is very tenuous, it doesn't produce anything; it has to be excused most of the time. It doesn't look right."

Hmm… Feynman and ‘t Hooft… Two giants in science. Two Nobel Prize winners – and for stuff that truly revolutionized physics. The amazing thing is that those two giants – who are clearly at loggerheads on this one – actually worked closely together on a number of other topics – most notably on the so-called Feynman–‘t Hooft gauge, which – as far as I understand – is the one that is most widely used in quantum field calculations. But I’ll leave it at that here – and just make a mental note of the terminology. The Great Desert… Probably an appropriate term. ‘t Hooft says that most physicists think that desert is full of tiny flowers. I am not so sure – but then I am not half as smart as ‘t Hooft. Much less actually. So I’ll just see where the road I am currently following leads me. With Feynman’s warning in mind, I should probably expect the road condition to deteriorate quickly.

Post scriptum: You will not be surprised to hear that there’s a word for 1×10^–18 m: it’s called an attometer (with two t’s, and abbreviated as am). And beyond that we have the zeptometer (1 zm = 1×10^–21 m) and the yoctometer (1 ym = 1×10^–24 m). In fact, these measures actually represent something: 20 yoctometers is the estimated radius of a 1 MeV neutrino – or, to be precise, it’s the radius of its cross-section, which is “the effective area that governs the probability of some scattering or absorption event.” But so then there are no words anymore. The next measure is the Planck length: 1.62×10^−35 m – but so that’s still some sixty billion (6×10^10) times smaller than a yoctometer. Unimaginable, isn’t it? Literally.

Note: A 1 MeV neutrino? Well… Yes. The estimated rest mass of an (electron) neutrino is tiny: at least 50,000 times smaller than the mass of the electron and, therefore, neutrinos are often assumed to be massless, for all practical purposes that is. However, just like the massless photon, they can carry high energy. High-energy gamma-ray photons, for example, are also associated with MeV energy levels. Neutrinos are one of the many particles produced in high-energy particle collisions in particle accelerators, but they are present everywhere: they’re produced by stars (which, as you know, are nuclear fusion reactors). In fact, most neutrinos passing through Earth are produced by our Sun. The largest neutrino detector on Earth is called IceCube. It sits on the South Pole – or under it, really, as its sensors are buried deep in the Antarctic ice – and it regularly captures high-energy neutrinos in the range of 1 to 10 TeV. Last year (in November 2013), it captured two with energy levels around 1000 TeV – so that’s the peta-electronvolt level (1 PeV = 1×10^15 eV). If you think that’s amazing, it is. But also remember that 1 eV is 1.6×10^−19 Joule, so 1 PeV is ‘only’ a bit more than a ten-thousandth of a Joule. In other words, you would need a few hundred of them just to light up a small LED for a second or so. The PeV pair was dubbed Bert and Ernie and the illustration below (from IceCube’s website) conveys how the detectors sort of lit up when they passed. It was obviously a pretty clear ‘signal’ – but so the illustration also makes it clear that we don’t really ‘see’ at such small scale: we just know ‘something’ happened.

Bert and Ernie
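By the way, the conversion from electronvolt to Joule is simple enough to do in a few lines of Python – this is just my own scratchpad arithmetic, with a made-up 20 mW LED as the benchmark, so don’t quote me on the exact count:

```python
eV = 1.6e-19                # one electronvolt, expressed in Joule
PeV = 1e15 * eV             # the energy of one 'Bert/Ernie'-class neutrino
print(PeV)                  # 1.6e-4 J: a bit more than a ten-thousandth of a Joule
print(0.02 / PeV)           # a 20 mW LED burning for one second needs 0.02 J: ~125 such events
```

So, yes, Bert and Ernie were spectacular by neutrino standards – but not by everyday ones.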

Spaces

The term ‘space’ is all over the place when reading math. There are all kinds of spaces in mathematics and the terminology is quite confusing. Let’s first start with the definition of a space: a space (in mathematics) is, quite simply, just a set of things (elements or objects) with some kind of structure (and, yes, there is also a definition for the extremely general notion of ‘structure’: a structure on a set consists of ‘additional mathematical objects’ that relate to the set in some manner – I am not sure that helps, but so there you go).

The elements of the set can be anything. From what I read, I understand that a topological space might be the most general notion of a mathematical space. The Wikipedia article on it defines a topological space as “a set of points, along with a set of neighborhoods for each point, that satisfies a set of axioms relating points and neighborhoods.” It also states that “other spaces, such as manifolds and metric spaces, are (nothing but) specializations of topological spaces with extra structures or constraints.” However, the symbolism involved in explaining the concept is complex and probably prevents me from understanding the finer points. I guess I’d need to study topology for that – but so I am only doing a course in complex analysis right now. 🙂

Let’s go to something more familiar: the metric spaces. A metric space is a set where a notion of distance (i.e. a metric) between elements of the set is defined. That makes sense, doesn’t it?
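Just to make that tangible: here’s the usual Euclidean metric on R^2 in a few lines of Python (my own little illustration – any function satisfying the metric axioms would do just as well):

```python
import math

# the Euclidean distance on R^2: one example of 'a notion of distance'
def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q, r = (0, 0), (3, 4), (6, 0)
print(d(p, q))                       # 5.0
print(d(p, q) == d(q, p))            # True: a metric is symmetric
print(d(p, r) <= d(p, q) + d(q, r))  # True: the triangle inequality
```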

We have Euclidean and non-Euclidean metric spaces. We all know Euclidean spaces, as that is what we use for the simple algebra and calculus we all had to learn as teenagers. They can have two, three or more dimensions, but there is only one Euclidean space in any dimension (a line, the plane or, more in general, the Cartesian multidimensional coordinate system). In Euclidean geometry, we have the parallel postulate: within a two-dimensional plane, for any given line L and a point p which is not on that line, there is only one line through that point – the parallel line – which does not intersect with the given line. Euclidean geometry is flat – well… Kind of.

Non-Euclidean metric spaces are a study object in their own right but – to put it simply – in non-Euclidean geometry, we do not have the parallel postulate. If we do away with that, then there are two broad possibilities: either we have more than one parallel line or, else, we have none. Let’s start with the latter, i.e. no parallels. The most often quoted example of this is the sphere. A sphere is a two-dimensional surface where lines are actually great circles which divide the sphere in two equal hemispheres, like the equator or the meridians through the poles. A great circle is the shortest path between two points on the sphere and so that corresponds to the definition of a line on a sphere indeed. Now, any ‘line’ (any geodesic that is) through a point p off a ‘line’ (geodesic) L will intersect with L, and so there are no parallel lines on a sphere – at least not as per the mathematical definition of a parallel line. Indeed, parallel lines do not intersect. I have to note that it is all a bit confusing because the so-called parallels of latitude on a globe are small circles, not great circles, and, hence, they are not the equivalent of a line in spherical geometry. In short, the parallels of latitude are not parallel lines – in the mathematical sense of the word that is. Does that make sense? For me it makes sense enough, so I guess I should move on.

Other Euclidean geometric facts, such as the angles of a triangle summing up to 180°, cannot be observed on a sphere either: the angles of a triangle on a sphere add up to more than 180° (for the big triangle illustrated below, they add up to 90 + 90 + 50 = 230°). That’s typical of Riemannian or elliptic geometry in general and so, yes, the sphere is an example of Riemannian or elliptic geometry.

Triangles_(spherical_geometry)

Of course, it is probably useful to remind ourselves that a sphere is a two-dimensional surface in elliptic geometry, even if we’re visualizing it in a three-dimensional space when we’re discussing its properties. Think about the flatlander walking on the surface of the globe: for him, the sphere is two-dimensional indeed, and so it’s not us – our world is not flat: we think in 3D – but the flatlander who is living in a spherically geometric world. It’s probably useful to introduce the term ‘manifold’ here. Spheres – and other surfaces, like the saddle-shaped surfaces we will introduce next – are (two-dimensional) manifolds. Now what’s that? I won’t go into the etymology of the term because that doesn’t help I feel: apparently, it has nothing to do with the verb to fold, so it’s not something with many folds or so – although a two-dimensional manifold can look like something folded. A manifold is, quite simply, a topological space that near each point resembles Euclidean space, so we can define a metric on it indeed and do all kinds of things with that metric – locally that is. If the flatlander does these things – like measuring angles and lengths and what have you – close enough to where he is, then he won’t notice he’s living in a non-Euclidean space. That ‘fact’ is also shown above: the angles of a small (i.e. a local) triangle do add up to 180° – approximately that is.
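In fact, we can check both claims – 230° for the big triangle, and (approximately) 180° for a small local one – numerically. The little Python sketch below is my own; it measures the spherical angle at each vertex by projecting the other two vertices onto the tangent plane at that vertex:

```python
import math

def sph(lat, lon):  # latitude/longitude in degrees -> a point on the unit sphere
    la, lo = math.radians(lat), math.radians(lon)
    return (math.cos(la) * math.cos(lo), math.cos(la) * math.sin(lo), math.sin(la))

def angle_at(A, B, C):
    # the spherical angle at vertex A between the great-circle arcs A-B and A-C
    def tangent(P):  # project P onto the plane tangent to the sphere at A
        dot = sum(p * a for p, a in zip(P, A))
        t = [p - dot * a for p, a in zip(P, A)]
        n = math.sqrt(sum(x * x for x in t))
        return [x / n for x in t]
    u, v = tangent(B), tangent(C)
    cosang = max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v))))
    return math.degrees(math.acos(cosang))

def angle_sum(A, B, C):
    return angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)

# the big triangle: the north pole plus two equator points, 50 degrees of longitude apart
print(round(angle_sum(sph(90, 0), sph(0, 0), sph(0, 50))))   # 230 = 90 + 90 + 50
# a small, local triangle: the flatlander hardly notices the curvature
print(round(angle_sum(sph(1, 0), sph(0, 0), sph(0, 1)), 2))  # just a hair above 180
```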

Fine. Let’s move on again.

The other type of non-Euclidean geometry is hyperbolic or Lobachevskian geometry. Hyperbolic geometry is the geometry of saddle-shaped surfaces. Many lines (in fact, an infinite number of them) can be drawn parallel to a given line through a given point (the illustration below shows just one), and the angles of a triangle add up to less than 180°.

2000px-Hyperbolic_triangle

OK… Enough about metric spaces perhaps – except for noting that, when physicists talk about curved space, they obviously mean that the space we are living in (i.e. the universe) is non-Euclidean: gravity curves it. And let’s add one or two other points as well. Anyone who has read something about Einstein’s special relativity theory will remember that the mathematician Hermann Minkowski added a time dimension to the three ordinary dimensions of space, creating a so-called Minkowski space, which is actually four-dimensional spacetime. So what’s that in mathematical terms? The ‘points’ in Minkowski’s four-dimensional spacetime are referred to as ‘events’. They are also referred to as four-vectors. The important thing to note here is that it’s not a Euclidean space: it is pseudo-Euclidean. Huh?
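To see what ‘pseudo’ means here, one formula suffices: the Minkowski ‘distance’ between two events carries a minus sign on the time part. A quick sketch of my own, using the (−,+,+,+) sign convention and units where c = 1 (both are just conventions, and other books flip the signs):

```python
# the Minkowski interval between two events (t, x, y, z), with c = 1
def interval(e1, e2):
    (t1, x1, y1, z1), (t2, x2, y2, z2) = e1, e2
    # note the minus sign on the time term: that's what makes it pseudo-Euclidean
    return -(t2 - t1)**2 + (x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2

print(interval((0, 0, 0, 0), (1, 1, 0, 0)))  # 0: a light ray ('null' separation)
print(interval((0, 0, 0, 0), (2, 1, 0, 0)))  # -3: a 'timelike' separation
```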

Let’s skip that for now and not make it any more complicated than it already is. Let us just note that the Minkowski space is not only a metric space (and a manifold obviously, which resembles Euclidean space locally, which is why we don’t notice its curvature really): a Minkowski space is also a vector space. So what’s a vector space? The formal definition is clear: a vector space V over a (scalar) field F is a set of elements (vectors) together with two binary operations: addition and scalar multiplication. So we can add the vectors (that is, the elements of the vector space) and scale them with a scalar, i.e. an element of the field F (that’s actually where the word ‘scalar’ comes from: something that scales vectors). The addition and scalar multiplication operations need to satisfy a number of axioms but these are quite straightforward (like associativity, commutativity, distributivity, the existence of an additive and multiplicative inverse, etcetera). The scalars are usually real numbers: in that case, the field F is equal to R, the set of real numbers (sorry I can’t use blackboard bold here for symbols so I am just using a bold capital R for the set of the real numbers), and the vector space is referred to as a real vector space. However, they can also be complex numbers: in that case, the field F is equal to C, the set of complex numbers, and the vector space is, obviously, referred to as a complex vector space.

N-tuples of elements of F itself (a_1, a_2,…, a_n) are a very straightforward example of a vector space: Euclidean spaces can be denoted as R (the real line), R^2 (the Euclidean plane), R^3 (Euclidean three-dimensional space), etcetera. Another example is the set C of complex numbers: that’s a vector space too, and not only over the real numbers (F = R), but also over itself (F = C). OK. Fair enough, I’d say. What’s next?
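The two operations are trivially easy to write down for such n-tuples – the same few lines of Python cover R^3 and C^2 (just my own illustration of the definition):

```python
def add(v, w):          # vector addition, component by component
    return tuple(a + b for a, b in zip(v, w))

def scale(c, v):        # scalar multiplication: c comes from the field F
    return tuple(c * a for a in v)

print(add((1, 2, 3), (4, 5, 6)))    # (5, 7, 9): a vector in R^3
print(scale(2.0, (1, 2, 3)))        # (2.0, 4.0, 6.0)
print(scale(1j, (1 + 2j, 3 + 0j)))  # ((-2+1j), 3j): same code, but now F = C
```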

It becomes somewhat more complicated when the ‘vectors’ (i.e. the elements in the vector space) are mathematical functions: indeed, a space can consist of functions (a function is just another object, isn’t it?), and function spaces can also be vector spaces because we can perform (pointwise) addition and scalar multiplication on functions. Vector spaces can consist of other mathematical objects too. In short, the notion of a ‘vector’ as some kind of arrow defined by some point in space does not cover the true mathematical notion of a vector, which is very general (an element of a vector space as defined above). The same goes for fields: we usually think a field consists of numbers (real, complex, or whatever other number one can think of), but a field can also consist of vectors. In fact, vector fields are as common as scalar fields in physics (think of vectors representing the speed and direction of a moving fluid for example, as opposed to its local temperature – which is a scalar quantity).
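Again, this is easy to make concrete: pointwise addition and scalar multiplication of functions take two lines each (a quick sketch of my own):

```python
import math

def f_add(f, g):        # the 'vector sum' of two functions, defined pointwise
    return lambda x: f(x) + g(x)

def f_scale(c, f):      # scalar multiplication, also pointwise
    return lambda x: c * f(x)

h = f_add(math.sin, math.cos)
print(h(0.0))                               # 1.0 = sin(0) + cos(0)
print(f_scale(3.0, math.sin)(math.pi / 2))  # 3.0
```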

Quite confusing, isn’t it? Can we have a vector space over a vector field?

Tricky question. From what I read so far, I am not sure actually. I guess the answer is both yes and no. The second binary operation on the vector space is scalar multiplication, so the field F needs to be a scalar field – not a vector field. Indeed, the formal definition of a vector space is quite formal on this point: F has to be a field of scalars. But then complex numbers z can be looked at not only as scalars in their own right (that’s the focus of the complex analysis course that I am currently reading) but also as vectors in their own right. OK, we’re talking a special kind of vectors here (vectors from the origin to the point z) but so what? And then we already noted that C is a vector space over itself, didn’t we?

So how do we define scalar multiplication in this case? Can we use the ‘scalar’ product (aka the dot product, or the inner product) between two vectors here (as opposed to the cross-product, aka the vector product tout court)? I am not sure. Not at all actually. Perhaps it is the usual product between two complex numbers – (x+iy)(u+iv) = (xu−yv) + i(xv+yu) – which, unlike the standard ‘scalar’ product between two vectors, returns another complex number as a result (as opposed to the dot product (x,y)·(u,v), which is equal to the real number xu + yv). The Brown & Churchill course on complex analysis which I am reading just notes that “this product [of two complex numbers] is, evidently, neither the scalar nor the vector product used in ordinary vector analysis”. However, because the ‘scalar’ product returns a single (real) number as a result, while the product of two complex numbers is – quite obviously – a complex number in itself, I must assume it’s the above-mentioned ‘usual product between two complex numbers’ that is to be used for ‘scalar’ multiplication in this case (i.e. the case of defining C as a vector space over itself). In addition, the geometric interpretation of multiplying two complex numbers shows that it actually is a matter of scaling: the ‘length’ of the complex number zw (i.e. its absolute value) is equal to the product of the absolute values of z and w and, as for its argument (i.e. the angle from the real line), the argument of zw is equal to the sum of the arguments of z and w. In short, this looks very much like what scalar multiplication is supposed to do.
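A quick numerical check of that geometric interpretation, using Python’s cmath module (my own sanity check, nothing more):

```python
import cmath

z, w = 2 + 1j, 1 + 3j
print(abs(z * w), abs(z) * abs(w))  # the moduli multiply: both ~7.0711
print(cmath.phase(z * w), cmath.phase(z) + cmath.phase(w))  # the arguments add: both ~1.7127
```

So multiplying by a fixed complex number scales and rotates – which is exactly the ‘scaling’ behavior we were hoping for.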

Let’s see if we can confirm the above (i.e. C being a vector space over itself, with scalar multiplication being defined as the product between two complex numbers) at some later point in time. For the moment, I think I’ve had quite enough. Indeed, while I note there are many other unexplored spaces, I will not attempt to penetrate these for now. For example, I note the frequent reference to Hilbert spaces but, from what I understand, a Hilbert space is also some kind of vector space, but then with even more additional structure and/or a more general definition of the ‘objects’ which make up its elements. Its definition involves Cauchy sequences of vectors – and that’s a concept I haven’t studied as yet. To make things even more complicated, there are also Banach spaces – and lots of other things really. So I will need to look at that but, again, for the moment I’ll just leave the matter alone and get back into the nitty-gritty of complex analysis. Doing so will probably clarify all of the more subtle points mentioned above.

[…] Wow! It’s been quite a journey so far, and I am still in the first chapters of the course only!

PS: I did look it up just now (i.e. a few days later than when I wrote the text above) and my inference is correct: C is a complex vector space over itself, and the formula to be used for scalar multiplication is the standard formula for multiplying complex numbers: (x+iy)(u+iv) = (xu−yv) + i(xv+yu). C^2, or C^n in general, are other examples of complex vector spaces (i.e. vector spaces over C).

Euler’s formula

I went trekking (to the Annapurna Base Camp this time) and, hence, left the math and physics books alone for a week or two. When I came back, it was like I had forgotten everything, and I wasn’t able to re-do the exercises. Back to the basics of complex numbers once again. Let’s start with Euler’s formula:

e^(ix) = cos(x) + i·sin(x)

In his Lectures on Physics, Richard Feynman calls this equation ‘one of the most remarkable, almost astounding, formulas in all of mathematics’, so it’s probably no wonder I find it intriguing and, indeed, difficult to grasp. Let’s look at it. So we’ve got the real (but irrational) number e in it. That’s a fascinating number in itself because it pops up in different mathematical expressions which, at first sight, have nothing in common with each other. For example, e can be defined as the sum of the infinite series e = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + … etcetera (n! stands for the factorial of n in this formula), but one can also define it as that unique positive real number for which d(e^t)/dt = e^t (in other words, as the base of an exponential function which is its own derivative). And, last but not least, there are also some expressions involving limits which can be used to define e. Where to start? More importantly, what’s the relation between all these expressions and Euler’s formula?

First, we should note that e^(ix) is not just any number: it is a complex number – as opposed to the simpler e^x expression, which denotes the real exponential function (as opposed to the complex exponential function e^z). Moreover, we should note that e^(ix) is a complex number on the unit circle. So, using polar coordinates, we should say that e^(ix) is a complex number with modulus 1 (the modulus is the absolute value of the complex number (i.e. the distance from 0 to the point we are looking at) or, alternatively, we could say it is the magnitude of the vector defined by the point we are looking at) and argument x (the argument is the angle (expressed in radians) between the positive real axis and the line from 0 to the point we are looking at).
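This is easy to check numerically – a couple of lines with Python’s cmath module (my own sanity check, not a proof of anything):

```python
import cmath, math

x = 0.73                      # any real number will do
z = cmath.exp(1j * x)         # e^(ix)
print(abs(z))                 # 1.0: the point sits on the unit circle
print(cmath.phase(z), x)      # and its argument is x itself
print(z, complex(math.cos(x), math.sin(x)))  # the two sides of Euler's formula agree
```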

Now, it is self-evident that cos(x) + i·sin(x) represents exactly the same: a point on the unit circle defined by the angle x. But so that doesn’t prove Euler’s formula: it only illustrates it. So let’s go to one or the other proof of the formula to try to understand it somewhat better. I’ll refer to Wikipedia for proving Euler’s formula in extenso but let me just summarize it. The Wikipedia article (as I looked at it today) gives three proofs.

The first proof uses the power series expansion (yes, the Taylor/Maclaurin series indeed – more about that later) for the exponential function: e^(ix) = 1 + ix + (ix)^2/2! + (ix)^3/3! + … etcetera. We then substitute using i^2 = −1, i^3 = −i etcetera and so, when we then re-arrange the terms, we find the Maclaurin series for the cos(x) and sin(x) functions indeed. I will come back to these power series in another post.
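Even without doing the re-arrangement by hand, one can see the series at work numerically – here’s a little sketch of my own, summing the first thirty terms:

```python
import cmath
from math import factorial

def exp_series(z, terms=30):
    # the partial sum of 1 + z + z^2/2! + z^3/3! + ...
    return sum(z**n / factorial(n) for n in range(terms))

x = 1.2
print(exp_series(1j * x))  # ~0.3624 + 0.9320j
print(cmath.exp(1j * x))   # the same: cos(1.2) + i*sin(1.2)
```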

The second proof uses one of the limit definitions for e^x but applies it to the complex exponential function. Indeed, one can write e^z (with z = x+iy) as e^z = lim(1 + z/n)^n for n going to infinity. The proof substitutes ix for z and then calculates the limit for very large (or infinite) n indeed. This proof is less obvious than it seems because we are dealing with limits of complex numbers here and so one has to take into account issues of convergence and all that.
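Numerically, one can at least watch the limit converge (again, just an illustration of my own, not a substitute for the convergence argument):

```python
x = 1.2
for n in (10, 1_000, 100_000):
    print((1 + 1j * x / n) ** n)  # creeps towards cos(1.2) + i*sin(1.2)
```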

The third proof also looks complicated but, in fact, is probably the most intuitive of the three proofs given because it uses the derivative definition of e. To be more precise, it takes the derivative of both sides of Euler’s formula using the polar coordinates expression for complex numbers. Indeed, e^(ix) is a complex number and, hence, can be written as some number z = r(cosθ + i·sinθ), and so the question to solve here is: what are r and θ? We need to write these two values as a function of x. How do we do that? Well… If we take the derivative of both sides, we get d(e^(ix))/dx = i·e^(ix) = (cosθ + i·sinθ)·dr/dx + r·[d(cosθ + i·sinθ)/dθ]·dθ/dx. That’s just the chain rule for derivatives of course. Now, writing it all out and equating the real and imaginary parts on both sides of the expression yields the following: dr/dx = 0 and dθ/dx = 1. In addition, we must have that, for x = 0, e^(i·0) = [e^i]^0 = 1, so we have r(0) = 1 (the modulus of the complex number (1,0) is one) and θ(0) = 0 (the argument of (1,0) is zero). It follows that the functions r and θ are equal to r = 1 and θ = x, which proves the formula.
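And, once more, a two-line numerical check of the key step – that the derivative of e^(ix) is i times e^(ix) (my own finite-difference sketch):

```python
import cmath

x, h = 0.73, 1e-6
print((cmath.exp(1j * (x + h)) - cmath.exp(1j * x)) / h)  # a finite-difference estimate...
print(1j * cmath.exp(1j * x))                             # ...of i*e^(ix): they match
```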

While these proofs are (relatively) easy to understand, the formula remains weird, as evidenced also from its special cases, like e^(i·0) = 1 = −e^(iπ) = −e^(−iπ) or, equivalently, e^(iπ) + 1 = 0, which is a formula which combines the five most basic quantities in mathematics: 0, 1, i, e and π. It is an amazing formula because we have two irrational numbers here, e and π, which have definitions which do not refer to each other at all (last time I checked, π was still being defined as the simple ratio of a circle’s circumference to its diameter, while the various definitions of e have nothing to do with circles), and so we combine these two seemingly unrelated numbers, also inserting the imaginary unit i (using iπ as an exponent for e) and we get minus 1 as a result (e^(iπ) = −1). Amazing indeed, isn’t it?

[…] Well… I’d say at least as amazing as the Taylor or Maclaurin expansion of a function – but I’ll save my thoughts on these for another post (even if I am using the results of these expansions in this post). In my view, what Euler’s formula shows is the amazing power of mathematical notation really – and the creativity behind it. Indeed, let’s look at what we’re doing with complex numbers: we start from one or two definitions only and suddenly all kinds of wonderful stuff starts popping up. It goes more or less like this really:

We start off with these familiar x and y coordinates of points in a plane. Now we call the x-axis the real axis and then, just to distinguish them from the real numbers, we call the numbers on the y-axis imaginary numbers. Again, it is just to distinguish them from the real numbers because, in fact, imaginary numbers are not imaginary at all: they are as real as the real numbers – or perhaps we should say that the real numbers are as imaginary as the imaginary numbers because, when everything is said and done, the real numbers are mental constructs as well, aren’t they? Imaginary numbers just happen to lie on another line, perpendicular to our so-called real line, and so that’s why we add a little symbol i (the so-called imaginary unit) when we write them down. So we write 1i (or i tout court), 2i, 3i etcetera, or i/2 or whatever (it doesn’t matter if we write i before the real number or after – as long as we’re consistent).

Then we combine these two numbers – the real and imaginary numbers – to form a so-called complex number, which is nothing but a point (x, y) in this Cartesian plane. Indeed, while complex numbers are somewhat more complex than the numbers we’re used to in daily life, they are not out of this world I’d say: they’re just points in space, and so we can also represent them as vectors (‘arrows’) from the origin to (x, y).

But so this is what we are doing really: we combine the real and imaginary numbers by using the very familiar plus (+) sign, so we write z = x + iy. Now that is actually where the magic starts: we are not adding the same things here, like we would do when we are counting apples or so, or when we are adding integers or rational or real numbers in general. No, we are adding two different things here – real and imaginary numbers – which, in fact, we cannot really add. Indeed, your mommy told you that you cannot compare apples with oranges, didn’t she? Well… That’s exactly what we do here really, and so we will keep these real and imaginary numbers separate in our calculations indeed: we will add the real parts of complex numbers with each other only, and the imaginary parts of them also with each other only.

Addition is quite straightforward: we just add the two vectors. Multiplication is somewhat more tricky but (geometrically) easy to interpret as well: the product of two complex numbers is a vector with a length which is equal to the product of the lengths of the two vectors we are multiplying (i.e. the two complex numbers which make up the product), and its angle with the real axis is the sum of the angles of the two original vectors. From this definition, many things follow, all equally amazing indeed, but one of these amazing facts is that i^2 = −1, i^3 = −i, i^4 = 1, i^5 = i, etcetera. Indeed: multiplying a complex number z = x + iy = (x, y) with the imaginary unit i amounts to rotating it 90° (counterclockwise) about the origin. So we are not defining i^2 as being equal to minus 1 (many textbooks treat this equality as a definition indeed): it just comes as a fact which we can derive from the earlier definition of a complex product. Sweet, isn’t it?
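Python’s complex numbers make this rotation business very visible – a little check of my own:

```python
z = 3 + 1j
for _ in range(4):
    print(z)
    z *= 1j   # each multiplication by i is a 90-degree counterclockwise rotation
# 3+1j -> -1+3j -> -3-1j -> 1-3j -> back to 3+1j: four rotations is a full turn,
# and two of them (i.e. i*i = -1) amount to a 180-degree rotation
```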

So we have addition and multiplication now. We want to do much more of course. After defining addition and multiplication, we want to do complex powers, and so it’s here that this business with e pops up.

We first need to remind ourselves of the simple fact that the number e is just a real number: it’s equal to 2.718281828459045235360287471 etcetera. We have to write ‘etcetera’ because e is an irrational number, which – whatever the term ‘irrational’ may suggest in everyday language – simply means that e is not a ratio of two integer numbers (so irrational means ‘not rational’). e is also a transcendental number – a word which suggests all kinds of mystical properties but which, in mathematics, only means we cannot write it as a root of some polynomial (a polynomial with rational coefficients that is). So it’s a weird number. That being said, it is also the so-called ‘natural’ base for the exponential function. Huh? Why would mathematicians take such a strange number as a so-called ‘natural’ base? They must be irrational, no? Well… No. If we take e as the base for the exponential function e^x (so that’s just this real (but irrational) number e to the power x, with x being the variable running along the x-axis: hence, we have a function here which takes a value from the set of real numbers and which yields some other real number), then we have a function here which is its own derivative: d(e^x)/dx = e^x. It is also the natural base for the logarithmic function and, as mentioned above, it kind of ‘pops up’ – quite ‘naturally’ indeed I’d say – in many other expressions, such as compound interest calculations for example, or the general exponential function a^x = e^(x·ln a). In other words, we need this number e, and the exp(x) and ln(x) functions, to define powers of real numbers in general. So that’s why mathematicians call it ‘natural’.
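Both of these defining properties are easy to verify numerically (a quick scratchpad check, nothing more):

```python
import math

# e from the factorial series 1/0! + 1/1! + 1/2! + ...
print(sum(1 / math.factorial(n) for n in range(20)))  # 2.718281828459045...
print(math.e)                                         # the same

# and e^x is (approximately) its own derivative, checked at x = 1
x, h = 1.0, 1e-8
print((math.exp(x + h) - math.exp(x)) / h, math.exp(x))
```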

While the example of compound interest calculations does not sound very exciting, all these formulas with e and exponential functions and what have you did inspire all these 18th century mathematicians – like Euler – who were in search of a logical definition of complex powers.

Let’s state the problem once again: we can do addition and multiplication of complex numbers but so the question is how to do complex powers. When trying to figure that one out, Euler obviously wanted to preserve the usual properties of powers, like a^x·a^y = a^(x+y) and, effectively, this property of the so-called ‘natural’ exponential function that d(e^x)/dx = e^x. In other words, we also want the complex exponential function to be its own derivative, so d(e^z)/dz should give us e^z once again.

Now, while Euler was thinking of that (and of many other things too of course), he was well aware of the fact that you can expand e^x into that power series which I mentioned above: e^x = 1/0! + x/1! + x^2/2! + x^3/3! + … etcetera. So Euler just sat down, substituted the real number x with the imaginary number ix and looked at it: e^(ix) = 1 + ix + (ix)^2/2! + (ix)^3/3! + … etcetera. Now lo and behold! Taking into account that i^2 = −1, i^3 = −i, i^4 = 1, i^5 = i, etcetera, we can put that in and re-arrange the terms indeed and so Euler found that this equation becomes e^(ix) = (1 − x^2/2! + x^4/4! − x^6/6! + …) + i·(x − x^3/3! + x^5/5! − …). Now these two terms do correspond to the Maclaurin series for the cosine and sine function respectively, so there he had it: e^(ix) = cos(x) + i·sin(x). His formula: Euler’s formula!

From there, there was only one more step to take, and that was to write e^z = e^(x+iy) as e^x·e^(iy), and so there we have our definition of a complex power: it is a product of two factors – e^x and e^(iy) – both of which we have effectively defined now. Note that the e^x factor is just a real number, even if we write it as e^x: it acts as a sort of scaling factor for e^(iy) which, you will remember (as we pointed it out above already), is a point on the unit circle. More generally, it can be shown that e^x is the absolute value of e^z (or the modulus or length or magnitude of the vector – whatever term you prefer: they all refer to the same thing), while y is the argument of the complex number e^z (i.e. the angle of the vector e^z with the real axis). [And, yes, for those who would still harbor some doubts here: e^z is just another complex number and, hence, a two-dimensional vector, i.e. just a point in the Cartesian plane, so we have a function which goes from the set of complex numbers here (it takes z as input) and which yields another complex number.]
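Here too, cmath confirms the bookkeeping (my own check):

```python
import cmath, math

z = 0.5 + 1.2j                # z = x + iy with x = 0.5 and y = 1.2
w = cmath.exp(z)
print(abs(w), math.exp(0.5))  # the modulus of e^z is e^x
print(cmath.phase(w), 1.2)    # and its argument is y (as long as y stays within (-pi, pi])
```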

Of course, you will note that we don’t have something like z^w here, i.e. a complex base (i.e. z) with a complex exponent (i.e. w), or even a formula for complex powers of real numbers in general, i.e. a formula for a^w with a any real number (so not only e but any real number indeed) and w a complex exponent. However, that’s a problem which can be solved easily through writing z and w in their so-called polar form, so we write z as z = |z|e^(iθ) = |z|(cosθ + i·sinθ) and w as |w|e^(iσ) = |w|(cosσ + i·sinσ) and then we can take it further from there. [Note that |z| and |w| represent the modulus (i.e. the length) of z and w respectively, and the angles θ and σ are obviously the arguments of the same z and w respectively.] Of course, if z is a (positive) real number (so if y = 0), then the angle θ will obviously be zero (i.e. the angle of the real axis with itself) and so z will be equal to a real number (i.e. its real part only, as its imaginary part is zero) and then we are back to the case of a real base and a complex exponent. In other words, that covers the a^w case.

[…] Well… Easily? OK. I am simplifying a bit here – as I need to keep the length of this post manageable – but, in fact, it really is a matter of using these common properties of powers (such as e^(a+bi)·e^c = e^((a+c)+bi)) and it actually does all work out. And all of this magic did actually start with simply ‘adding’ the so-called ‘real’ numbers x on the x-axis with the so-called ‘imaginary’ numbers on the y-axis. 🙂
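For the a^w case, for instance, everything reduces to the machinery we already have, since a^w = e^(w·ln a) for a real base a > 0. A minimal sketch (the function name is mine, of course):

```python
import cmath, math

def real_base_power(a, w):
    # a^w for a real base a > 0 and a complex exponent w
    return cmath.exp(w * math.log(a))

print(real_base_power(2.0, 1j))               # 2^i = cos(ln 2) + i*sin(ln 2)
print(real_base_power(math.e, 1j * math.pi))  # e^(i*pi): -1, up to rounding
```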

Post scriptum:

Penrose’s Road to Reality dedicates a whole chapter to complex exponentiation (Chapter 5). However, the development is not all that simple and straightforward indeed. The first step in the process is to take integer powers – and integer roots – of complex numbers, so that’s z^n for n = 0, ±1, ±2, ±3… etcetera (or z^(1/2), z^(1/3), z^(1/4) if we’re talking integer roots). That’s easy because it can be solved through using the old formula of Abraham de Moivre: (cosθ + i·sinθ)^n = cos(nθ) + i·sin(nθ) (de Moivre penned this down in 1707 already, more than 40 years before Euler looked at the matter). However, going from there to full-blown complex powers is, unfortunately, not so straightforward, as it involves a bit of a detour: we need to work with the inverse of the (complex) exponential function e^z, i.e. the (complex) natural logarithm.

Now that is less easy than it sounds. Indeed, while the definition of a complex logarithm is as straightforward as the definition of real logarithms (ln z is a function for which e^(ln z) = z), the function itself is a bit more… well… complex I should say. For starters, it is a multiple-valued function: if we write the solution w = ln z as w = u+iv, then it is obvious that e^w will be equal to e^(u+iv) = e^u·e^(iv), and this complex number e^w can then be written in its polar form e^w = r·e^(iθ) with r = e^u and v = θ + 2nπ. Of course, ln(e^(u+iv)) = u + iv and so the solution w will look like w = ln r + i(θ + 2nπ) with n = 0, ±1, ±2, ±3 etcetera. In short, we have an infinite number of solutions for w (one for every n we choose) and so we have this problem of multiple-valuedness indeed. We will not dwell on this here (at least not in this post) but simply note that this problem is linked to the properties of the complex exponential function e^z itself. Indeed, the complex exponential function e^z has very different properties than the real exponential function e^x. First, we should note that, unlike e^x (which, as we know, goes from zero at the far end of the negative side of the real axis to infinity as x goes big on the positive side), e^z is a periodic function – so it oscillates and yields the same values after some time – with this ‘after some time’ being the periodicity of the function. Indeed, e^z = e^(z+2πi) and so its period is 2πi (note that this period is an imaginary number – but so it’s a ‘real’ period, if you know what I mean :-)). In addition, and this is also very much unlike the real exponential function e^x, e^z can be negative (as well as assume all kinds of other complex values). For example, e^(iπ) = −1, as we noted above already.
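The multiple-valuedness, and the 2πi periodicity behind it, can again be seen in a few lines (Python’s cmath.log returns the principal value – the other logarithms are a 2πi-multiple away):

```python
import cmath, math

w = cmath.log(-1 + 0j)       # the principal value of ln(-1): i*pi
print(w)
for n in (1, -1, 2):         # every w + 2*pi*i*n is a logarithm of -1 as well:
    print(cmath.exp(w + 2j * math.pi * n))   # all of these return -1 (up to rounding)
```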

That being said, the problem of multiple-valuedness can be solved through the definition of a principal value of ln z and that, then, leads us to what we want here: a consistent definition of a complex power of a complex base (or the definition of a true complex exponential (and logarithmic) function in other words). To those who would want to see the details of this (i.e. my imaginary readers :-)), I would say that Penrose’s treatment of the matter in the above-mentioned Chapter 5 of The Road to Reality is rather cryptic – presumably because he has to keep his book to around 1000 pages only (not a lot to explain all of the Laws of the Universe) – and, hence, Brown & Churchill’s course (or whatever other course dealing with complex analysis) probably makes for easier reading.

[As for the problem of multiple-valuedness, we should probably also note the following: when taking the nth root of a complex number (i.e. z^(1/n) with n = 2, 3, etcetera), we also obtain a set of n values c_k (with k = 0, 1, 2,… n−1), rather than one value only. However, once we have one of these values, we have all of them, as we can write these c_k as c_k = r^(1/n)·e^(i(θ/n + 2kπ/n)) (with the original complex number z equal to z = r·e^(iθ)), and so we could also just consider the principal value c_0 and, as such, consider the function as a single-valued one. In short, the problem of multiple-valued functions pops up almost everywhere in the complex space, but it is not an issue really. In fact, we encounter the problem of multiple-valuedness as soon as we extend the exponential function in the space of the real numbers and also allow rational and real exponents, instead of positive integers only. For example, 4^(1/2) is equal to ±2, so we have two results here too and, hence, multiple values. Another example would be the 4th root of 16: we have four 4th roots of 16: +2, −2 and then two complex roots +2i and −2i. However, standard practice is that we only take the positive value into account in order to ensure a ‘well-behaved’ exponential function. Indeed, the standard definition of a real exponential function is b^x = (e^(ln b))^x = e^(x·ln b), and so, if x = 1/n, this definition will only assign the positive nth root to b^x. Standard practice will also restrict the value of b to a positive real number (b > 0). These conventions not only ensure a positive result but also continuity of the function and, hence, the existence of a derivative which we can then use to do other things. By the way, the definition also shows – once again – why e is such a nice (or ‘natural’) number: we can use it to calculate the value for any exponential function (for any real base b > 0). But so we had mentioned that already, and it’s now really time to stop writing. I think the point is clear.]
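The c_k formula above translates into a few lines of Python – a sketch of my own, using the polar form:

```python
import cmath, math

def nth_roots(z, n):
    # c_k = r^(1/n) * e^(i*(theta/n + 2*k*pi/n)) for k = 0 .. n-1
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), theta / n + 2 * k * math.pi / n) for k in range(n)]

for c in nth_roots(16 + 0j, 4):
    print(c)   # ~2, ~2i, ~-2, ~-2i: the four 4th roots of 16 (c_0 = 2 is the principal value)
```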

No royal road to reality

I got stuck in Penrose’s Road to Reality in chapter 5 already. That is not very encouraging – because the book has 34 chapters, and every chapter builds on the previous one, and usually also looks more difficult.

In Chapter 5, Penrose introduces complex algebra. As I tried to get through it, I realized I had to do some more self-study. Indeed, while Penrose claims no other books or courses are needed to get through his, I do not find this to be the case. So I bought a fairly standard course in complex analysis (James Ward Brown and Ruel V. Churchill, Complex Variables and Applications) and I’ve done chapters 1 and 2 now. Although these first two chapters do nothing else than introduce the subject-matter, I find the matter rather difficult and the terminology confusing. Examples:

1. The term ‘scalar’ is used to denote real numbers. So why use the term ‘scalar’ if the word ‘real’ is available as well? And why not use the term ‘real field’ instead of ‘scalar field’? Well… The term ‘real field’ actually means something else. A scalar field associates a (real) number to every point in space. So that’s simple: think of temperature or pressure. The term ‘scalar’ is said to be derived from ‘scaling’: a scalar is what scales vectors. Indeed, scalar multiplication of a vector and a real number multiplies the magnitude of the vector without changing its direction. So what is a real field then? Well… A (formally) real field is a field that can be extended with a (not necessarily unique) ordering which makes it an ordered field. Does that help? Somewhat I guess. But why the qualifier ‘formally real’? I checked and there is no such thing as an ‘informally real’ field. I guess it’s just to make sure we know what we are talking about, as ‘real’ is a word with many meanings.

2. So what’s a field in mathematics? It is an algebraic structure: a set of ‘things’ (like numbers) with operations defined on it, including the notions of addition, subtraction, multiplication, and division. As mentioned above, we have scalar fields and vector fields. In addition, we also have fields of complex numbers. We also have fields with some less likely candidates for addition and multiplication, such as functions (one can add and multiply functions with each other). In short, anything which satisfies the formal definition of a field – and here I should note that the above definition of a field is not formal – is a field. For example, the set of rational numbers satisfies the definition of a field too. So what is the formal definition? First of all, a field is a ring. Huh? Here we are in this abstract classification of algebraic structures: commutative groups, rings, fields, etcetera (there are also modules – a type of algebraic structure which I had never ever heard of before). To put it simply – because we have to move on of course – a ring (no one seems to know where that word actually comes from) has addition and multiplication only, while a field has division too. In other words, a ring does not need to have multiplicative inverses. Huh? It’s simple really: the integers form a ring, but the equation 2x = 1 does not have a solution in integers (x = ½) and, hence, the integers do not form a field. The same example shows why the rational numbers do.
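The 2x = 1 example is easy to play with – Python’s Fraction type implements the field of rationals (just an illustration):

```python
from fractions import Fraction

# in the ring of integers, 2x = 1 has no solution: integer division just truncates
print(1 // 2)             # 0
# in the field of rationals, every non-zero element has a multiplicative inverse
x = Fraction(1, 2)
print(2 * x == 1)         # True: x = 1/2 solves 2x = 1
```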

3. But what about a vector field? Can we do division with vectors? Yes, but not by zero – but that is not a problem as that is understood in the definition of a field (or in the general definition of division for that matter). In two-dimensional space, we can represent vectors by complex numbers: z = (x,y), and we have a formula for the so-called multiplicative inverse of a complex number: z^(−1) = (x/(x^2+y^2), −y/(x^2+y^2)). OK. That’s easy. Let’s move on to more advanced stuff.
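That inverse formula is easily verified – multiply z by z^(−1) using the complex product rule and you should get (1, 0) back (a quick check of my own):

```python
def complex_inverse(x, y):
    # z^(-1) = (x/(x^2 + y^2), -y/(x^2 + y^2)) for z = (x, y) != (0, 0)
    d = x * x + y * y
    return (x / d, -y / d)

x, y = 3.0, 4.0
u, v = complex_inverse(x, y)
# the product rule (x+iy)(u+iv) = (xu - yv) + i(xv + yu) should yield 1 + 0i:
print((x * u - y * v, x * v + y * u))   # (1.0, 0.0)
```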

4. In logic, we have the concept of well-formed formulas (wffs). In math, we have the concept of ‘well-behaved’: we have well-behaved sets, well-behaved spaces and lots of other well-behaved things, including well-behaved functions, which are, of course, those of interest to engineers and scientists (and, hence, in light of the objective of understanding Penrose’s Road to Reality, to me as well). I must admit that I was somewhat surprised to learn that ‘well-behaved’ is one of the very few terms in math that have no formal definition. Wikipedia notes that its definition, in the science of mathematics that is, depends on ‘mathematical interest, fashion, and taste’. Let me quote in full here: “To ensure that an object is ‘well-behaved’ mathematicians introduce further axioms to narrow down the domain of study. This has the benefit of making analysis easier, but cuts down on the generality of any conclusions reached. […] In both pure and applied mathematics (optimization, numerical integration, or mathematical physics, for example), well-behaved means not violating any assumptions needed to successfully apply whatever analysis is being discussed. The opposite case is usually labeled pathological.” Wikipedia also notes that “concepts like non-Euclidean geometry were once considered ill-behaved, but are now common objects of study.”

5. So what is a well-behaved function? There is actually a whole hierarchy, with varying degrees of ‘good’ behavior, so one function can be more ‘well-behaved’ than another. First, we have smooth functions: a smooth function has derivatives of all orders (as for its name, it’s actually well chosen: the graph of a smooth function is actually, well, smooth). Then we have analytic functions: analytic functions are smooth but, in addition to being smooth, an analytic function is a function that can be locally given by a convergent power series. Huh? Let me try an alternative definition: a function is analytic if and only if its Taylor series about x_0 converges to the function in some neighborhood of every x_0 in its domain. That’s not helping much either, is it? Well… Let’s just leave that one for now.

In fact, it may help to note that the authors of the course I am reading (J.W. Brown and R.V. Churchill, Complex Variables and Applications) use the terms analytic, regular and holomorphic interchangeably, and they define an analytic function simply as a function which has a derivative everywhere. While that’s helpful, it’s obviously a bit loose (what’s the thing about the Taylor series?) and so I checked on Wikipedia, which clears the confusion and also defines the terms ‘holomorphic’ and ‘regular’:

“A holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighborhood of every point in its domain. The existence of a complex derivative in a neighborhood is a very strong condition, for it implies that any holomorphic function is actually infinitely differentiable and equal to its own Taylor series. The term analytic function is often used interchangeably with ‘holomorphic function’ although the word ‘analytic’ is also used in a broader sense to describe any function (real, complex, or of more general type) that can be written as a convergent power series in a neighborhood of each point in its domain. The fact that the class of complex analytic functions coincides with the class of holomorphic functions is a major theorem in complex analysis.”

Wikipedia also adds the following: “Holomorphic functions are also sometimes referred to as regular functions or as conformal maps. A holomorphic function whose domain is the whole complex plane is called an entire function. The phrase ‘holomorphic at a point z_0’ means not just differentiable at z_0, but differentiable everywhere within some neighborhood of z_0 in the complex plane.”

6. What to make of all this? Differentiability is obviously the key and, although there are many similarities between real differentiability and complex differentiability (both are linear and obey the product rule, quotient rule, and chain rule), real-valued functions and complex-valued functions are different animals. What are the conditions for differentiability? For real-valued functions, it is a matter of checking whether or not the limit defining the derivative exists and, of course, a necessary (but not sufficient) condition is continuity of the function.

For complex-valued functions, it is a bit more sophisticated because we’ve got so-called Cauchy-Riemann conditions applying here. How does that work? Well… We write f(z) as the sum of two functions: f(z) = u(x,y) + iv(x,y). So the real-valued function u(x,y) yields the real part of f(z), while v(x,y) yields the imaginary part of f(z). The Cauchy-Riemann equations (to be interpreted as conditions really) are the following: u_x = v_y and u_y = −v_x (note the minus sign in the second equation).

That looks simple enough, doesn’t it? However, as Wikipedia notes (see the quote above), satisfying these equations at a point z_0 is not enough to ensure the existence of the derivative of f(z) at that point. We need to look at some neighborhood of the point z_0 and see if these first-order derivatives (u_x, u_y, v_x and v_y) exist everywhere in that neighborhood and satisfy these Cauchy-Riemann equations. So we need to look beyond the point z_0 itself when doing our analysis: we need to ‘approach’ it from various directions before making any judgment. I know this sounds like Chinese but it became clear to me when doing the exercises.
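The exercises can actually be mimicked numerically: estimate the four partial derivatives with finite differences and see whether the Cauchy-Riemann equations hold. A rough sketch of my own (the tolerances and the sample point are chosen arbitrarily):

```python
def cr_check(f, x, y, h=1e-6, tol=1e-4):
    # estimate u_x, u_y, v_x, v_y with central differences and test u_x = v_y, u_y = -v_x
    u = lambda a, b: f(complex(a, b)).real
    v = lambda a, b: f(complex(a, b)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return abs(ux - vy) < tol and abs(uy + vx) < tol

print(cr_check(lambda z: z * z, 1.0, 2.0))          # True: z^2 is analytic
print(cr_check(lambda z: z.conjugate(), 1.0, 2.0))  # False: the conjugate of z is not
```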

7. OK. Phew! I got this far – but so that’s only chapters 1 and 2 of Brown & Churchill’s course! In fact, chapter 2 also includes a few sections on so-called harmonic functions and harmonic conjugates. Let’s first talk about harmonic functions. Harmonic functions are even better behaved than holomorphic or analytic functions. Well… That’s not the right way to put it really. A harmonic function is a real-valued analytic function (its value could represent temperature, or pressure – just as an example) but, for a function to qualify as ‘harmonic’, an additional condition is imposed. That condition is known as Laplace’s equation: if we denote the harmonic function as H(x,y), then it has to have second-order derivatives which satisfy H_xx(x,y) + H_yy(x,y) = 0.

Huh? Laplace’s equation, or harmonic functions in general, plays an important role in physics, as the condition that is being imposed (the Laplace equation) often reflects a real-life physical constraint and, hence, the function H would describe real-life phenomena, such as the temperature of a thin plate (with the points on the plate defined by the (x,y) coordinates), or electrostatic potential. More about that later. Let’s conclude this first entry with the definition of harmonic conjugates.

8. As stated above, a harmonic function is a real-valued function. However, we also noted that a complex function f(z) can actually be written as a sum of a real and an imaginary part using two real-valued functions u(x,y) and v(x,y). More in particular, we can write f(z) = u(x,y) + iv(x,y), with i the imaginary unit (0,1). Now, if u and v happen to be harmonic functions (but so that’s an if of course – see the Laplace condition imposed on their second-order derivatives in order to qualify for the ‘harmonic’ label) and if, in addition to that, their first-order derivatives happen to satisfy the Cauchy-Riemann equations (in other words, if f(z) is a well-behaved analytic function), then (and only then) we can label v as the harmonic conjugate of u.

What does that mean? First, one should note that when v is a harmonic conjugate of u in some domain, it is not generally true that u is a harmonic conjugate of v. So one cannot just switch the functions. Indeed, the minus sign in the Cauchy–Riemann equations makes the relationship asymmetric. But so what’s the relevance of this definition of a harmonic conjugate? Well… There is a theorem that turns the definition around: ‘A function f(z) = u(x,y) + iv(x,y) is analytic (or holomorphic, to use standard terminology) in a domain D if and only if v is a harmonic conjugate of u.’ In other words, introducing the definition of a harmonic conjugate (and the conditions which their first- and second-order derivatives have to satisfy) allows us to check whether or not we have a well-behaved complex-valued function (and with ‘well-behaved’ I mean analytic or holomorphic).
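The standard textbook example makes all of this concrete: u = x^2 − y^2 and v = 2xy (so that f = u + iv = z^2). Below is a small numerical check of my own that both satisfy Laplace’s equation; the Cauchy-Riemann equations hold by inspection here (u_x = 2x = v_y and u_y = −2y = −v_x):

```python
def u(x, y): return x * x - y * y   # the real part of z^2
def v(x, y): return 2 * x * y       # its harmonic conjugate

def laplacian(g, x, y, h=1e-4):
    # the five-point finite-difference estimate of g_xx + g_yy
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h) - 4 * g(x, y)) / h**2

x, y = 1.3, -0.7
print(round(laplacian(u, x, y), 3))  # ~0: u is harmonic
print(round(laplacian(v, x, y), 3))  # ~0: so is v
```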

9. But, again, why do we need holomorphic functions? What’s so special about them? I am not sure for the moment, but I guess there’s something deeper in that one phrase which I quoted from Wikipedia above: “holomorphic functions are also sometimes referred to as regular functions or as conformal maps.” A conformal mapping preserves angles, as you can see in the illustration below, which shows a rectangular grid and its image under a conformal map: f maps pairs of lines intersecting at 90° to pairs of curves still intersecting at 90°. I guess that’s very relevant, although I do not know why exactly for now. More about that in later posts.

342px-Conformal_map.svg