Compactifying complex spaces

Pre-scriptum (dated 26 June 2020): the material in this post remains interesting but is, strictly speaking, not a prerequisite to understand quantum mechanics. It’s yet another example of how one can get lost in math when studying or teaching physics. :-/

Original post:

In this post, I’ll try to explain how Riemann surfaces (or topological spaces in general) are transformed into compact spaces. Compact spaces are, in essence, closed and bounded subsets of some larger space. The larger space is unbounded – or ‘infinite’ if you want (the term ‘infinite’ is less precise – from a mathematical point of view at least).

I am sure you have all seen it: the Euclidean or complex plane gets wrapped around a sphere (the so-called Riemann sphere), and the Riemann surface of a square root function becomes a torus (i.e. a donut-like object). And then the donut becomes a coffee cup (yes: just type ‘donut and coffee cup’ and look at the animation). The sphere and the torus (and the coffee cup of course) are compact spaces indeed – as opposed to the infinite plane, or the infinite Riemann surface representing the domain of a (complex) square root function. But what does it all mean?

Let me, for clarity, start with a note on the symbols that I’ll be using in this post. I’ll use a boldface z for the complex number z = (x, y) = reiθ in this post (unlike what I did in my previous posts, in which I often used standard letters for complex numbers), or for any other complex number, such as w = u + iv. That’s because I want to reserve the non-boldface letter z for the (real) vertical z coordinate in the three-dimensional (Cartesian or Euclidean) coordinate space, i.e. R3. Likewise, non-boldface letters such as x, y or u and v denote other real numbers. Note that I will also use a boldface R and a boldface C to denote the set of real numbers and the complex space respectively. That’s just because the WordPress editor has its limits and, among other things, it can’t do blackboard bold (i.e. those double-struck symbols which you usually see as the symbols for the set of real and the set of complex numbers respectively). OK. Let’s go for it now.

In my previous post, I introduced the concept of a Riemann surface using the multivalued square root function w = z1/2 = √z. The square root function has only two values. If we write z as z = reiθ, then we can write these two values as w1 = √r ei(θ/2) and w2 = √r ei(θ/2 ± π). Now, √r ei(θ/2 ± π) is equal to √r ei(±π)ei(θ/2) = –√r ei(θ/2) and, hence, the second root is just the opposite of the first one, so w2 = –w1.
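For those who, like me, like to check such things numerically, the little Python snippet below (a sketch of my own, not from any of the texts I’m discussing) verifies that the two roots both square back to z and are indeed each other’s opposite:

```python
import cmath
import math

# Pick an arbitrary complex number z = r*e^{i*theta}.
z = 3 + 4j
r, theta = abs(z), cmath.phase(z)

# The two values of the square root:
# w1 = sqrt(r)*e^{i*theta/2} and w2 = sqrt(r)*e^{i*(theta/2 + pi)}.
w1 = math.sqrt(r) * cmath.exp(1j * theta / 2)
w2 = math.sqrt(r) * cmath.exp(1j * (theta / 2 + math.pi))

# Both values square back to z, and w2 is just -w1.
assert abs(w1**2 - z) < 1e-12
assert abs(w2**2 - z) < 1e-12
assert abs(w1 + w2) < 1e-12
```

So the ‘two-valuedness’ is nothing mysterious: it’s just a global sign.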

Introducing the concept of a Riemann surface using a ‘simple’ quadratic function may look easy enough but, in fact, this square root function is actually not the easiest one to start with. First, a simple single-valued function, such as w = 1/z (i.e. the function that is associated with the Riemann sphere) for example, would obviously make for a much easier point of departure. Secondly, the fact that we’re working with a limited number of values, as opposed to an infinite number of values (which is the case for the log z function for example) introduces this particularity of a surface turning back into itself which, as I pointed out in my previous post, makes the visualization of the surface somewhat tricky – to the extent it may actually prevent a good understanding of what’s actually going on.

Indeed, in the previous post I explained how the Riemann surface of the square root function can be visualized in the three-dimensional Euclidean space (i.e. R3). However, such representations only show the real part of z1/2, i.e. the vertical distance Re(z1/2) = √r cos(θ/2 + nπ), with n = 0 or ±1. So these representations, like the one below for example, do not show the imaginary part, i.e. Im(z1/2) = √r sin(θ/2 + nπ) (n = 0, ±1).

That’s both good and bad. It’s good because, in a graph like this, you want one point to represent one point only, and you wouldn’t get that if you superimposed the plot of the imaginary part of w = z1/2 on the plot showing the real part only. But it’s also bad, because one often forgets that we’re only seeing some part of the ‘real’ picture here, namely the real part, and so one often forgets to imagine the imaginary part. 🙂 


The thick black polygonal line in the two diagrams in the illustration above shows how, on this Riemann surface (or at least its real part), the argument θ of z = reiθ will go from 0 to 2π (and further), i.e. we’re making (more than) a full turn around the vertical axis, as the argument Θ of w = z1/2 = √reiΘ makes half a turn only (i.e. Θ goes from 0 to π only). That’s self-evident because Θ = θ/2. [The first diagram in the illustration above represents the (flat) w plane, while the second one is the Riemann surface of the square root function, so we have like two points for every z on the flat z plane: one for each root.]

All these visualizations of Riemann surfaces (and the projections on the z and w plane that come with them) have their limits, however. As mentioned in my previous post, one major drawback is that we cannot distinguish the two distinct roots for all of the complex numbers z on the negative real axis (i.e. all the points z = reiθ for which θ is equal to ±π, ±3π,…). Indeed, the real part of w = z1/2, i.e. Re(w), is equal to zero for both roots there, and so, when looking at the plot, you may get the impression that we get the same values for w there, so that the two distinct roots of z (i.e. w1 and w2) coincide. They don’t: the imaginary part of w1 and w2 is different there, so we need to look at the imaginary part of w too. Just to be clear on this: on the diagram above, it’s where the two sheets of the Riemann surface cross each other, so it’s like there’s an infinite number of branch points, which is not the case: the only branch point is the origin.

So we need to look at the imaginary part too. However, if we look at the imaginary part separately, we will have a similar problem on the positive real axis: the imaginary part of the two roots coincides there, i.e. Im(w) is zero, for both roots, for all the points z = reiθ for which θ = 0, 2π, 4π,… That’s what is represented in the graph below.
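Again, this is easy to check numerically. The Python sketch below (my own illustration) confirms the symmetry: on the negative real axis the real parts of the two roots coincide (while the imaginary parts differ), and on the positive real axis it’s the other way around:

```python
import cmath
import math

def roots(z):
    """Return the two square-root values w1 and w2 = -w1 of z."""
    r, theta = abs(z), cmath.phase(z)
    w1 = math.sqrt(r) * cmath.exp(1j * theta / 2)
    return w1, -w1

# Negative real axis (theta = pi): the REAL parts of both roots are zero,
# so a plot of Re(w) alone cannot separate them...
w1, w2 = roots(-4 + 0j)
assert abs(w1.real) < 1e-12 and abs(w2.real) < 1e-12
assert abs(w1.imag - w2.imag) > 1       # ...but the imaginary parts differ.

# Positive real axis (theta = 0): now the IMAGINARY parts are both zero,
# while the real parts differ.
w1, w2 = roots(4 + 0j)
assert abs(w1.imag) < 1e-12 and abs(w2.imag) < 1e-12
assert abs(w1.real - w2.real) > 1
```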

branch point

The graph above is a cross-section, so to say, of the Riemann surface w = z1/2 that is orthogonal to the z plane. So we’re looking at the x axis, from –∞ to +∞, along the y axis. The point at the center of this graph is the origin obviously, which is the branch point of our function w = z1/2, and so the y axis goes through it, but we can’t see it because we’re looking along that axis (the y axis is perpendicular to the cross-section).

This graph is one I made as I tried to get some better understanding of what a ‘branch point’ actually is. Indeed, the graph makes it perfectly clear – I hope 🙂 – that we really have to choose between one of the two branches of the function when we’re at the origin, i.e. the branch point. Indeed, we can pick either the n = 0 branch or the n = ±1 branch of the function, and then we can go in any direction we want as we’re traveling on that Riemann surface, but our initial choice has consequences: as Dr. Teleman (whom I’ll introduce later) puts it, “any choice of w, followed continuously around the origin, leads, automatically, to the opposite choice as we turn around it.” For example, if we take the w1 branch (or the ‘positive’ root as I call it – even if complex numbers cannot be grouped into ‘positive’ or ‘negative’ numbers), then we’ll encounter the negative root w2 after one loop around the origin. Well… Let me immediately qualify that statement: we will still be traveling on the w1 branch, but the value of w1 will be the opposite or negative value of our original w1 as we add 2π to arg z = θ. Mutatis mutandis, we’re in a similar situation if we’d take the w2 branch. Does that make sense?

Perhaps not, but I can’t explain it any better. In any case, the gist of the matter is that we can switch from the w1 branch to the w2 branch at the origin, and also note that we can only switch like that there, at the branch point itself: we can’t switch anywhere else. So there, at the branch point, we have some kind of ‘discontinuity’, in the sense that we have a genuine choice between two alternatives.
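Dr. Teleman’s remark about ‘following w continuously around the origin’ can actually be simulated. The Python sketch below (my own; the step-by-step ‘pick the closest root’ rule is just one way to model continuous continuation) travels once around the origin and does indeed end up on the opposite root:

```python
import cmath
import math

# Travel once around the origin on the Riemann surface of w = sqrt(z),
# choosing at each step the root that is CLOSEST to the previous value
# (that is what 'following w continuously' amounts to).
r = 2.0
w = math.sqrt(r)                    # start on the 'positive' root at theta = 0
for k in range(1, 1001):
    theta = 2 * math.pi * k / 1000
    z = r * cmath.exp(1j * theta)
    candidate = cmath.sqrt(z)       # principal root (jumps across the cut)
    # Continuous continuation: keep whichever of the two roots is nearer.
    w = candidate if abs(candidate - w) < abs(-candidate - w) else -candidate

# After one full loop (arg z went from 0 to 2*pi) we are back above z = r,
# but holding the OPPOSITE root: w = -sqrt(r).
assert abs(w + math.sqrt(r)) < 1e-9
```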

That’s, of course, linked to the fact that one cannot define the value of our function at the origin: 0 is not part of the domain of the (complex) square root function, or of the (complex) logarithmic function in general (remember that our square root function can be defined through the log function), and, hence, the function is effectively not analytic there. So it’s like what I said about the Riemann surface for the log z function: at the origin, we can ‘take the elevator’ to any other level, so to say, instead of having to walk up and down that spiral ramp to get there. So we can add or subtract 2nπ to θ without any sweat.

So here it’s the same. However, because it’s the square root function, we’ll only see two buttons to choose from in that elevator, and our choice will determine whether we get out at level Θ = α (i.e. the w1 branch) or at level Θ = α ± π (i.e. the w2 branch). Of course, you can try to push both buttons at the same time, but then I assume that the elevator will make some kind of random choice for you. 🙂 Also note that the elevator in the log z parking tower will probably have a numpad instead of buttons, because there’s infinitely many levels to choose from. 🙂

OK. Let’s stop joking. The idea I want to convey is that there’s a choice here. The choice made determines whether you’re going to be looking at the ‘positive’ roots of z, i.e. √r(cosΘ + isinΘ), or at the ‘negative’ roots of z, i.e. √r(cos(Θ±π) + isin(Θ±π)), or, equivalently (because Θ = θ/2), whether you’re going to be looking at the values of w for θ going from 0 to 2π, or the values of w for θ going from 2π to 4π.

Let’s try to imagine the full picture and think about how we could superimpose the graphs of both the real and imaginary part of w. The illustration below should help us to do so: the blue and red image should be shifted over and across each other until they overlap completely. [I am not doing it here because I’d have to make one surface transparent so you can see the other one behind – and that’s too much trouble now. In addition, it’s good mental exercise for you to imagine the full picture in your head.]  

Real and imaginary sheets

It is important to remember here that the origin of the complex z plane, in both images, is at the center of these cuboids (or ‘rectangular prisms’ if you prefer that term). So that’s what the little red arrow is pointing at in both images and, hence, the final graph, consisting of the two superimposed surfaces (the imaginary and the real one), should also have one branch point only, i.e. at the origin.


I guess I am really boring my imaginary reader here by being so lengthy, but there’s a reason: when I first tried to imagine that ‘full picture’, I kept thinking there was some kind of problem along the whole x axis, instead of at the branch point only. Indeed, these two plots suggest that we have two or even four separate sheets here that are ‘joined at the hip’ so to say (or glued or welded or stitched together – whatever you want to call it) along the real axis (i.e. the x axis of the z plane). In such (erroneous) view, we’d have two sheets above the complex z plane (one representing the imaginary values of √z and one the real part) and two below it (again, one with the values of the imaginary part of √z and one representing the values of the real part). All of these ‘sheets’ have a sharp fold on the x axis indeed (everywhere else they are smooth), and that’s where they join in this (erroneous) view of things.

Indeed, such thinking is stupid and leads nowhere: the real and imaginary parts should always be considered together, and so there’s no such thing as two or four sheets really: there is only one Riemann surface with two (overlapping) branches. You should also note that where these branches start or end is quite arbitrary, because we can pick any angle α to define the starting point of a branch. There is also only one branch point. So there is no ‘line’ separating the Riemann surface into two separate pieces. There is only that branch point at the origin, and there we decide what branch of the function we’re going to look at: the n = 0 branch (i.e. we consider arg w = Θ to be equal to θ/2) or the n = ±1 branch (i.e. we take the Θ = θ/2 ± π equation to calculate the values for w = z1/2).

OK. Enough of these visualizations which, as I told you above already, are helpful only to some extent. Is there any other way of approaching the topic?

Of course there is. When trying to understand these Riemann surfaces (which is not easy when you read Penrose because he immediately jumps to Riemann surfaces involving three or more branch points, which makes things a lot more complicated), I found it useful to look for a more formal mathematical definition of a Riemann surface. I found such more formal definition in a series of lectures of a certain Dr. C. Teleman (Berkeley, Lectures on Riemann surfaces, 2003). He defines them as graphs too, or surfaces indeed, just like Penrose and others, but, in contrast, he makes it very clear, right from the outset, that it’s really the association (i.e. the relation) between z and w which counts, not these rather silly attempts to plot all these surfaces in three-dimensional space.

Indeed, according to Dr. Teleman’s introduction to the topic, a Riemann surface S is, quite simply, a set of ‘points’ (z, w) in the two-dimensional complex space C2 = C × C (so they’re not your typical points in the complex plane but points with two complex dimensions), such that w and z are related with each other by a holomorphic function w = f(z), which itself defines the Riemann surface. The same author also usefully points out that this holomorphic function is usually written in its implicit form, i.e. as P(z, w) = 0 (in the case of a polynomial function) or, more generally, as F(z, w) = 0.

There are two things you should note here. The first one is that this eminent professor suggests that we should not waste too much time trying to visualize things in the three-dimensional R3 = R × R × R space: Riemann surfaces are complex manifolds and so we should tackle them in their own space, i.e. the complex C2 space. The second thing is linked to the first: we should get away from these visualizations because these Riemann surfaces are usually much, much more complicated than a simple (complex) square root function and, hence, are usually not easy to deal with. That’s quite evident when we consider the general form of the complex-analytical (polynomial) P(z, w) function above, which is P(z, w) = wn + pn–1(z)wn–1 + … + p1(z)w + p0(z), with the pk(z) coefficients here being polynomials in z themselves.

That being said, Dr. Teleman immediately gives a ‘very simple’ example of such a function himself, namely w = [(z2 – 1)(z2 – k2)]1/2. Huh? If that’s regarded as ‘very simple’, you may wonder what follows. Well, just look him up, I’d say: I only read the first lecture and so there are fourteen more. 🙂

But he’s actually right: this function is not very difficult. In essence, we’ve got our square root function here again (because of the 1/2 exponent), but with four branch points this time, namely ±1 and ±k (i.e. the positive and negative square roots of 1 and k2 respectively, cf. the (z2 – 1) and (z2 – k2) factors in the argument of this function), instead of only one (the origin).

Despite the ‘simplicity’ of this function, Dr. Teleman notes that “we cannot identify this shape by projection (or in any other way) with the z-plane or the w-plane”, which confirms the above: Riemann surfaces are usually not simple and, hence, these ‘visualizations’ don’t help all that much. However, while not ‘identifying the shape’ of this particular square root function, Dr. Teleman does make the following drawing of the branch points:

Compactification 1

This is also some kind of cross-section of the Riemann surface, just like the one I made above for the ‘super-simple’ w = √z function: the dotted lines represent the imaginary part of w = [(z2 – 1)(z2 – k2)]1/2, and the non-dotted lines are the real part of the (double-valued) w function. So that’s like ‘my’ graph indeed, except that we’ve got four branch points here, so we can make a choice between one of the two branches at each of them.

[Note that one obvious difficulty in the interpretation of Dr. Teleman’s little graph above is that we should not assume that the complex numbers k and –k are actually lying on the same line as 1 and –1 (i.e. the real line). Indeed, k and –k are just standard complex numbers, and most complex numbers do not lie on the real line. While that makes the interpretation of that simple graph of Dr. Teleman somewhat tricky, it’s probably less misleading than all these fancy 3D graphs. In order to proceed, we can either assume that this z axis is some polygonal line really, representing line segments between these four branch points or, even better, I think we should just accept the fact that we’re looking at the z plane here along the z plane itself, so we can only see it as a line and we shouldn’t bother about where these points k and –k are located. In fact, their absolute value may actually be smaller than 1, in which case we’d probably want to change the order of the branch points in Dr. Teleman’s little graph.]

Dr. Teleman doesn’t dwell too long on this graph and, just like Penrose, immediately proceeds to what’s referred to as the compactification of the Riemann space, so that’s this ‘transformation’ of this complex surface into a donut (or a torus as it’s called in mathematics). So how does one actually go about that?

Well… Dr. Teleman doesn’t waste too many words on that. In fact, he’s quite cryptic, although he actually does provide much more of an explanation than Penrose does (Penrose’s treatment of the matter is really hocus-pocus I feel). So let me start with a small introduction of my own once again.

I guess it all starts with the ‘compactification’ of the real line, which is visualized below: we reduce the notion of infinity to a ‘point’ (this ‘point’ is represented by the symbol ∞ without a plus or minus sign) that bridges the two ‘ends’ of the real line (i.e. the positive and negative real half-line). Like that, we can roll up single lines and, by extension, the whole complex plane (just imagine rolling up the infinite number of lines that make up the plane I’d say :-)). So then we’ve got an infinitely long cylinder.
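To see that ‘point at infinity’ appear concretely, here’s a small numerical illustration (my own sketch in Python; the particular map below is the standard inverse stereographic projection onto a circle of diameter 1 sitting on the origin, projected from its ‘north pole’ (0, 1)). Both ends of the real line approach one and the same point:

```python
import math

def to_circle(t):
    """Map a real number t onto the circle of radius 1/2 centred at (0, 1/2),
    via inverse stereographic projection from the 'north pole' (0, 1)."""
    d = 1 + t * t
    return (t / d, t * t / d)

# Both 'ends' of the real line approach the SAME point (0, 1): the single
# point at infinity that compactifies the line into a circle.
x_plus, y_plus = to_circle(1e9)
x_minus, y_minus = to_circle(-1e9)
assert abs(x_plus) < 1e-6 and abs(y_plus - 1) < 1e-6
assert abs(x_minus) < 1e-6 and abs(y_minus - 1) < 1e-6

# Sanity check: the image of any t really lies on that circle.
x, y = to_circle(3.7)
assert abs(math.hypot(x, y - 0.5) - 0.5) < 1e-12
```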


But why would we want to roll up a line, or the whole plane for that matter? Well… I don’t know, but I assume there are some good reasons out there: perhaps we actually do have some thing going round and round, and so then it’s probably better to transform our ‘real line’ domain into a ‘real circle’ domain. The illustration below shows how it works for a finite sheet, and I’d also recommend my imaginary reader to have a look at the Riemann Project website, where you’ll find some nice animations (but do download Wolfram’s browser plugin if your Internet connection is slow: downloading the video takes time). One of the animations shows how a torus is, indeed, ideally suited as a space for a phenomenon characterized by two “independent types of periodicity”, not unlike the cylinder, which is the ‘natural space’ for “motion marked by a single periodicity”.

plane to torus

However, as I explain in a note below this post, the more natural way to roll or wrap up a sheet or a plane is to wrap it around a sphere, rather than trying to create a donut. Indeed, if we roll the infinite plane up into a donut, we’ll still have a line representing infinity (see below), and so it looks quite ugly: if you’re tying ends up, it’s better you tie all of them up, and so that’s what you do when you wrap the plane up around a sphere, instead of a torus.

From plane to torus

OK. Enough on planes. Back to our Riemann surface. Because the square root function w has two values for each z, we cannot make a simple sphere: we have to make a torus. That’s because a sphere has one complex dimension only, just like a plane: the sphere is just the plane plus that one ‘point’ at infinity, so to say. In contrast, a double-valued function has two ‘dimensions’ so to say and, hence, we have to transform the Riemann surface into something which accommodates that, and so that’s a torus (or a coffee cup :-)). In topological jargon, a torus has genus one, while the complex plane (and the Riemann sphere) has genus zero.

[Please do note that this will be the case regardless of the number of branch points. Indeed, Penrose gives the example of the function w = (1 – z3)1/2, which has three branch points, namely the three roots of the 1 – z3 expression (these three roots are obviously equal to the three cube roots of unity). However, ‘his’ Riemann surface is also a Riemann surface of a square root function (albeit one with a more complicated form than the ‘core’ w = z1/2 example) and, hence, he also wraps it up as a donut indeed, instead of a sphere or something else.]

I guess that you, my imaginary reader, have stopped reading all of this nonsense. If you haven’t, you’re probably thinking: why don’t we just do it? How does it work? What’s the secret?

Frankly, the illustration in Penrose’s Road to Reality (i.e. Fig. 8.2 on p. 137) is totally useless in terms of understanding how it’s being done really. In contrast, Dr. Teleman is somewhat more explicit and so I’ll follow him here as much as I can while I try to make sense of it (which is not as easy as you might think). 

The short story is the following: Dr. Teleman first makes two ‘cuts’ (or ‘slits’) in the z plane, using the four branch points as the points where these ‘cuts’ start and end. He then uses these cuts to form two cylinders, and then he joins the ends of these cylinders to form that torus. That’s it. The drawings below illustrate the proceedings. 


Compactification 3

Huh? OK. You’re right: the short story is not correct. Let’s go for the full story. In order to be fair to Dr. Teleman, I will literally copy all what he writes on what is illustrated above, and add my personal comments and interpretations in square brackets (so when you see those square brackets, that’s [me] :-)). So this is what Dr. Teleman has to say about it:

The function w = [(z2 – 1)(z2 – k2)]1/2 behaves like the [simple] square root [function] near ±1 and ±k. The important thing is that there is no continuous single-valued choice of w near these points [shouldn’t he say ‘on’ these points, instead of ‘near’?]: any choice of w, followed continuously round any of the four points, leads to the opposite choice upon return.

[The formulation may sound a bit weird, but it’s the same as what happens on the simple z1/2 surface: when we’re on one of the two branches, the argument of w changes only gradually and, going around the origin, starting from one root of z (let’s say the ‘positive’ root w1), we arrive, after one full loop around the origin on the z plane (i.e. we add 2π to arg z = θ), at the opposite value, i.e. the ‘negative’ root w2 = –w1.] 

Defining a continuous branch for the function necessitates some cuts. The simplest way is to remove the open line segments joining 1 with k and –1 with –k. On the complement of these segments [read: everywhere else on the z plane], we can make a continuous choice of w, which gives an analytic function (for z ≠ ±1, ±k). The other branch of the graph is obtained by a global change of sign. [Yes. That’s obvious: the two roots are each other’s opposite (w2 = –w1) and so, yes, the two branches are, quite simply, just each other’s opposite.]

Thus, ignoring the cut intervals for a moment, the graph of w breaks up into two pieces, each of which can be identified, via projection, with the z-plane minus two intervals (see Fig. 1.4 above). [These ‘projections’ are part and parcel of this transformation business it seems. I’ve encountered more of that stuff and so, yes, I am following you, Dr. Teleman!]

Now, over the said intervals [i.e. between the branch points], the function also takes two values, except at the endpoints where those coincide. [That’s true: even if the real parts of the two roots are the same (like on the negative real axis for our z1/2 example), the imaginary parts are different and, hence, the roots are different for points between the various branch points, and vice versa of course. This is actually one of the reasons why I don’t like Penrose’s illustration on this matter: his illustration suggests that this is not the case.]

To understand how to assemble the two branches of the graph, recall that the value of w jumps to its negative as we cross the cuts. [At first, I did not get this, but it’s the consequence of Dr. Teleman ‘breaking up the graph into two pieces’. So he separates the two branches indeed, and he does so at the ‘slits’ he made, so that’s between the branch points. It follows that the value of w will indeed jump to its opposite value as we cross them, because we’re jumping onto the other branch there.]

Thus, if we start on the upper sheet and travel that route, we find ourselves exiting on the lower sheet. [That’s the little arrows on these cuts.] Thus, (a) the far edges of the cuts on the top sheet must be identified with the near edges of the cuts on the lower sheet; (b) the near edges of the cuts on the top sheet must be identified with the far edges on the lower sheet; (c) matching endpoints are identified; (d) there are no other identifications. [Point (d) seems to be somewhat silly but I get it: here he’s just saying that we can’t do whatever we want: if we glue or stitch or weld all of these patches of space together (or should I say copies of patches of space?), we need to make sure that the points on the edges of these patches are the same indeed.]

A moment’s thought will convince us that we cannot do all this in R3, with the sheets positioned as depicted, without introducing spurious crossings. [That’s why Brown and Churchill say it’s ‘physically impossible.’] To rescue something, we flip the bottom sheet about the real axis.  

[Wow! So that’s the trick! That’s the secret – or at least one of them! Flipping the sheet about the real axis means taking the complex conjugate, i.e. replacing every point w = u + iv by w* = u – iv, so the imaginary parts change sign and the matching edges line up. Now that’s a smart move!] 

The matching edges of the cuts are now aligned, and we can perform the identifications by stretching each of the surfaces around the cut to pull out a tube. We obtain a picture representing two planes (ignore the boundaries) joined by two tubes (see Fig. 1.5.a above).

[Hey! That’s like the donut-to-coffee-cup animation, isn’t it? Pulling out a tube? Does that preserve angles and all that? Remember it should!]

For another look at this surface, recall that the function z → R2/z identifies the exterior of the circle, i.e. the region ¦z¦ > R, with the punctured disc 0 < ¦z¦ < R (it’s a punctured disc, so its center is not part of the disc). Using that, we can pull the exteriors of the discs, missing from the picture above, into the picture as punctured discs, and obtain a torus with two missing points as the definitive form of our Riemann surface (see Fig. 1.5.b).

[Dr. Teleman is doing another hocus-pocus thing here. So we have those tubes with an infinite plane hanging on them, and so it’s obvious we just can’t glue these two infinite planes together because it wouldn’t look like a donut 🙂. So we first need to transform them into something more manageable, and so that’s the punctured discs he’s describing. I must admit I don’t quite follow him here, but I can sort of sense – a little bit at least – what’s going on.] 
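Still, the first part of his statement is easy to verify numerically. The Python sketch below (my own) checks that z → R2/z does indeed swap the exterior of the circle ¦z¦ = R with the punctured disc, leaving the circle itself in place:

```python
# The map z -> R**2 / z that Dr. Teleman invokes: it exchanges the exterior
# of the circle |z| = R with the punctured disc 0 < |w| < R.
R = 2.0
for z in (5 + 0j, -3 + 4j, 100j):               # points with |z| > R
    w = R**2 / z
    assert 0 < abs(w) < R                        # image lands inside the disc
    assert abs(abs(w) * abs(z) - R**2) < 1e-9    # since |w| * |z| = R**2

# Points ON the circle stay on the circle.
z = R * 1j
assert abs(abs(R**2 / z) - R) < 1e-12
```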


Phew! Yeah, I know. My imaginary reader will surely feel that I don’t have a clue of what’s going on, and that I am actually not quite ready for all of this high-brow stuff – or not yet at least. He or she is right: my understanding of it all is rather superficial at the moment and, frankly, I wish either Penrose or Teleman would explain this compactification thing somewhat better. I also would like them to explain why we actually need to do this compactification thing, why it’s relevant for the real world.

Well… I guess I can only try to move forward as well as I can. I’ll keep you/myself posted.

Note: As mentioned above, there is more than one way to roll or wrap up the complex plane, and the most natural way of doing this is to do it around a sphere, i.e. the so-called Riemann sphere, which is illustrated below. This particular ‘compactification’ exercise is equivalent to a so-called stereographic projection: it establishes a one-on-one relationship between all points on the sphere and all points of the so-called extended complex plane, which is the complex plane plus the ‘point’ at infinity (see my explanation on the ‘compactification’ of the real line above).


But so Riemann surfaces are associated with complex-analytic functions, right? So what’s the function? Well… The function with which the Riemann sphere is associated is w = 1/z. [1/z is equal to z*/¦z¦2, with z* = x – iy, i.e. the complex conjugate of z = x + iy, and ¦z¦ the modulus or absolute value of z, and so you’ll recognize the formulas for the stereographic projection here indeed.]
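That identity, 1/z = z*/¦z¦2, takes one line to check numerically (my own Python sketch, just to reassure myself):

```python
# Check the identity 1/z = z*/|z|**2 for a few arbitrary complex numbers
# (z.conjugate() is z* = x - iy).
for z in (3 + 4j, -1 + 1j, 0.5 - 2j):
    assert abs(1 / z - z.conjugate() / abs(z) ** 2) < 1e-12
```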

OK. So what? Well… Nothing much. This mapping from the complex z plane to the complex w plane is conformal indeed, i.e. it preserves angles (but not areas) and whatever else comes with complex analyticity. However, it’s not as straightforward as Penrose suggests. The image below (taken from Brown and Churchill) shows what happens to lines parallel to the x and y axis in the z plane respectively: they become circles in the w plane. So this particular function actually does map circles to circles (which is what Möbius transformations such as 1/z do, rather than holomorphic functions in general), but only if we think of straight lines as being particular cases of circles, namely circles “of infinite radius”, as Penrose puts it.

inverse of z function
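In fact, we can verify numerically that a vertical line in the z plane becomes a circle in the w plane. The Python sketch below is my own; the formulas for the centre 1/(2c) and radius 1/(2¦c¦) are the standard ones for the image of the line Re(z) = c under w = 1/z:

```python
# Sample points on the vertical line Re(z) = c and map them through w = 1/z.
# The image should be the circle through the origin with centre 1/(2c)
# and radius 1/(2|c|).
c = 2.0
center, radius = 1 / (2 * c), 1 / (2 * abs(c))
for y in (-10, -1, -0.3, 0.1, 2, 50):
    w = 1 / complex(c, y)
    assert abs(abs(w - center) - radius) < 1e-12
```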

Frankly, it is quite amazing what Penrose expects in terms of mental ‘agility’ of the reader. Brown and Churchill are much more formal in their approach (lots of symbols and equations I mean, and lots of mathematical proofs) but, to be honest, I find their stuff easier to read, even if their textbook is a full-blown graduate level course in complex analysis.

I’ll conclude this post here with two more graphs: they give an idea of how the Cartesian and polar coordinate spaces can be mapped to the Riemann sphere. In both cases, the grid on the plane appears distorted on the sphere: the grid lines are still perpendicular, but the areas of the grid squares shrink as they approach the ‘north pole’.



The mathematical equations for the stereographic projection, and the illustration above, suggest that the w = 1/z function is basically just another way to transform one coordinate system into another. But then I must admit there is a lot of finer print that I don’t understand – as yet that is. It’s sad that Penrose doesn’t help out very much here.


Euler’s formula

I went trekking (to the Annapurna Base Camp this time) and, hence, left the math and physics books alone for a week or two. When I came back, it was like I had forgotten everything, and I wasn’t able to re-do the exercises. Back to the basics of complex numbers once again. Let’s start with Euler’s formula:

eix = cos(x) + isin(x)

In his Lectures on Physics, Richard Feynman calls this equation ‘one of the most remarkable, almost astounding, formulas in all of mathematics’, so it’s probably no wonder I find it intriguing and, indeed, difficult to grasp. Let’s look at it. So we’ve got the real (but irrational) number e in it. That’s a fascinating number in itself because it pops up in different mathematical expressions which, at first sight, have nothing in common with each other. For example, e can be defined as the sum of the infinite series e = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + … etcetera (n! stands for the factorial of n in this formula), but one can also define it as that unique positive real number for which d(et)/dt = et (in other words, as the base of an exponential function which is its own derivative). And, last but not least, there are also some expressions involving limits which can be used to define e. Where to start? More importantly, what’s the relation between all these expressions and Euler’s formula?
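Just to convince myself that these definitions really give the same number, here’s a quick Python check (a sketch of my own): the factorial series converges to e very quickly, while the limit definition converges much more slowly.

```python
import math

# Definition 1: the factorial series 1/0! + 1/1! + 1/2! + ...
e_series = sum(1 / math.factorial(n) for n in range(20))
assert abs(e_series - math.e) < 1e-12

# Definition 2 (a limit definition): (1 + 1/n)**n for large n.
# The error shrinks like e/(2n), so convergence is slow.
n = 1_000_000
e_limit = (1 + 1 / n) ** n
assert abs(e_limit - math.e) < 1e-5
```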

First, we should note that eix is not just any number: it is a complex number – as opposed to the simpler ex expression, which denotes the real exponential function (as opposed to the complex exponential function ez). Moreover, we should note that eix is a complex number on the unit circle. So, using polar coordinates, we should say that eix is a complex number with modulus 1 and argument x. The modulus is the absolute value of the complex number, i.e. the distance from 0 to the point we are looking at or, alternatively, the magnitude of the vector defined by that point. The argument is the angle (expressed in radians) between the positive real axis and the line from 0 to the point we are looking at.

Now, it is self-evident that cos(x) + isin(x) represents exactly the same: a point on the unit circle defined by the angle x. But so that doesn’t prove Euler’s formula: it only illustrates it. So let’s go to one or the other proof of the formula to try to understand it somewhat better. I’ll refer to Wikipedia for proving Euler’s formula in extenso but let me just summarize it. The Wikipedia article (as I looked at it today) gives three proofs.

The first proof uses the power series expansion (yes, the Taylor/Maclaurin series indeed – more about that later) for the exponential function: eix = 1 + ix + (ix)2/2! + (ix)3/3! + … etcetera. We then substitute using i2 = –1, i3 = –i etcetera and, when we re-arrange the terms, we find the Maclaurin series for the cos(x) and sin(x) functions indeed. I will come back to these power series in another post.
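This first proof is easy to check numerically: a partial sum of the series for eix should land right on cos(x) + isin(x). A quick sketch (the helper name exp_series is mine, not standard):

```python
import math

def exp_series(z, terms=30):
    """Partial sum of the power series 1 + z + z^2/2! + z^3/3! + ..."""
    return sum(z**n / math.factorial(n) for n in range(terms))

x = 0.75
lhs = exp_series(1j * x)                 # the series, evaluated at ix
rhs = complex(math.cos(x), math.sin(x))  # cos(x) + i*sin(x)
print(lhs)
print(rhs)  # the two agree to machine precision
```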

The second proof uses one of the limit definitions for ex but applies it to the complex exponential function. Indeed, one can write ez (with z = x+iy) as ez = lim(1 + z/n)n for n going to infinity. The proof substitutes ix for z and then calculates the limit for very large (or infinite) n indeed. This proof is less obvious than it seems because we are dealing with power series here and so one has to take into account issues of convergence and all that.
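Numerically, the limit indeed creeps towards cos(x) + isin(x) as n grows – a quick sketch of the second proof’s claim:

```python
import cmath

x = 1.0
for n in (10, 1000, 100000):
    print(n, (1 + 1j * x / n) ** n)  # approaches e^(ix) as n grows

print(cmath.exp(1j * x))  # the limit value: cos(1) + i*sin(1)
```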

The third proof also looks complicated but, in fact, is probably the most intuitive of the three because it uses the derivative definition of e. To be more precise, it takes the derivative of both sides of Euler’s formula using the polar coordinates expression for complex numbers. Indeed, eix is a complex number and, hence, can be written as some number z = r(cosθ + isinθ), and so the question to solve here is: what are r and θ? We need to write these two values as a function of x. How do we do that? Well… If we take the derivative of both sides, we get d(eix)/dx = ieix = (cosθ + isinθ)dr/dx + r[d(cosθ + isinθ)/dθ]dθ/dx. That’s just the chain rule for derivatives of course. Now, writing it all out and equating the real and imaginary parts on both sides of the expression yields dr/dx = 0 and dθ/dx = 1. In addition, we must have that, for x = 0, ei·0 = 1, so we have r(0) = 1 (the modulus of the complex number (1, 0) is one) and θ(0) = 0 (the argument of (1, 0) is zero). It follows that r = 1 and θ = x, which proves the formula.
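The key step in this third proof – d(eix)/dx = ieix – can itself be checked with a central finite difference, as a small sanity check:

```python
import cmath

# numerically check d(e^(ix))/dx = i*e^(ix) at an arbitrary point
x, h = 0.6, 1e-6
numerical = (cmath.exp(1j * (x + h)) - cmath.exp(1j * (x - h))) / (2 * h)
analytic = 1j * cmath.exp(1j * x)  # the claimed derivative
print(numerical)
print(analytic)  # the two should match to many decimals
```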

While these proofs are (relatively) easy to understand, the formula remains weird, as evidenced also by its special cases, like ei·0 = ei2π = 1 = –eiπ = –e–iπ or, equivalently, eiπ + 1 = 0, a formula which combines the five most basic quantities in mathematics: 0, 1, i, e and π. It is an amazing formula because we have two irrational numbers here, e and π, whose definitions do not refer to each other at all (last time I checked, π was still being defined as the simple ratio of a circle’s circumference to its diameter, while the various definitions of e have nothing to do with circles), and so we combine these two seemingly unrelated numbers, also inserting the imaginary unit i (using iπ as an exponent for e), and we get minus 1 as a result (eiπ = –1). Amazing indeed, isn’t it?
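Python’s cmath module lets you verify the identity directly – up to floating-point precision, of course, since the machine value of π is not exactly π:

```python
import cmath
import math

print(cmath.exp(1j * math.pi) + 1)  # not exactly 0, but within ~1e-16 of it
print(cmath.exp(1j * math.pi))      # very close to -1
```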

[…] Well… I’d say at least as amazing as the Taylor or Maclaurin expansion of a function – but I’ll save my thoughts on those for another post (even if I am using the results of these expansions in this post). In my view, what Euler’s formula shows is the amazing power of mathematical notation really – and the creativity behind it. Indeed, let’s look at what we’re doing with complex numbers: we start from one or two definitions only and suddenly all kinds of wonderful stuff starts popping up. It goes more or less like this really:

We start off with these familiar x and y coordinates of points in a plane. Now we call the x-axis the real axis and then, just to distinguish them from the real numbers, we call the numbers on the y-axis imaginary numbers. Again, it is just to distinguish them from the real numbers because, in fact, imaginary numbers are not imaginary at all: they are as real as the real numbers – or perhaps we should say that the real numbers are as imaginary as the imaginary numbers because, when everything is said and done, the real numbers are mental constructs as well, aren’t they? Imaginary numbers just happen to lie on another line, perpendicular to our so-called real line, and so that’s why we add a little symbol i (the so-called imaginary unit) when we write them down. So we write 1i (or i tout court), 2i, 3i etcetera, or i/2 or whatever (it doesn’t matter if we write i before the real number or after – as long as we’re consistent).

Then we combine these two numbers – the real and imaginary numbers – to form a so-called complex number, which is nothing but a point (x, y) in this Cartesian plane. Indeed, while complex numbers are somewhat more complex than the numbers we’re used to in daily life, they are not out of this world I’d say: they’re just points in space, and so we can also represent them as vectors (‘arrows’) from the origin to (x, y).

But so this is what we are doing really: we combine the real and imaginary numbers by using the very familiar plus (+) sign, so we write z = x + iy. Now that is actually where the magic starts: we are not adding the same things here, like we would do when we are counting apples or so, or when we are adding integers or rational or real numbers in general. No, we are adding two different things here – real and imaginary numbers – which, in fact, we cannot really add. Indeed, your mommy told you that you cannot compare apples with oranges, didn’t she? Well… That’s exactly what we do here really, and so we will keep these real and imaginary numbers separate in our calculations indeed: we will add the real parts of complex numbers with each other only, and the imaginary parts also with each other only.

Addition is quite straightforward: we just add the two vectors. Multiplication is somewhat more tricky but (geometrically) easy to interpret as well: the product of two complex numbers is a vector with a length equal to the product of the lengths of the two vectors we are multiplying (i.e. the two complex numbers which make up the product), and its angle with the real axis is the sum of the angles of the two original vectors. From this definition, many things follow, all equally amazing indeed, but one of these amazing facts is that i2 = –1, i3 = –i, i4 = 1, i5 = i, etcetera. Indeed: multiplying a complex number z = x + iy = (x, y) with the imaginary unit i amounts to rotating it 90° (counterclockwise) about the origin. So we are not defining i2 as being equal to minus 1 (many textbooks treat this equality as a definition indeed): it just comes as a fact which we can derive from the earlier definition of the complex product. Sweet, isn’t it?
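The geometric interpretation is easy to check with Python’s built-in complex type: moduli multiply, arguments add, and multiplying by i rotates the vector by 90°:

```python
import cmath

z1 = 1 + 1j   # modulus sqrt(2), argument pi/4
z2 = 2j       # modulus 2, argument pi/2
product = z1 * z2

print(abs(product), abs(z1) * abs(z2))                          # lengths multiply
print(cmath.phase(product), cmath.phase(z1) + cmath.phase(z2))  # angles add (modulo 2*pi)

# multiplying by i turns (x, y) into (-y, x): a 90-degree counterclockwise rotation
print(1j * (3 + 4j))  # (-4+3j)
```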

So we have addition and multiplication now. We want to do much more of course. After defining addition and multiplication, we want to do complex powers, and so it’s here that this business with e pops up.

We first need to remind ourselves of the simple fact that the number e is just a real number: it’s equal to 2.718281828459045235360287471 etcetera. We have to write ‘etcetera’ because e is an irrational number, which – whatever the term ‘irrational’ may suggest in everyday language – simply means that e cannot be written as a fraction of two integers (so irrational means ‘not rational’). e is also a transcendental number – a word which suggests all kinds of mystical properties but which, in mathematics, only means that we cannot write it as a root of some polynomial with rational coefficients. So it’s a weird number. That being said, it is also the so-called ‘natural’ base for the exponential function. Huh? Why would mathematicians take such a strange number as a so-called ‘natural’ base? They must be irrational, no? Well… No. If we take e as the base for the exponential function ex (so that’s just this real (but irrational) number e to the power x, with x being the variable running along the x-axis: hence, we have a function here which takes a value from the set of real numbers and yields some other real number), then we have a function which is its own derivative: d(ex)/dx = ex. It is also the natural base for the logarithmic function and, as mentioned above, it kind of ‘pops up’ – quite ‘naturally’ indeed, I’d say – in many other expressions, such as compound interest calculations for example, or the general exponential function ax = exlna. In other words, we need the exp(x) and ln(x) functions to define powers of real numbers in general. So that’s why mathematicians call it ‘natural’.

While the example of compound interest calculations does not sound very exciting, all these formulas with e and exponential functions and what have you did inspire all these 18th century mathematicians – like Euler – who were in search of a logical definition of complex powers.

Let’s state the problem once again: we can do addition and multiplication of complex numbers but so the question is how to do complex powers. When trying to figure that one out, Euler obviously wanted to preserve the usual properties of powers, like axay = ax+y and, effectively, this property of the so-called ‘natural’ exponential function that d(ex)/dx = ex. In other words, we also want the complex exponential function to be its own derivative so d(ez)/dz should give us ez once again.

Now, while Euler was thinking of that (and of many other things too of course), he was well aware of the fact that you can expand ex into that power series which I mentioned above: ex = 1/0! + x/1! + x2/2! + x3/3! + … etcetera. So Euler just sat down, substituted the real number x with the imaginary number ix and looked at it: eix = 1 + ix + (ix)2/2! + (ix)3/3! + … etcetera. Now lo and behold! Taking into account that i2 = –1, i3 = –i, i4 = 1, i5 = i, etcetera, we can put that in and re-arrange the terms, and so Euler found that this equation becomes eix = (1 – x2/2! + x4/4! – x6/6! + …) + i(x – x3/3! + x5/5! – …). Now these two terms correspond to the Maclaurin series for the cosine and sine functions respectively, so there he had it: eix = cos(x) + isin(x). His formula: Euler’s formula!

From there, there was only one more step to take, and that was to write ez = ex+iy as exeiy, and so there we have our definition of a complex power: it is a product of two factors – ex and eiy – both of which we have effectively defined now. Note that the ex factor is just a real number, even if we write it as ex: it acts as a sort of scaling factor for eiy which, you will remember (as we pointed out above already), is a point on the unit circle. More generally, it can be shown that ex is the absolute value of ez (or the modulus or length or magnitude of the vector – whatever term you prefer: they all refer to the same thing), while y is the argument of the complex number ez (i.e. the angle of the vector ez with the real axis). [And, yes, for those who would still harbor some doubts here: ez is just another complex number and, hence, a two-dimensional vector, i.e. just a point in the Cartesian plane, so we have a function which takes a complex number z as input and yields another complex number as output.]
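Again, this is easy to verify with Python’s complex exponential (cmath.exp): the modulus of ez does come out as ex and the argument as y:

```python
import cmath
import math

z = 0.5 + 2.0j  # z = x + iy with x = 0.5, y = 2.0
w = cmath.exp(z)

print(abs(w), math.exp(0.5))  # the modulus of e^z equals e^x
print(cmath.phase(w), 2.0)    # the argument of e^z equals y (y already lies in (-pi, pi] here)
```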

Of course, you will note that we don’t have something like zw here, i.e. a complex base z with a complex exponent w, or even a formula for complex powers of real numbers in general, i.e. a formula for aw with a any real number (so not only e but any real number indeed) and w a complex exponent. However, that’s a problem which can be solved easily by writing z and w in their so-called polar form: we write z as z = |z|eiθ = |z|(cosθ + isinθ) and w as w = |w|eiσ = |w|(cosσ + isinσ), and then we can take it further from there. [Note that |z| and |w| represent the modulus (i.e. the length) of z and w respectively, and the angles θ and σ are obviously the arguments of the same z and w respectively.] Of course, if z is a positive real number (so if y = 0), then the angle θ will be zero (i.e. the angle of the real axis with itself) and so z will be equal to a real number (i.e. its real part only, as its imaginary part is zero), and then we are back to the case of a real base and a complex exponent. In other words, that covers the aw case.
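The standard route – and, as it happens, the one Python’s complex power operator takes – goes through the (principal) logarithm: zw is defined as ew·lnz. A sketch:

```python
import cmath

z = 1 + 1j
w = 0.5 + 2j

via_log = cmath.exp(w * cmath.log(z))  # z^w defined as e^(w*ln z), principal branch
builtin = z ** w                       # Python's ** uses the same principal value

print(via_log)
print(builtin)  # both give the same complex number
```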

[…] Well… Easily? OK. I am simplifying a bit here – as I need to keep the length of this post manageable – but, in fact, it really is a matter of using these common properties of powers (such as ea+bi·ec = e(a+c)+bi), and it actually does all work out. And all of this magic did actually start with simply ‘adding’ the so-called ‘real’ numbers x on the x-axis to the so-called ‘imaginary’ numbers on the y-axis. 🙂

Post scriptum:

Penrose’s Road to Reality dedicates a whole chapter to complex exponentiation (Chapter 5). However, the development is not all that simple and straightforward indeed. The first step in the process is to take integer powers – and integer roots – of complex numbers, so that’s zn for n = 0, ±1, ±2, ±3… etcetera (or z1/2, z1/3, z1/4 if we’re talking integer roots). That’s easy because it can be solved by using the old formula of Abraham de Moivre: (cosθ + isinθ)n = cos(nθ) + isin(nθ) (de Moivre penned this down in 1707 already, more than 40 years before Euler looked at the matter). However, going from there to full-blown complex powers is, unfortunately, not so straightforward, as it involves a bit of a detour: we need to work with the inverse of the (complex) exponential function ez, i.e. the (complex) natural logarithm.

Now that is less easy than it sounds. Indeed, while the definition of a complex logarithm is as straightforward as the definition of real logarithms (lnz is a function for which elnz = z), the function itself is a bit more… well… complex I should say. For starters, it is a multiple-valued function: if we write the solution w = lnz as w = u + iv, then it is obvious that ew will be equal to eu+iv = eueiv, and this complex number ew can then be written in its polar form ew = reiθ with r = eu and v = θ + 2nπ. Of course, ln(eu+iv) = u + iv and so the solution w will look like w = lnr + i(θ + 2nπ) with n = 0, ±1, ±2, ±3 etcetera. In short, we have an infinite number of solutions for w (one for every n we choose) and so we have this problem of multiple-valuedness indeed. We will not dwell on this here (at least not in this post) but simply note that this problem is linked to the properties of the complex exponential function ez itself. Indeed, the complex exponential function ez has very different properties than the real exponential function ex. First, we should note that, unlike ex (which, as we know, goes from zero at the far end of the negative side of the real axis to infinity as x gets big on the positive side), ez is a periodic function – so it oscillates and yields the same values after some time – with this ‘after some time’ being the periodicity of the function. Indeed, ez = ez+2πi and so its period is 2πi (note that this period is an imaginary number – but so it’s a ‘real’ period, if you know what I mean :-)). In addition, and this is also very much unlike the real exponential function ex, ez can be negative (as well as assume all kinds of other complex values). For example, eiπ = –1, as we noted above already.
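Both properties – the 2πi periodicity of ez and the multiple branches of lnz – can be seen directly in Python’s cmath (cmath.log returns the principal value):

```python
import cmath
import math

z = 1.0 + 0.5j
print(cmath.exp(z))
print(cmath.exp(z + 2j * math.pi))  # the same value: e^z has period 2*pi*i

w = cmath.log(z)  # principal value of ln z
for n in (-1, 0, 1):
    branch = w + 2j * math.pi * n     # the other solutions differ by 2*pi*i*n
    print(branch, cmath.exp(branch))  # every branch exponentiates back to z
```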

That being said, the problem of multiple-valuedness can be solved through the definition of a principal value of lnz and that, then, leads us to what we want here: a consistent definition of a complex power of a complex base (or the definition of a true complex exponential (and logarithmic) function in other words). To those who would want to see the details of this (i.e. my imaginary readers :-)), I would say that Penrose’s treatment of the matter in the above-mentioned Chapter 5 of The Road to Reality is rather cryptic – presumably because he has to keep his book around 1000 pages only (not a lot to explain all of the Laws of the Universe) and, hence, Brown & Churchill’s course (or whatever other course dealing with complex analysis) probably makes for easier reading.

[As for the problem of multiple-valuedness, we should probably also note the following: when taking the nth root of a complex number (i.e. z1/n with n = 2, 3, etcetera), we also obtain a set of n values ck (with k = 0, 1, 2,… n–1), rather than one value only. However, once we have one of these values, we have all of them, as we can write these ck as ck = r1/nei(θ/n+2kπ/n) (with the original complex number z equal to z = reiθ), so we could also just consider the principal value c0 and, as such, consider the function as a single-valued one. In short, the problem of multiple-valued functions pops up almost everywhere in the complex space, but it is not a real issue. In fact, we encounter the problem of multiple-valuedness as soon as we extend the exponential function in the space of the real numbers and also allow rational and real exponents, instead of positive integers only. For example, the equation x2 = 4 has two solutions, +2 and –2, so we have two candidate values for 41/2 and, hence, multiple values. Another example would be the 4th roots of 16: there are four of them: +2, –2 and then the two imaginary roots +2i and –2i. However, standard practice is that we only take the positive real value into account in order to ensure a ‘well-behaved’ exponential function. Indeed, the standard definition of a real exponential function is bx = (elnb)x = exlnb and so, if x = 1/n, we’ll only assign the positive real nth root to bx. Standard practice will also restrict the value of b to a positive real number (b > 0). These conventions not only ensure a positive result but also continuity of the function and, hence, the existence of a derivative which we can then use to do other things. By the way, the definition also shows – once again – why e is such a nice (or ‘natural’) number: we can use it to calculate the value of any exponential function (for any real base b > 0). But so we had mentioned that already, and it’s now really time to stop writing. I think the point is clear.]
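The nth-root formula mentioned here translates directly into code. A sketch (the helper name nth_roots is mine), using the four 4th roots of 16 as the example:

```python
import cmath
import math

def nth_roots(z, n):
    """All n distinct nth roots of z: r^(1/n) * e^(i*(theta/n + 2*k*pi/n))."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), theta / n + 2 * k * math.pi / n) for k in range(n)]

# the four 4th roots of 16: 2, 2i, -2, -2i (up to rounding)
for root in nth_roots(16, 4):
    print(root, root ** 4)  # each one, raised to the 4th power, gives 16 back
```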