Compactifying complex spaces

In this post, I’ll try to explain how Riemann surfaces (or topological spaces in general) are transformed into compact spaces. Compact spaces are, in essence, closed and bounded subsets of some larger space. The larger space is unbounded – or ‘infinite’ if you want (the term ‘infinite’ is less precise – from a mathematical point of view at least).

I am sure you have all seen it: the Euclidean or complex plane gets wrapped around a sphere (the so-called Riemann sphere), and the Riemann surface of a square root function becomes a torus (i.e. a donut-like object). And then the donut becomes a coffee cup (yes: just type ‘donut and coffee cup’ and look at the animation). The sphere and the torus (and the coffee cup of course) are compact spaces indeed – as opposed to the infinite plane, or the infinite Riemann surface representing the domain of a (complex) square root function. But what does it all mean?

Let me, for clarity, start with a note on the symbols that I’ll be using in this post. I’ll use a boldface z for the complex number z = (x, y) = reiθ (unlike what I did in my previous posts, in which I often used standard letters for complex numbers), and likewise for any other complex number, such as w = u + iv. That’s because I want to reserve the non-boldface letter z for the (real) vertical z coordinate in the three-dimensional (Cartesian or Euclidean) coordinate space, i.e. R3. Likewise, non-boldface letters such as x, y or u and v denote other real numbers. Note that I will also use a boldface R and a boldface C to denote the set of real numbers and the complex space respectively. That’s just because the WordPress editor has its limits and, among other things, it can’t do blackboard bold (i.e. those double-struck symbols which you usually see as symbols for the set of real numbers and the set of complex numbers respectively). OK. Let’s go for it now.

In my previous post, I introduced the concept of a Riemann surface using the multivalued square root function w = z1/2 = √z. The square root function has only two values. If we write z as z = reiθ, then we can write these two values as w1 = √r ei(θ/2) and w2 = √r ei(θ/2 ± π). Now, √r ei(θ/2 ± π) is equal to √r ei(±π)ei(θ/2) = –√r ei(θ/2) and, hence, the second root is just the opposite of the first one, so w2 = –w1.
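Just to make this tangible, here’s a quick numerical check of that sign relation – a little Python sketch (the sample value of z is arbitrary):

```python
import cmath

# A sample complex number z = r·e^(iθ)
z = complex(3, 4)
r, theta = abs(z), cmath.phase(z)

# The two square roots: w1 = √r·e^(iθ/2) and w2 = √r·e^(i(θ/2 + π))
w1 = cmath.sqrt(r) * cmath.exp(1j * theta / 2)
w2 = cmath.sqrt(r) * cmath.exp(1j * (theta / 2 + cmath.pi))

# The second root is the opposite of the first (w2 = -w1), and both square back to z
assert abs(w1 + w2) < 1e-12
assert abs(w1**2 - z) < 1e-12
assert abs(w2**2 - z) < 1e-12
```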

Introducing the concept of a Riemann surface using a ‘simple’ quadratic function may look easy enough but, in fact, this square root function is actually not the easiest one to start with. First, a simple single-valued function, such as w = 1/z (i.e. the function that is associated with the Riemann sphere), would obviously make for a much easier point of departure. Secondly, the fact that we’re working with a limited number of values, as opposed to an infinite number of values (which is the case for the log z function, for example), introduces this particularity of a surface turning back into itself which, as I pointed out in my previous post, makes the visualization of the surface somewhat tricky – to the extent that it may actually prevent a good understanding of what’s actually going on.

Indeed, in the previous post I explained how the Riemann surface of the square root function can be visualized in the three-dimensional Euclidean space (i.e. R3). However, such representations only show the real part of z1/2, i.e. the vertical distance Re(z1/2) = √r cos(θ/2 + nπ), with n = 0 or ± 1. So these representations, like the one below for example, do not show the imaginary part, i.e.  Im(z1/2) = √r sin(θ/2 + nπ) (n = 0, ± 1).

That’s both good and bad. It’s good because, in a graph like this, you want one point to represent one point only, and you wouldn’t get that if you superimposed the plot of the imaginary part of w = z1/2 on the plot showing the real part only. But it’s also bad, because one often forgets that we’re only seeing some part of the ‘real’ picture here, namely the real part, and so one often forgets to imagine the imaginary part. 🙂 

sqrt

The thick black polygonal line in the two diagrams in the illustration above shows how, on this Riemann surface (or at least its real part), the argument θ of z = reiθ goes from 0 to 2π (and further), i.e. we’re making (more than) a full turn around the vertical axis, as the argument Θ of w = z1/2 = √reiΘ makes half a turn only (i.e. Θ goes from 0 to π only). That’s self-evident because Θ = θ/2. [The first diagram in the illustration above represents the (flat) w plane, while the second one is the Riemann surface of the square root function, which has two points for every z on the flat z plane: one for each root.]

All these visualizations of Riemann surfaces (and the projections on the z and w plane that come with them) have their limits, however. As mentioned in my previous post, one major drawback is that we cannot distinguish the two distinct roots for all of the complex numbers z on the negative real axis (i.e. all the points z = reiθ for which θ is equal to ±π, ±3π,…). Indeed, the real part of w = z1/2, i.e. Re(w), is equal to zero for both roots there, and so, when looking at the plot, you may get the impression that we get the same values for w there, so that the two distinct roots of z (i.e. w1 and w2) coincide. They don’t: the imaginary part of w1 and w2 is different there, so we need to look at the imaginary part of w too. Just to be clear on this: on the diagram above, it’s where the two sheets of the Riemann surface cross each other, so it looks like there’s an infinite number of branch points, which is not the case: the only branch point is the origin.
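You can verify this numerically: on the negative real axis the real parts of the two roots coincide (both are zero) while the imaginary parts don’t, and on the positive real axis it’s the other way around. A little Python sketch (the value r = 4 is arbitrary):

```python
import math

def roots(r, theta):
    """The two square-root values w1, w2 of z = r·e^(iθ)."""
    w1 = math.sqrt(r) * complex(math.cos(theta / 2), math.sin(theta / 2))
    return w1, -w1

# Negative real axis (θ = π): real parts of both roots vanish, imaginary parts differ
w1, w2 = roots(4.0, math.pi)
assert abs(w1.real) < 1e-12 and abs(w2.real) < 1e-12
assert abs(w1.imag - w2.imag) > 1.0

# Positive real axis (θ = 0): imaginary parts vanish, real parts differ
w1, w2 = roots(4.0, 0.0)
assert abs(w1.imag) < 1e-12 and abs(w2.imag) < 1e-12
assert abs(w1.real - w2.real) > 1.0
```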

So we need to look at the imaginary part too. However, if we look at the imaginary part separately, we will have a similar problem on the positive real axis: the imaginary part of the two roots coincides there, i.e. Im(w) is zero, for both roots, for all the points z = reiθ for which θ = 0, 2π, 4π,… That’s what is represented in the graph below.

branch point

The graph above is a cross-section, so to say, of the Riemann surface  w = z1/2 that is orthogonal to the z plane. So we’re looking at the x axis from -∞ to +∞ along the y axis so to say. The point at the center of this graph is the origin obviously, which is the branch point of our function w = z1/2, and so the y axis goes through it but we can’t see it because we’re looking along that axis (so the y-axis is perpendicular to the cross-section).

This graph is one I made as I tried to get some better understanding of what a ‘branch point’ actually is. Indeed, the graph makes it perfectly clear – I hope 🙂 – that we really have to choose between one of the two branches of the function when we’re at the origin, i.e. the branch point. Indeed, we can pick either the n = 0 branch or the n = ±1 branch of the function, and then we can go in any direction we want as we’re traveling on that Riemann surface, but our initial choice has consequences: as Dr. Teleman (whom I’ll introduce later) puts it, “any choice of w, followed continuously around the origin, leads, automatically, to the opposite choice as we turn around it.” For example, if we take the w1 branch (or the ‘positive’ root as I call it – even if complex numbers cannot be grouped into ‘positive’ or ‘negative’ numbers), then we’ll encounter the negative root w2 after one loop around the origin. Well… Let me immediately qualify that statement: we will still be traveling on the w1 branch, but the value of w1 will be the opposite or negative value of our original w1 as we add 2π to arg z = θ. Mutatis mutandis, we’re in a similar situation if we’d take the w2 branch. Does that make sense?

Perhaps not, but I can’t explain it any better. In any case, the gist of the matter is that we can switch from the w1 branch to the w2 branch at the origin, and also note that we can only switch like that there, at the branch point itself: we can’t switch anywhere else. So there, at the branch point, we have some kind of ‘discontinuity’, in the sense that we have a genuine choice between two alternatives.
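The sign flip after a full loop around the branch point is easy to check numerically: just follow the square root continuously – i.e. at each step, pick whichever of the two roots is closest to the previous value – along the unit circle. A Python sketch:

```python
import cmath

w = cmath.sqrt(complex(1, 0))        # start at z = 1, on the 'positive' branch w1
w_start = w
steps = 1000
for n in range(1, steps + 1):
    theta = 2 * cmath.pi * n / steps
    z = cmath.exp(1j * theta)        # travel once around the unit circle
    candidate = cmath.sqrt(z)        # the principal root at this point
    # continuity: pick whichever of ±candidate is closest to the previous value
    w = candidate if abs(candidate - w) < abs(candidate + w) else -candidate

# Back at z = 1, but on the opposite branch: w = -w_start
assert abs(w + w_start) < 1e-9
```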

That’s, of course, linked to the fact that one cannot define the value of our function at the origin: 0 is not part of the domain of the (complex) square root function, or of the (complex) logarithmic function in general (remember that our square root function is just a special case of the log function) and, hence, the function is effectively not analytic there. So it’s like what I said about the Riemann surface for the log z function: at the origin, we can ‘take the elevator’ to any other level, so to say, instead of having to walk up and down that spiral ramp to get there. So we can add or subtract 2nπ to θ without any sweat.

So here it’s the same. However, because it’s the square root function, we’ll only see two buttons to choose from in that elevator, and our choice will determine whether we get out at level Θ = α (i.e. the w1 branch) or at level Θ = α ± π (i.e. the w2 branch). Of course, you can try to push both buttons at the same time but then I assume that the elevator will make some kind of random choice for you. 🙂 Also note that the elevator in the log z parking tower will probably have a numpad instead of buttons, because there’s infinitely many levels to choose from. 🙂

OK. Let’s stop joking. The idea I want to convey is that there’s a choice here. The choice made determines whether you’re going to be looking at the ‘positive’ roots of z, i.e. √r(cosΘ + isinΘ), or at the ‘negative’ roots of z, i.e. √r(cos(Θ±π) + isin(Θ±π)), or, equivalently (because Θ = θ/2), whether you’re going to be looking at the values of w for θ going from 0 to 2π, or the values of w for θ going from 2π to 4π.
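In other words, picking the n = ±1 branch at θ amounts to picking the n = 0 branch at θ + 2π. A minimal check (the value of θ is arbitrary):

```python
import cmath, math

def w(theta, r=1.0, n=0):
    """The n-branch of z^(1/2): √r·e^(i(θ/2 + nπ))."""
    return math.sqrt(r) * cmath.exp(1j * (theta / 2 + n * math.pi))

theta = 1.2  # an arbitrary angle
# The n = ±1 branch at θ is the n = 0 branch at θ + 2π ...
assert abs(w(theta, n=1) - w(theta + 2 * math.pi, n=0)) < 1e-12
# ... and the two branches are each other's opposite
assert abs(w(theta, n=1) + w(theta, n=0)) < 1e-12
```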

Let’s try to imagine the full picture and think about how we could superimpose the graphs of both the real and imaginary part of w. The illustration below should help us to do so: the blue and red image should be shifted over and across each other until they overlap completely. [I am not doing it here because I’d have to make one surface transparent so you can see the other one behind – and that’s too much trouble now. In addition, it’s good mental exercise for you to imagine the full picture in your head.]  

Real and imaginary sheets

It is important to remember here that the origin of the complex z plane, in both images, is at the center of these cuboids (or ‘rectangular prisms’ if you prefer that term). That’s what the little red arrow is pointing at in both images and, hence, the final graph, consisting of the two superimposed surfaces (the imaginary and the real one), should also have one branch point only, i.e. at the origin.

[…]

I guess I am really boring my imaginary reader here by being so lengthy but so there’s a reason: when I first tried to imagine that ‘full picture’, I kept thinking there was some kind of problem along the whole x axis, instead of at the branch point only. Indeed, these two plots suggest that we have two or even four separate sheets here that are ‘joined at the hip’ so to say (or glued or welded or stitched together – whatever you want to call it) along the real axis (i.e. the x axis of the z plane). In such (erroneous) view, we’d have two sheets above the complex z plane (one representing the imaginary values of √z and one the real part) and two below it (again one with the values of the imaginary part of √z and one representing the values of the real part). All of these ‘sheets’ have a sharp fold on the x axis indeed (everywhere else they are smooth), and that’s where they join in this (erroneous) view of things.

Indeed, such thinking is stupid and leads nowhere: the real and imaginary parts should always be considered together, and so there’s no such thing as two or four sheets really: there is only one Riemann surface with two (overlapping) branches. You should also note that where these branches start or end is quite arbitrary, because we can pick any angle α to define the starting point of a branch. There is also only one branch point. So there is no ‘line’ separating the Riemann surface into two separate pieces. There is only that branch point at the origin, and there we decide what branch of the function we’re going to look at: the n = 0 branch (i.e. we consider arg w = Θ to be equal to θ/2) or the n = ±1 branch (i.e. we take the Θ = θ/2 ± π equation to calculate the values for w = z1/2).

OK. Enough of these visualizations which, as I told you above already, are helpful only to some extent. Is there any other way of approaching the topic?

Of course there is. When trying to understand these Riemann surfaces (which is not easy when you read Penrose, because he immediately jumps to Riemann surfaces involving three or more branch points, which makes things a lot more complicated), I found it useful to look for a more formal mathematical definition of a Riemann surface. I found such a definition in a series of lectures by a certain Dr. C. Teleman (Berkeley, Lectures on Riemann surfaces, 2003). He defines Riemann surfaces as graphs too, or surfaces indeed, just like Penrose and others, but, in contrast, he makes it very clear, right from the outset, that it’s really the association (i.e. the relation) between z and w which counts, not these rather silly attempts to plot all these surfaces in three-dimensional space.

Indeed, according to Dr. Teleman’s introduction to the topic, a Riemann surface S is, quite simply, a set of ‘points’ (z, w) in the two-dimensional complex space C2 = C x C (so they’re not your typical points in the complex plane but points with two complex dimensions), such that w and z are related with each other by a holomorphic function w = f(z), which itself defines the Riemann surface. The same author also usefully points out that this holomorphic function is usually written in its implicit form, i.e. as P(z, w) = 0 (in the case of a polynomial function) or, more generally, as F(z, w) = 0.
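For our simple square root function, that implicit form is P(z, w) = w2 – z = 0, and the two points (z, w1) and (z, w2) of the surface both satisfy it, even though they project onto the same point z of the z plane. A quick Python check:

```python
import cmath

def P(z, w):
    """Implicit form of the square root surface: P(z, w) = w² - z."""
    return w**2 - z

# Two points (z, w1) and (z, w2) of the surface, lying over the same point z
z = complex(3, 4)
w1 = cmath.sqrt(z)
w2 = -w1
assert abs(P(z, w1)) < 1e-12
assert abs(P(z, w2)) < 1e-12
```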

There are two things you should note here. The first one is that this eminent professor suggests that we should not waste too much time trying to visualize things in the three-dimensional R3 = R x R x R space: Riemann surfaces are complex manifolds and so we should tackle them in their own space, i.e. the complex C2 space. The second thing is linked to the first: we should get away from these visualizations because these Riemann surfaces are usually much, much more complicated than that of a simple (complex) square root function and, hence, are usually not easy to deal with. That’s quite evident when we consider the general form of the complex-analytical (polynomial) P(z, w) function above, which is P(z, w) = wn + pn-1(z)wn-1 + … + p1(z)w + p0(z), with the pk(z) coefficients here being polynomials in z themselves.

That being said, Dr. Teleman immediately gives a ‘very simple’ example of such a function himself, namely w = [(z2 – 1)(z2 – k2)]1/2. Huh? If that’s regarded as ‘very simple’, you may wonder what follows. Well, just look him up I’d say: I only read the first lecture and so there are fourteen more. 🙂

But he’s actually right: this function is not very difficult. In essence, we’ve got our square root function here again (because of the 1/2 exponent), but with four branch points this time, namely ±1 and ±k (i.e. the positive and negative square roots of 1 and k2 respectively, cf. the (z2 – 1) and (z2 – k2) factors in the argument of this function), instead of only one (the origin).
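A quick numerical check of those branch points (with an arbitrary illustrative value for k): the branch points are exactly the zeros of the radicand, and everywhere else the function takes two opposite values, just like our ‘core’ square root.

```python
k = 0.5  # an illustrative value for k (any k other than 0 and ±1 will do)

def radicand(z):
    """What's under the square root in w = [(z² - 1)(z² - k²)]^(1/2)."""
    return (z**2 - 1) * (z**2 - k**2)

# The four branch points are the zeros of the radicand: ±1 and ±k
for b in (1.0, -1.0, k, -k):
    assert abs(radicand(b)) < 1e-12

# Anywhere else, w takes two values, opposite to each other
z = complex(2, 1)
w1 = radicand(z) ** 0.5
w2 = -w1
assert abs(w1**2 - radicand(z)) < 1e-10
assert abs(w2**2 - radicand(z)) < 1e-10
```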

Despite the ‘simplicity’ of this function, Dr. Teleman notes that “we cannot identify this shape by projection (or in any other way) with the z-plane or the w-plane”, which confirms the above: Riemann surfaces are usually not simple and, hence, these ‘visualizations’ don’t help all that much. However, while not ‘identifying the shape’ of this particular square root function, Dr. Teleman does make the following drawing of the branch points:

Compactification 1

This is also some kind of cross-section of the Riemann surface, just like the one I made above for the ‘super-simple’ w = √z function: the dotted lines represent the imaginary part of w = [(z2 – 1)(z2 – k2)]1/2, and the non-dotted lines are the real part of the (double-valued) w function. So that’s like ‘my’ graph indeed, except that we’ve got four branch points here, so we can make a choice between one of the two branches at each of them.

[Note that one obvious difficulty in the interpretation of Dr. Teleman’s little graph above is that we should not assume that the complex numbers k and –k are actually lying on the same line as 1 and –1 (i.e. the real line). Indeed, k and –k are just standard complex numbers and most complex numbers do not lie on the real line. While that makes the interpretation of that simple graph of Dr. Teleman somewhat tricky, it’s probably less misleading than all these fancy 3D graphs. In order to proceed, we can either assume that this z axis is really some polygonal line, representing line segments between these four branch points or, even better, I think we should just accept the fact that we’re looking at the z plane here along the z plane itself, so we can only see it as a line and we shouldn’t bother about where these points k and –k are located. In fact, their absolute value may actually be smaller than 1, in which case we’d probably want to change the order of the branch points in Dr. Teleman’s little graph.]

Dr. Teleman doesn’t dwell too long on this graph and, just like Penrose, immediately proceeds to what’s referred to as the compactification of the Riemann space, so that’s this ‘transformation’ of this complex surface into a donut (or a torus as it’s called in mathematics). So how does one actually go about that?

Well… Dr. Teleman doesn’t waste too many words on that. In fact, he’s quite cryptic, although he actually does provide much more of an explanation than Penrose does (Penrose’s treatment of the matter is really hocus-pocus I feel). So let me start with a small introduction of my own once again.

I guess it all starts with the ‘compactification’ of the real line, which is visualized below: we reduce the notion of infinity to a ‘point’ (this ‘point’ is represented by the symbol ∞ without a plus or minus sign) that bridges the two ‘ends’ of the real line (i.e. the positive and negative real half-line). Like that, we can roll up single lines and, by extension, the whole complex plane (just imagine rolling up the infinite number of lines that make up the plane I’d say :-)). So then we’ve got an infinitely long cylinder.

374px-Real_projective_line
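The compactification of the real line in the picture above can be written down explicitly as a stereographic projection onto the unit circle: both ‘ends’ of the line land on one and the same point, i.e. the ‘point’ ∞. A small Python sketch (projecting from the ‘north pole’ (0, 1) of the unit circle; other conventions exist):

```python
def to_circle(t):
    """Stereographic projection of the real number t onto the unit circle,
    projecting from the 'north pole' (0, 1)."""
    d = t * t + 1
    return (2 * t / d, (t * t - 1) / d)

# Both 'ends' of the real line approach the very same point (0, 1): the point ∞
xp, yp = to_circle(1e9)
xm, ym = to_circle(-1e9)
assert abs(xp) < 1e-6 and abs(yp - 1) < 1e-6
assert abs(xm) < 1e-6 and abs(ym - 1) < 1e-6

# And every image point really lies on the unit circle
for t in (-3.0, 0.0, 0.7, 42.0):
    x, y = to_circle(t)
    assert abs(x * x + y * y - 1) < 1e-12
```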

But why would we want to roll up a line, or the whole plane for that matter? Well… I don’t know, but I assume there are some good reasons out there: perhaps we actually do have some thing going round and round, and so then it’s probably better to transform our ‘real line’ domain into a ‘real circle’ domain. The illustration below shows how it works for a finite sheet, and I’d also recommend my imaginary reader to have a look at the Riemann Project website (http://science.larouchepac.com/riemann/page/23), where you’ll find some nice animations (but do download Wolfram’s browser plugin if your Internet connection is slow: downloading the video takes time). One of the animations shows how a torus is, indeed, ideally suited as a space for a phenomenon characterized by two “independent types of periodicity”, not unlike the cylinder, which is the ‘natural space’ for “motion marked by a single periodicity”.

plane to torus

However, as I explain in a note below this post, the more natural way to roll or wrap up a sheet or a plane is to wrap it around a sphere, rather than trying to create a donut. Indeed, if we rolled the infinite plane up into a donut, we’d still have a line representing infinity (see below), and so it looks quite ugly: if you’re tying ends up, it’s better you tie all of them up, and that’s what you do when you wrap the plane around a sphere, instead of a torus.

From plane to torus

OK. Enough on planes. Back to our Riemann surface. Because the square root function w has two values for each z, we cannot make a simple sphere: we have to make a torus. That’s because a sphere is, topologically, nothing but the plane plus the single point at infinity – the two are equivalent, so to say – and so a sphere, just like the plane, can only accommodate a single-valued picture. In contrast, a double-valued function has two ‘dimensions’ so to say and, hence, we have to transform the Riemann surface into something which accommodates that, and so that’s a torus (or a coffee cup :-)). In topological jargon, a torus has genus one, while the complex plane (and the Riemann sphere) has genus zero.

[Please do note that this will be the case regardless of the number of branch points. Indeed, Penrose gives the example of the function w = (1 – z3)1/2, which has three branch points, namely the three zeros of the 1 – z3 expression (these three zeros are obviously the three cube roots of unity). However, ‘his’ Riemann surface is also the Riemann surface of a square root function (albeit one with a more complicated form than the ‘core’ w = z1/2 example) and, hence, he also wraps it up as a donut indeed, instead of a sphere or something else.]

I guess that you, my imaginary reader, have stopped reading all of this nonsense. If you haven’t, you’re probably thinking: why don’t we just do it? How does it work? What’s the secret?

Frankly, the illustration in Penrose’s Road to Reality (i.e. Fig. 8.2 on p. 137) is totally useless in terms of understanding how it’s being done really. In contrast, Dr. Teleman is somewhat more explicit and so I’ll follow him here as much as I can while I try to make sense of it (which is not as easy as you might think). 

The short story is the following: Dr. Teleman first makes two ‘cuts’ (or ‘slits’) in the z plane, using the four branch points as the points where these ‘cuts’ start and end. He then uses these cuts to form two cylinders, and then he joins the ends of these cylinders to form that torus. That’s it. The drawings below illustrate the proceedings. 

Cuts

Compactification 3

Huh? OK. You’re right: the short story is not correct. Let’s go for the full story. In order to be fair to Dr. Teleman, I will literally copy all what he writes on what is illustrated above, and add my personal comments and interpretations in square brackets (so when you see those square brackets, that’s [me] :-)). So this is what Dr. Teleman has to say about it:

The function w = [(z2 – 1)(z2 – k2)]1/2 behaves like the [simple] square root [function] near ±1 and ±k. The important thing is that there is no continuous single-valued choice of w near these points [shouldn’t he say ‘on’ these points, instead of ‘near’?]: any choice of w, followed continuously round any of the four points, leads to the opposite choice upon return.

[The formulation may sound a bit weird, but it’s the same as what happens on the simple z1/2 surface: when we’re on one of the two branches, the argument of w changes only gradually and, going around the origin, starting from one root of z (let’s say the ‘positive’ root w1), we arrive, after one full loop around the origin on the z plane (i.e. we add 2π to arg z = θ), at the opposite value, i.e. the ‘negative’ root w2 = –w1.] 

Defining a continuous branch for the function necessitates some cuts. The simplest way is to remove the open line segments joining 1 with k and –1 with –k. On the complement of these segments [read: everywhere else on the z plane], we can make a continuous choice of w, which gives an analytic function (for z ≠ ±1, ±k). The other branch of the graph is obtained by a global change of sign. [Yes. That’s obvious: the two roots are each other’s opposite (w2 = –w1) and so, yes, the two branches are, quite simply, just each other’s opposite.]

Thus, ignoring the cut intervals for a moment, the graph of w breaks up into two pieces, each of which can be identified, via projection, with the z-plane minus two intervals (see Fig. 1.4 above). [These ‘projections’ are part and parcel of this transformation business it seems. I’ve encountered more of that stuff and so, yes, I am following you, Dr. Teleman!]

Now, over the said intervals [i.e. between the branch points], the function also takes two values, except at the endpoints where those coincide. [That’s true: even if the real parts of the two roots are the same (like on the negative real axis for our z1/2 example), the imaginary parts are different and, hence, the roots are different for points between the various branch points, and vice versa of course. This is actually one of the reasons why I don’t like Penrose’s illustration on this matter: his illustration suggests that this is not the case.]

To understand how to assemble the two branches of the graph, recall that the value of w jumps to its negative as we cross the cuts. [At first, I did not get this, but it’s the consequence of Dr. Teleman breaking up the graph into two pieces. So he separates the two branches indeed, and he does so at the ‘slits’ he made, so that’s between the branch points. It follows that the value of w will indeed jump to its opposite value as we cross them, because we’re jumping onto the other branch there.]

Thus, if we start on the upper sheet and travel that route, we find ourselves exiting on the lower sheet. [That’s the little arrows on these cuts.] Thus, (a) the far edges of the cuts on the top sheet must be identified with the near edges of the cuts on the lower sheet; (b) the near edges of the cuts on the top sheet must be identified with the far edges on the lower sheet; (c) matching endpoints are identified; (d) there are no other identifications. [Point (d) seems to be somewhat silly but I get it: here he’s just saying that we can’t do whatever we want: if we glue or stitch or weld all of these patches of space together (or should I say copies of patches of space?), we need to make sure that the points on the edges of these patches are the same indeed.]

A moment’s thought will convince us that we cannot do all this in R3, with the sheets positioned as depicted, without introducing spurious crossings. [That’s why Brown and Churchill say it’s ‘physically impossible.’] To rescue something, we flip the bottom sheet about the real axis.  

[Wow! So that’s the trick! That’s the secret – or at least one of them! Flipping the bottom sheet about the real axis means reflecting it, i.e. replacing every point by its complex conjugate, so the imaginary values on that sheet switch sign and the matching edges line up. Now that’s a smart move!] 

The matching edges of the cuts are now aligned, and we can perform the identifications by stretching each of the surfaces around the cut to pull out a tube. We obtain a picture representing two planes (ignore the boundaries) joined by two tubes (see Fig. 1.5.a above).

[Hey! That’s like the donut-to-coffee-cup animation, isn’t it? Pulling out a tube? Does that preserve angles and all that? Remember it should!]

For another look at this surface, recall that the function z → R2/z identifies the exterior of the circle ¦z¦ = R (i.e. the region ¦z¦ > R) with the punctured disc z: ¦z¦ < R and z ≠ 0 (it’s a punctured disc, so its center is not part of the disc). Using that, we can pull the exteriors of the discs, missing from the picture above, into the picture as punctured discs, and obtain a torus with two missing points as the definitive form of our Riemann surface (see Fig. 1.5.b).

[Dr. Teleman is doing another hocus-pocus thing here. So we have those tubes with an infinite plane hanging on them, and so it’s obvious we just can’t glue these two infinite planes together because it wouldn’t look like a donut 🙂. So we first need to transform them into something more manageable, and so that’s the punctured discs he’s describing. I must admit I don’t quite follow him here, but I can sort of sense – a little bit at least – what’s going on.] 
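Whatever the hocus-pocus, the z → R2/z identification itself is easy to check: a point outside the disc of radius R lands inside it, and never on the center (because R2/z can’t be zero). In Python:

```python
# z → R²/z sends the exterior of the disc |z| ≤ R into the punctured disc 0 < |z| < R
R = 1.0
for z in (complex(5, 0), complex(0, -3), complex(100, 100)):
    assert abs(z) > R                  # z lies in the exterior
    image = R**2 / z
    assert 0 < abs(image) < R          # its image lies in the punctured disc
```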

[…]

Phew! Yeah, I know. My imaginary reader will surely feel that I don’t have a clue of what’s going on, and that I am actually not quite ready for all of this high-brow stuff – or not yet at least. He or she is right: my understanding of it all is rather superficial at the moment and, frankly, I wish either Penrose or Teleman would explain this compactification thing somewhat better. I also would like them to explain why we actually need to do this compactification thing, why it’s relevant for the real world.

Well… I guess I can only try to move forward as good as I can. I’ll keep you/myself posted.

Note: As mentioned above, there is more than one way to roll or wrap up the complex plane, and the most natural way of doing this is to do it around a sphere, i.e. the so-called Riemann sphere, which is illustrated below. This particular ‘compactification’ exercise is equivalent to a so-called stereographic projection: it establishes a one-to-one relationship between all points on the sphere and all points of the so-called extended complex plane, which is the complex plane plus the ‘point’ at infinity (see my explanation on the ‘compactification’ of the real line above).

Riemann_sphereStereographic_projection_in_3D
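The stereographic projection in the illustration can be written down explicitly. The sketch below uses the unit sphere centered at the origin, projecting from the ‘north pole’ (0, 0, 1) (other conventions put the sphere on top of the plane, so the formulas may differ by a constant):

```python
def to_sphere(z):
    """Stereographic projection of z onto the unit sphere, projecting
    from the north pole (0, 0, 1)."""
    x, y = z.real, z.imag
    d = 1 + x * x + y * y
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1) / d)

# Every image point lies on the unit sphere ...
for z in (complex(0, 0), complex(1, 2), complex(-500, 300)):
    X, Y, Z = to_sphere(z)
    assert abs(X * X + Y * Y + Z * Z - 1) < 1e-9

# ... and points far out in the plane all crowd towards the north pole (0, 0, 1),
# i.e. the single point at infinity of the extended complex plane
X, Y, Z = to_sphere(complex(1e8, -1e8))
assert abs(Z - 1) < 1e-6
```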

But so Riemann surfaces are associated with complex-analytic functions, right? So what’s the function? Well… The function with which the Riemann sphere is associated is w = 1/z. [1/z is equal to z*/¦z¦2, with z* = x – iy, i.e. the complex conjugate of z = x + iy, and ¦z¦ the modulus or absolute value of z, and so you’ll recognize the formulas for the stereographic projection here indeed.]
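That identity is easy to verify numerically (note the squared modulus in the denominator):

```python
# Check the identity 1/z = z*/|z|² for an arbitrary sample point
z = complex(3, -4)                     # z = x + iy, so z* = x - iy
assert abs(1 / z - z.conjugate() / abs(z) ** 2) < 1e-15
```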

OK. So what? Well… Nothing much. This mapping from the complex z plane to the complex w plane is conformal indeed, i.e. it preserves angles (but not areas) and whatever else comes with complex analyticity. However, it’s not as straightforward as Penrose suggests. The image below (taken from Brown and Churchill) shows what happens to lines parallel to the x and y axis in the z plane respectively: they become circles in the w plane. So this particular function actually does map circles to circles (which is what Möbius transformations such as this one have to do) but only if we think of straight lines as being particular cases of circles, namely circles “of infinite radius”, as Penrose puts it.

inverse of z function
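We can check this line-to-circle business numerically. Working out the algebra (my own calculation, not Brown and Churchill’s), the image of the horizontal line y = c under w = 1/z turns out to be the circle of radius 1/(2¦c¦) centered at –i/(2c):

```python
# Image of the horizontal line y = c under w = 1/z: a circle of radius
# 1/(2|c|) centred at -i/(2c) in the w plane
c = 1.0
centre = complex(0, -1 / (2 * c))
radius = 1 / (2 * abs(c))
for x in (-100.0, -2.0, -0.5, 0.0, 0.5, 2.0, 100.0):
    w = 1 / complex(x, c)              # image of the point x + ic
    assert abs(abs(w - centre) - radius) < 1e-9
```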

Frankly, it is quite amazing what Penrose expects in terms of mental ‘agility’ of the reader. Brown and Churchill are much more formal in their approach (lots of symbols and equations I mean, and lots of mathematical proofs) but, to be honest, I find their stuff easier to read, even if their textbook is a full-blown graduate level course in complex analysis.

I’ll conclude this post here with two more graphs: they give an idea of how the Cartesian and polar coordinate spaces can be mapped to the Riemann sphere. In both cases, the grid on the plane appears distorted on the sphere: the grid lines are still perpendicular, but the areas of the grid squares shrink as they approach the ‘north pole’.

CartesianStereoProj

PolarStereoProj

The mathematical equations for the stereographic projection, and the illustration above, suggest that the w = 1/z function is basically just another way to transform one coordinate system into another. But then I must admit there is a lot of finer print that I don’t understand – as yet that is. It’s sad that Penrose doesn’t help out very much here.

Riemann surfaces (II)

This is my second post on Riemann surfaces, so they must be important. [At least I hope so, because it takes quite some time to understand them. :-)]

From my first post on this topic, you may or may not remember that a Riemann surface is supposed to solve the problem of multivalued complex functions such as, for instance, the complex logarithmic function (log z = ln r + i(θ + 2nπ)) or the complex exponential function (zc = ec log z). [Note that the problem of multivaluedness for the (complex) exponential function is a direct consequence of its definition in terms of the (complex) logarithmic function.]
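Just to recall what that multivaluedness looks like in practice: every one of the infinitely many values ln r + i(θ + 2nπ) is a legitimate logarithm of z, in the sense that exponentiating it gives z back. A quick Python check:

```python
import cmath, math

z = complex(-1, 1)
r, theta = abs(z), cmath.phase(z)

# A handful of the infinitely many values of log z = ln r + i(θ + 2nπ)
values = [complex(math.log(r), theta + 2 * math.pi * n) for n in (-2, -1, 0, 1, 2)]

# Exponentiating any one of them gives back the very same z
for v in values:
    assert abs(cmath.exp(v) - z) < 1e-12
```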

In that same post, I also wrote that it all looked somewhat fishy to me: we first use the function causing the problem of multivaluedness to construct a Riemann surface, and then we use that very same surface as a domain for the function itself to solve the problem (i.e. to reduce the function to a single-valued (analytic) one). Penrose does not have any issues with that though. In Chapter 8 (yes, that’s where I am right now: I am moving very slowly on his Road to Reality, as it’s been three months of reading now, and there are 34 chapters!), he writes that  “Complex (analytic) functions have a mind of their own, and decide themselves what their domain should be, irrespective of the region of the complex plane which we ourselves may initially have allotted to it. While we may regard the function’s domain to be represented by the Riemann surface associated with the function, the domain is not given ahead of time: it is the explicit form of the function itself that tells us which Riemann surface the domain actually is.” 

Let me retrieve the graph of the Riemannian domain for the log z function once more:

[Figure: Riemann surface of the log z function]

For each point z in the complex plane (and we can represent z both with rectangular as well as polar coordinates: z = x + iy = reiθ), we have an infinite number of log z values: one for each value of n in the log z = ln r + i(θ + 2nπ) expression (n = 0, ±1, ±2, ±3,…). So what we do when we promote this Riemann surface to a domain for the log z function is equivalent to saying that point z is actually not one single point z with modulus r and argument θ + 2nπ, but an infinite collection of points: these points all have the same modulus ¦z¦ = r but we distinguish the various ‘representations’ of z by treating θ, θ ± 2π, θ ± 4π, θ ± 6π, etcetera, as separate argument values as we go up or down on that spiral ramp. So that is what is represented by that infinite number of sheets, which are separated from each other by a vertical distance of 2π. These sheets are all connected at or through the origin (at which the log z function is undefined: therefore, the origin is not part of the domain), which is the branch point for this function. Let me copy some formal language on the construction of that surface here:

“We treat the z plane, with the origin deleted, as a thin sheet R0 which is cut along the positive half of the real axis. On that sheet, let θ range from 0 to 2π. Let a second sheet R1 be cut in the same way and placed in front of the sheet R0. The lower edge of the slit in R0 is then joined to the upper edge of the slit in R1. On R1, the angle θ ranges from 2π to 4π; so, when z is represented by a point on R1, the imaginary component of log z ranges from 2π to 4π.” And then we repeat the whole thing, of course: “A sheet R2 is then cut in the same way and placed in front of R1. The lower edge of the slit in R1 is joined to the upper edge of the slit in this new sheet, and similarly for sheets R3, R4,… Sheets R-1, R-2, R-3,… are constructed in like manner.” (Brown and Churchill, Complex Variables and Applications, 7th edition, pp. 335–336)

The key phrase above for me is this “when z is represented by a point on R1“, because that’s what it is really: we have an infinite number of representations of z here, namely one representation of z for each branch of the log z function. So, as n = 0, ±1, ±2, ±3 etcetera, we have an infinite number of them indeed. You’ll also remember that each branch covers a range from some arbitrary angle α to α + 2π. Imagine a continuous curve around the origin on this Riemann surface: as we move around, the angle of z changes from 0 to 2π on sheet R0, and then from 2π to 4π on sheet R1, and so on and so on.

The illustration above also illustrates the meaning of a branch point. Imagine yourself walking on that surface and approaching the origin, from any direction really. At the origin itself, you can choose what to do: either you take the elevator up or down to some other level or, else, the elevator doesn’t work and so then you have to walk up or down that ramp to get to another level. If you choose to walk along the ramp, the angle θ changes gradually or, to put it in mathematical terms, in a continuous way. However, if you took the elevator and got out at some other level, you’ll find that you’ve literally ‘jumped’ one or more levels. Indeed, remember that log z = ln r + i(θ + 2nπ), and so ln r, the horizontal distance from the origin, didn’t change, but you did add some multiple of 2π to the vertical distance, i.e. the imaginary part of the log z value.

Let us now construct a Riemann surface for some other multiple-valued functions. Let’s keep it simple and start with the square root of z, so c = 1/2, which is nothing but a specific example of the complex exponential function zc = ec log z: we just take a real number for c here. In fact, we’re taking a very simple rational number value for c: 1/2 = 0.5. Taking the square, cube, fourth or nth root of a complex number is indeed nothing but a special case of the complex exponential function. The illustration below (taken from Wikipedia) shows us the Riemann surface for the square root function.

[Figure: Riemann surface of the square root function w = √z]

As you can see, the spiraling surface turns back into itself after two turns. So what’s going on here? Well… Our multivalued function here does not have an infinite number of values for each z: it has only two, namely √r ei(θ/2) and √r ei(θ/2 + π). But what’s that? We just said that the log function – of which this function is a special case – had an infinite number of values? Well… To be somewhat more precise:  z1/2 actually does have an infinite number of values for each z (just like any other complex exponential function), but it has only two values that are different from each other. All the others coincide with one of the two principal ones. Indeed, we can write the following:

w = √z = z1/2 = e(1/2) log z = e(1/2)[ln r + i(θ + 2nπ)] = r1/2 ei(θ/2 + nπ) = √r ei(θ/2 + nπ)

(n = 0, ±1,  ±2,  ±3,…)

For n = 0, this expression reduces to z1/2 = √r eiθ/2. For n = ±1, we have z1/2 = √r ei(θ/2 + π), which is different from the value we had for n = 0. In fact, it’s easy to see that this second root is the exact opposite of the first root: √r ei(θ/2 + π) = √r eiθ/2eiπ = – √r eiθ/2. However, for n = 2, we have z1/2 = √r ei(θ/2 + 2π), and so that’s the same value (z1/2 = √r eiθ/2) as for n = 0. Indeed, taking n = 2 amounts to adding 2π to the argument of w and so we get the same value as the one we found for n = 0. [As for the plus or minus sign, note that, for n = –1, we have z1/2 = √r ei(θ/2 – π) = √r ei(θ/2 – π + 2π) = √r ei(θ/2 + π) and, hence, the plus or minus sign for n does not make any difference indeed.]
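
By the way, all of this is easy to check numerically. The little Python sketch below is my own addition (nothing like it appears in Penrose or in Brown and Churchill, of course): it evaluates √r ei(θ/2 + nπ) for n = 0, 1 and 2 and verifies that n = 1 gives the opposite root while n = 2 just repeats n = 0.

```python
import cmath, math

# The two square roots of z = r·e^(iθ): w_n = √r·e^(i(θ/2 + nπ)).
# Only n = 0 and n = 1 give distinct values; n = 2 repeats n = 0.
def sqrt_branch(z, n):
    r, theta = abs(z), cmath.phase(z)   # phase returns θ in (−π, π]
    return math.sqrt(r) * cmath.exp(1j * (theta / 2 + n * math.pi))

z = 3 + 4j
w0, w1, w2 = (sqrt_branch(z, n) for n in (0, 1, 2))

assert abs(w1 + w0) < 1e-12       # n = 1 gives the opposite root
assert abs(w2 - w0) < 1e-12       # n = 2 falls back on n = 0
assert abs(w0 ** 2 - z) < 1e-12   # both roots square back to z
assert abs(w1 ** 2 - z) < 1e-12
```

Feel free to try other values of z: the asserts will hold for any non-zero complex number.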

In short, as mentioned above, we have only two different values for w = √z = z1/2 and so we have to construct two sheets only, instead of an infinite number of them, like we had to do for the log z function. To be more precise, because the sheet for n = ±2 will be the same sheet as for n = 0, we need to construct one sheet for n = 0 and one sheet for n = ±1, and that’s what is shown above: the surface has two sheets (one for each branch of the function) and so if we make two turns around the origin (one on each sheet), we’re back at the same point. In other words, while we have a one-to-two relationship between each point z on the complex plane and the two values z1/2 for this point, we’ve got a one-to-one relationship between every value of z1/2 and each point on this surface.

For ease of reference in future discussions, I will introduce a personal nonsensical convention here: I will refer to (i) the n = 0 case as the ‘positive’ root, or as w1, i.e. the ‘first’ root, and to (ii) the n = ±1 case as the ‘negative’ root, or w2, i.e. the ‘second’ root. The convention is nonsensical because there is no such thing as positive or negative complex numbers: only their real and imaginary parts (i.e. real numbers) have a sign. Also, these roots do not have any particular order: there are just two of them, and neither of the two is the ‘principal’ one or so. However, you can see where it comes from: the two roots are each other’s exact opposite, w2 = u2 + iv2 = –w1 = –u1 – iv1. [Note that, of course, we have w1w1 = w12 = w2w2 = w22 = z, but that the product of the two distinct roots is equal to –z. Indeed, w1w2 = w2w1 = √r ei(θ/2)√r ei(θ/2 + π) = r ei(θ+π) = r eiθeiπ = –r eiθ = –z.]
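
Again, that last remark in square brackets is easy to check numerically. A quick sketch of my own:

```python
import cmath, math

# w1 = √r·e^(iθ/2) and w2 = −w1: then w1² = w2² = z, while w1·w2 = −z.
def two_roots(z):
    r, theta = abs(z), cmath.phase(z)
    w1 = math.sqrt(r) * cmath.exp(1j * theta / 2)
    return w1, -w1

z = 1 - 2j
w1, w2 = two_roots(z)
assert abs(w1 * w1 - z) < 1e-12
assert abs(w2 * w2 - z) < 1e-12
assert abs(w1 * w2 + z) < 1e-12   # product of the two distinct roots is −z
```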

What’s the upshot? Well… As I mentioned above already, what’s happening here is that we treat z = rei(θ+2π) as a different ‘point’ than z = reiθ. Why? Well… Because of that square root function. Indeed, we have θ going from 0 to 2π on the first ‘sheet’, and then from 2π to 4π on the second ‘sheet’. Then this second sheet turns back into the first sheet and so then we’re back to normal and, hence, while θ going from 0 to 2π is not the same as θ going from 2π to 4π, θ going from 4π to 6π is the same as θ going from 0 to 2π (in the sense that it does not affect the value of w = z1/2). That’s quite logical indeed because, if we denote w as w = √r eiΘ (with Θ = θ/2 + nπ, and n = 0 or ±1), then it’s clear that arg w = Θ will range from 0 to 2π if (and only if) arg z = θ ranges from 0 to 4π. So as the argument of w makes one loop around the origin – which is what ‘normal’ complex numbers do – the argument of z makes two loops. However, once we’re back at Θ = 2π, then we’ve got the same complex number w again and so then it’s business as usual.

So that will help you to understand why this Riemann surface is said to be two-sheeted, as opposed to the complex plane itself, which consists of a single sheet only.

OK. That should be clear enough. Perhaps one question remains: how do you construct a nice graph like the one above?

Well, look carefully at the shape of it. The vertical distance reflects the real part of √z for n = 0, i.e. √r cos(θ/2). Indeed, the horizontal plane is the complex z plane and so the horizontal axes are x and y respectively (i.e. the x and y coordinates of z = x + iy). So this vertical distance equals 1 when x = 1 and y = 0, and that’s the highest point on the upper half of the top sheet on this plot (i.e. the ‘high-water mark’ on the right-hand (back-)side of the cuboid (or rectangular prism) in which this graph is being plotted). So the argument of z is zero there (θ = 0). The value on the vertical axis then falls from one to zero as we turn counterclockwise on the surface of this first sheet, and that’s consistent with a value for θ being equal to π there (θ = π), because then we have cos(π/2) = 0. Then we go underneath the z plane and make another half turn, so we add another π radians to the value of θ and we arrive at the lowest point on the lower half of the bottom sheet on this plot, right under the point where we started, where θ = 2π and, hence, Re(√z) = √r cos(θ/2) (for n = 0) = cos(2π/2) = cos(π) = –1.

We can then move up again, counterclockwise on the bottom sheet, to arrive once again at the spot where the bottom sheet passes through the top sheet: the value of θ there should be equal to θ = 3π, as we have now made three half turns around the origin from our original point of departure (i.e. we added three times π to our original angle of departure, which was θ = 0) and, hence, we have Re(√z) = √r cos(3π/2) = 0 again. Finally, another half turn brings us back to our point of departure, i.e. the positive half of the real axis, where θ has now reached the value of θ = 4π, i.e. zero plus two times 2π. At that point, the argument of w (i.e. Θ) will have reached the value of 2π, i.e. 4π/2, and so we’re talking the same w = z1/2 as when we started indeed, where we had Θ = θ/2 = 0.
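
If you want to check that little walk numerically, here’s a quick sketch (my own): it evaluates the sheet height Re(√z) = √r cos(θ/2) on the unit circle (so r = 1) at the five landmark angles of the walk above.

```python
import math

# Height of the sheet, Re(√z) = √r·cos(θ/2), on the unit circle (r = 1)
# at the landmark angles θ = 0, π, 2π, 3π and 4π of the walk above.
angles = [0, math.pi, 2 * math.pi, 3 * math.pi, 4 * math.pi]
heights = [math.cos(theta / 2) for theta in angles]
expected = [1.0, 0.0, -1.0, 0.0, 1.0]
assert all(abs(h - e) < 1e-12 for h, e in zip(heights, expected))
```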

What about the imaginary part? Well… Nothing special really (as for now at least): a graph of the imaginary part of √z would be equally easy to establish: Im(√z) = √r sin(θ/2) and, hence, rotating this plot 180 degrees around the vertical axis will do the trick.

Hmm… OK. What’s next? Well… The graphs below show the Riemann surfaces for the third and fourth root of z, i.e. z1/3 and z1/4 respectively. It’s easy to see that we now have three and four sheets respectively (instead of two only), and that we have to take three and four full turns respectively to get back to our starting point, where we should find the same values for z1/3 and z1/4 as where we started. That sounds logical, because we always have three cube roots of any (complex) number, and four fourth roots, so we’d expect to need the same number of sheets to differentiate between these three or four values respectively.

[Figures: Riemann surfaces of the cube root and fourth root functions]

In fact, the table below may help to interpret what’s going on for the cube root function. We have three cube roots of z: w1, w2 and w3. These three values are symmetrical though, as indicated by the red, green and yellow colors in the table below: for example, the value of w for θ ranging from 4π to 6π for the n = 0 case (i.e. w1) is the same as the value of w for θ ranging from 0 to 2π for the n = 1 case (or the n = –2 case, which is equivalent to the n = 1 case).
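
To see that symmetry numerically, here’s a small sketch of my own: the n = 0 branch of the cube root, evaluated one full turn further (at θ + 2π), coincides with the n = 1 branch evaluated at θ.

```python
import cmath, math

# Cube-root branches: w_n(θ) = r^(1/3)·e^(i(θ/3 + 2nπ/3)), n = 0, 1, 2.
def cbrt_branch(r, theta, n):
    return r ** (1 / 3) * cmath.exp(1j * (theta / 3 + 2 * n * math.pi / 3))

r, theta = 2.0, 0.7
# The n = 0 branch one turn further coincides with the n = 1 branch:
assert abs(cbrt_branch(r, theta + 2 * math.pi, 0)
           - cbrt_branch(r, theta, 1)) < 1e-12
# And all three branches cube back to the same z = r·e^(iθ):
z = r * cmath.exp(1j * theta)
for n in range(3):
    assert abs(cbrt_branch(r, theta, n) ** 3 - z) < 1e-12
```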

[Table: the three cube roots of z by branch]

So the origin (i.e. the point zero) for all of the above surfaces is referred to as the branch point, and the number of turns one has to make to get back to the same point determines the so-called order of the branch point. So, for w = z1/2, we have a branch point of order 2; for w = z1/3, we have a branch point of order 3; etcetera. In fact, for the log z function, the branch point does not have a finite order: it is said to have infinite order.
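
Here’s a quick numerical illustration of that ‘number of turns’ idea (my own sketch, not from Penrose or Brown and Churchill): for w = z1/m, the value w = r1/m eiθ/m first comes back to itself after θ has increased by m times 2π, i.e. after m full turns around the branch point.

```python
import cmath, math

# For w = z^(1/m), the value w(θ) = r^(1/m)·e^(iθ/m) first repeats after
# θ has increased by m·2π, i.e. after m full turns around the origin.
def w(theta, m, r=1.0):
    return r ** (1 / m) * cmath.exp(1j * theta / m)

for m in (2, 3, 4):
    theta = 0.4
    # fewer than m turns: the value has not come back yet...
    assert all(abs(w(theta + 2 * math.pi * k, m) - w(theta, m)) > 1e-6
               for k in range(1, m))
    # ...but after exactly m turns it has:
    assert abs(w(theta + 2 * math.pi * m, m) - w(theta, m)) < 1e-12
```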

After a very brief discussion of all of this, Penrose then proceeds to transform a ‘square root Riemann surface’ into a torus (i.e. a donut shape). The correspondence between a ‘square root Riemann surface’ and a torus does not depend on the number of branch points: it depends on the number of sheets, i.e. the order of the branch points. Indeed, Penrose’s example of a square root function is w = (1 – z3)1/2, and so that’s a square root function with three branch points (the three roots of unity), but these branch points are all of order two and, hence, there are two sheets only and, therefore, the torus is the appropriate shape for this kind of ‘transformation’. I will come back to that in the next post.

OK… But I still don’t quite get why these Riemann surfaces are so important. I must assume it has something to do with the mystery of rolled-up dimensions and all that (so that’s string theory), but I guess I’ll be able to shed some more light on that question only once I’ve gotten through that whole chapter on them (and the chapters following that one). I’ll keep you posted. 🙂

Post scriptum: On page 138 (Fig. 8.3), Penrose shows us how to construct the spiral ramp for the log z function. He insists on doing this by taking overlapping patches of space, such as the L(z) and Log z branches of the log z function, with θ going from 0 to 2π for the L(z) branch and from –π to +π for the Log z branch (so we have an overlap here from 0 to +π). Indeed, one cannot glue or staple patches together if the patch surfaces don’t overlap to some extent… unless you use sellotape of course. 🙂 However, continuity requires some overlap and, hence, just joining the edges of patches of space with sellotape, instead of gluing overlapping areas together, is not allowed. 🙂

So, constructing a model of that spiral ramp is not an extraordinary intellectual challenge. However, constructing a model of the Riemann surfaces described above (i.e. z1/2, z1/3, z1/4 or, more in general, a Riemann surface for any rational power of z, i.e. any function w = zn/m) is not all that easy: Brown and Churchill, for example, state that it is actually ‘physically impossible’ to model that (see Brown and Churchill, Complex Variables and Applications (7th ed.), p. 337).

Huh? But so we just did that for z1/2, z1/3 and z1/4, didn’t we? Well… Look at that plot for w = z1/2 once again. The problem is that the two sheets cut through each other. They have to do that, of course, because, unlike the sheets of the log z function, they have to join back together again, instead of just spiraling endlessly up or down. So we just let these sheets cross each other. However, at that spot (i.e. the line where the sheets cross each other), we would actually need two representations of z. Indeed, as the top sheet cuts through the bottom sheet (so as we’re moving down on that surface), the value of θ will be equal to π, and so that corresponds to a value for w equal to w = z1/2 = √r eiπ/2 (I am looking at the n = 0 case here). However, when the bottom sheet cuts through the top sheet (so if we’re moving up instead of down on that surface), θ’s value will be equal to 3π (because we’ve made three half-turns now, instead of just one) and, hence, that corresponds to a value for w equal to w = z1/2 = √r e3iπ/2, which is obviously different from √r eiπ/2. I could do the same calculation for the n = ±1 case: just add ±π to the argument of w.
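
A quick numerical sketch of that ambiguity on the crossing line (my own, in Python): θ = π and θ = 3π sit above the same point z on the negative real axis, but they give two different values of w.

```python
import cmath, math

# On the line where the sheets cross (above the negative real axis),
# θ = π (moving down) and θ = 3π (moving up) sit above the same z but
# give two different values of w = √r·e^(iθ/2)  (n = 0 case, r = 1).
r = 1.0
w_down = math.sqrt(r) * cmath.exp(1j * math.pi / 2)      # θ = π
w_up = math.sqrt(r) * cmath.exp(1j * 3 * math.pi / 2)    # θ = 3π

assert abs(w_down - 1j) < 1e-12
assert abs(w_up + 1j) < 1e-12
assert abs(w_down - w_up) > 1    # two distinct roots over the same point
```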

Huh? You’ll probably wonder what I am trying to say here. Well, what I am saying here is that the plot of the surface gives us the impression that we do not have two separate roots w1 and w2 on the (negative) real axis. But that’s not the case: we do have two roots there, but we can’t distinguish them with that plot of the surface because we’re only looking at the real part of w.

So what?

Well… I’d say that shouldn’t worry us all that much. When building a model, we just need to be aware that it’s a model only and, hence, we need to be aware of the limitations of what we’re doing. I actually built a paper model of that surface by taking two paper disks: one for the top sheet, and one for the bottom sheet. Then I cut those two disks along the radius and folded and glued both of them like a Chinese hat (yes, like the one the girl below is wearing). And then I took those two little paper Chinese hats, put one of them upside down, and ‘connected’ them (or should I say ‘stitched’ or ‘welded’ perhaps? :-)) with the other one along the radius where I had cut into these disks. [I could go through the trouble of taking a digital picture of it but it’s better you try it yourself.]

[Photo: girl wearing a Chinese hat]

Wow! I did not expect to be used as an illustration in a blog on math and physics! 🙂

🙂 OK. Let’s get somewhat more serious again. The point to note is that, while these models (both the plot as well as the two paper Chinese hats :-)) look nice enough, Brown and Churchill are right when they note that ‘the points where two of the edges are joined are distinct from the points where the two other edges are joined’. However, I don’t agree with their conclusion in the next phrase, which states that it is ‘thus physically impossible to build a model of that Riemann surface.’ Again, the plot above and my little paper Chinese hats are OK as a model – as long as we’re aware of how we should interpret that line where the sheets cross each other: that line represents two different sets of points.

Let me go one step further here (in an attempt to fully exhaust the topic) and insert a table here with the values of both the real and imaginary parts of √z for both roots (i.e. the n = 0 and n = ±1 case). The table shows what is to be expected: the values for the n = ±1 case are the same as for n = 0 but with the opposite sign. That reflects the fact that the two roots are each other’s opposite indeed, so when you’re plotting the two square roots of a complex number z = reiθ, you’ll see they are on opposite sides on a circle with radius √r. Indeed, √r ei(θ/2 + π) = √r ei(θ/2)eiπ = –√r ei(θ/2). [If the illustration below is too small to read the print, then just click on it and it should expand.]

[Table: values of the real and imaginary parts of √z for both roots]

The grey and green colors in the table have the same role as the red, green and yellow colors I used to illustrate how the cube roots of z come back periodically. We have the same thing here indeed: the values we get for the n = 0 case are exactly the same as for the n = ±1 case but with a difference in ‘phase’, I’d say, of one turn around the origin, i.e. a ‘phase’ difference of 2π. In other words, the value of √z in the n = 0 case for θ going from 0 to 2π is equal to the value of √z in the n = ±1 case but for θ going from 2π to 4π and, vice versa, the value of √z in the n = ±1 case for θ going from 0 to 2π is equal to the value of √z in the n = 0 case for θ going from 2π to 4π. Now what’s the meaning of that?

It’s quite simple really. The two different values of n mark the different branches of the w function, but branches of functions always overlap of course. Indeed, look at the value of the argument of w, i.e. Θ: for the n = 0 case, we have 0 < Θ < 2π, while for the n = ± 1 case, we have -π < Θ < +π. So we’ve got two different branches here indeed, but they overlap for all values Θ between 0 and π and, for these values, where Θ1 = Θ2, we will obviously get the same value for w, even if we’re looking at two different branches (Θ1 is the argument of w1, and Θ2 is the argument of w2). 

OK. I guess that’s all very self-evident and so I should really stop here. However, let me conclude by noting the following: to understand the ‘full story’ behind the graph, we should actually plot both the surface of the imaginary part of √z as well as the surface of the real part of √z, and superimpose both. We’d obviously get something that would be much more complicated than the ‘two Chinese hats’ picture. I haven’t learned how to master math software (such as Maple for instance) as yet, and so I’ll just copy a plot which I found on the web: it’s a plot of both the real and imaginary part of the function w = z2. That’s obviously not the same as the w = z1/2 function, because w = z2 is a single-valued function and so we don’t have all these complications. However, the graph is illustrative because it shows how two surfaces – one representing the real part and the other the imaginary part of a function value – cut through each other, thereby creating four half-lines (or rays) which join at the origin.

[Figure: real and imaginary parts of w = z2]

So we could have something similar for the w = z1/2 function if we’d have one surface representing the imaginary part of z1/2 and another representing the real part of z1/2. The sketch below illustrates the point. It is a cross-section of the Riemann surface along the x-axis (so the imaginary part of z is zero there, as the values of θ are limited to 0, π, 2π, 3π and back to 4π = 0), but with both the real as well as the imaginary part of z1/2 on it. It is obvious that, for the w = z1/2 function, two of the four half-lines marking where the two surfaces are crossing each other coincide with the positive and negative real axis respectively: indeed, Re(z1/2) = 0 for θ = π and 3π (so that’s the negative real axis), and Im(z1/2) = 0 for θ = 0, 2π and 4π (so that’s the positive real axis).

[Sketch: cross-section of the Riemann surface near the branch point]

The other two half-lines are orthogonal to the real axis. They follow a curved line, starting from the origin, whose orthogonal projection on the z plane coincides with the y axis. The shape of these two curved lines (i.e. the place where the two sheets intersect above and under the axis) is given by the values for the real and imaginary parts of the √z function, i.e. the vertical distance from the y axis is equal to ± (√2√r)/2.

Hmm… I guess that, by now, you’re thinking that this is getting way too complicated. In addition, you’ll say that the representation of the Riemann surface by just one number (i.e. either the real or the imaginary part) makes sense, because we want one point to represent one value of w only, don’t we? So we want one point to represent one point only, and that’s not what we’re getting when plotting both the imaginary as well as the real part of w in a combined graph. Well… Yes and no. Insisting that we shouldn’t forget about the imaginary part of the surface makes sense in light of the next post, in which I’ll say a thing or two about ‘compactifying’ surfaces (or spaces) like the one above. But so that’s for the next post only and, yes, you’re right: I should stop here.

Analytic continuation

In my previous post, I promised to say something about analytic continuation. To do so, let me first recall Taylor’s theorem: if we have some function f(z) that is analytic in some domain D, then we can write this function as an infinite power series:

f(z) = ∑ an(z-z0)n

with n = 0, 1, 2,… going all the way to infinity (n = 0 → ∞) and the successive coefficients an equal to  an = [f(n)(z0)]/n! (with f(n)(z0) denoting the derivative of the nth order, and n! the factorial function n! = 1 x 2 x 3 x … x n).
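
To make Taylor’s theorem a bit more tangible, here’s a quick numerical sketch of my own for the function f(z) = ez, whose derivatives at z0 = 0 are all equal to 1, so that an = 1/n!:

```python
import cmath, math

# Taylor (in fact Maclaurin) series for f(z) = e^z about z0 = 0: every
# derivative at 0 equals 1, so a_n = 1/n! and the partial sums
# converge to e^z.
z = 0.3 + 0.8j
approx = sum(z ** n / math.factorial(n) for n in range(30))
assert abs(approx - cmath.exp(z)) < 1e-12
```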

I should immediately add that this domain D will always be an open disk, as illustrated below. The term ‘open’ means that the boundary points (i.e. the circle itself) are not part of the domain. This open disk is the so-called circle of convergence for the complex function f(z) = 1/(1–z), which is equivalent to the (infinite) power series f(z) = 1 + z + z2 + z3 + z4 +… [A clever reader will probably try to check this using Taylor’s theorem above, but I should note the exercise involves some gymnastics. Indeed, the development involves the use of the identity 1 + z + z2 + z3 + … + zn = (1 – zn+1)/(1 – z).]

[Figure: circle of convergence with the singularity at z = 1]

This power series converges only when the absolute value of z is (strictly) smaller than 1, so only when ¦z¦ < 1. Indeed, the illustration above shows the singularity at the point 1 (or the point (1, 0) if you want) on the real axis: the denominator of the function 1/(1–z) effectively becomes zero there. But that’s one point only and, hence, we may ask ourselves why this domain should be bounded by a circle going through this one point. Why not some square or rectangle or some other weird shape avoiding this point? That question takes a few theorems to answer, and so I’ll just say that this is just one of the many remarkable things about analytic functions: if a power series such as the one above converges to f(z) within some circle whose radius is the distance from z0 (in this case, z0 is the origin and so we’re actually dealing with a so-called Maclaurin expansion here, i.e. an oft-used special case of the Taylor expansion) to the nearest point z1 where f fails to be analytic (that point z1 is equal to 1 in this case), then this circle will actually be the largest circle centered at z0 such that the series converges to f(z) for all z interior to it.
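
You can see the circle of convergence at work numerically. The little sketch below (mine, of course) sums the series for a point inside and a point outside the unit circle:

```python
# Partial sums of 1 + z + z² + … approach 1/(1 − z) when ¦z¦ < 1
# and blow up when ¦z¦ > 1 — the circle of convergence in action.
def partial_sum(z, terms):
    return sum(z ** n for n in range(terms))

z_inside = 0.5 + 0.3j    # ¦z¦ ≈ 0.58 < 1: the series converges
assert abs(partial_sum(z_inside, 200) - 1 / (1 - z_inside)) < 1e-12

z_outside = 1.2 + 0.5j   # ¦z¦ = 1.3 > 1: the partial sums diverge
assert abs(partial_sum(z_outside, 50)) > 1e3
```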

Puff… That’s quite a mouthful, so let me rephrase it. What is being said here is that there’s usually a condition of validity for the power series expansion of a function: if that condition of validity is not fulfilled, then the function cannot be represented by the power series. In this particular case, the expansion of f(z) = 1/(1–z) = 1 + z + z2 + z3 + z4 +… is only valid when ¦z¦ < 1, and so there is no larger circle about z0 (i.e. the origin in this particular case) such that at each point interior to it, the Taylor series (or the Maclaurin series in this case) converges to f(z).

That being said, we can usually work our way around such singularities, especially when they are isolated, such as in this example (there is only this one point 1 that is causing trouble), and that is where the concept of analytic continuation comes in. However, before I explain this, I should first introduce Laurent’s theorem, which is like Taylor’s theorem but applies to functions which are not as ‘nice’ as the functions for which Taylor’s theorem holds (i.e. functions that are not analytic everywhere), such as this 1/(1–z) function indeed. To be more specific, Laurent’s theorem says that, if we have a function f which is analytic in an annular domain (i.e. the red area in the illustration below) centered at some point z0 (in the illustration below, that’s point c), then f(z) will have a series representation involving both positive and negative powers of the term (z – z0).

[Figure: annular domain of convergence for a Laurent series]

More in particular, f(z) will be equal to f(z) = ∑ [an(z – z0)n] + ∑ [bn/(z – z0)n], with n = 0, 1,…, ∞ for the first sum and n = 1, 2,…, ∞ for the second, and with an and bn coefficients involving complex integrals which I will not write down here because WordPress lacks a good formula editor and so it would look pretty messy. An alternative representation of the Laurent series is to write f(z) as f(z) = ∑ [cn(z – z0)n], with cn = (1/2πi) ∫C [f(z)/(z – z0)n+1]dz (n = 0, ±1, ±2,…). Well – so here I actually did write down the integral. I hope it’s not too messy 🙂 .
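
WordPress may lack a formula editor, but we can at least check that integral numerically. The sketch below (my own; it simply sums f(z)/(z – z0)n+1 over sample points on a circle inside the annulus) recovers the Laurent coefficients of f(z) = 1/(z(1 – z)) about z0 = 0, for which the series in the annulus 0 < ¦z¦ < 1 is 1/z + 1 + z + z2 + …:

```python
import cmath, math

# Numerical version of c_n = (1/2πi)·∮ f(z)/(z − z0)^(n+1) dz for
# f(z) = 1/(z(1 − z)) about z0 = 0. In the annulus 0 < ¦z¦ < 1 the
# Laurent series is 1/z + 1 + z + z² + …, so c_n = 1 for all n ≥ −1.
def laurent_coeff(f, n, radius=0.5, samples=2000):
    total = 0.0
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * math.pi / samples)   # step along the circle
        total += f(z) / z ** (n + 1) * dz
    return total / (2j * math.pi)

f = lambda z: 1 / (z * (1 - z))
for n in (-1, 0, 1, 2):
    assert abs(laurent_coeff(f, n) - 1) < 1e-6
assert abs(laurent_coeff(f, -2)) < 1e-6   # no 1/z² term: c_−2 = 0
```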

It’s relatively easy to verify that this Laurent series becomes the Taylor series if there would be no singularities, i.e. if the domain would cover the whole disk (so if there would be red everywhere, even at the origin point). In that case, the cn coefficient becomes (1/2πi) ∫C [f(z)/(z – z0)-1+1]dz for n = –1, and we can use the fact that, if f(z) is analytic everywhere, the integral ∫C f(z)dz will be zero along any closed contour in the domain of f(z). For n = –2, the integrand becomes f(z)/(z – z0)-2+1 = f(z)(z – z0) and that’s an analytic function as well, because the function z – z0 is analytic everywhere and, hence, the product of this (analytic) function with f(z) will also be analytic everywhere (sums, products and compositions of analytic functions are also analytic). So the integral will be zero once again. Similarly, for n = –3, the integrand f(z)/(z – z0)-3+1 = f(z)(z – z0)2 is analytic and, hence, the integral is again zero. In short, all bn coefficients (i.e. all ‘negative’ powers) in the Laurent series will be zero. As for the an coefficients, one can see they are equal to the Taylor coefficients by using what Penrose refers to as the ‘higher-order’ version of the Cauchy integral formula: f(n)(z0)/n! = (1/2πi) ∫C [f(z)/(z – z0)n+1]dz.

It is easy to verify that this expression also holds for the special case of a so-called punctured disk, i.e. an annular domain for which the ‘hole’ at the center is limited to the center point only, so this ‘annular domain’ then consists of all points z around z0 for which 0 < ¦z – z0¦ < R. We can then write the Laurent series as f(z) = ∑ an(z – z0)n + b1/(z – z0) + b2/(z – z0)2 +…+ bn/(z – z0)n +… with n = 0, 1, 2,…, ∞.

OK. So what? Well… The point to note is that we can usually deal with singularities. That’s what the so-called theory of residues and poles is for. The term pole is a more illustrious term for what is, in essence, an isolated singular point: it has to do with the shape of the (modulus) surface of f(z) near this point, which is, well… shaped like a tent on a (vertical) pole indeed. As for the term residue, that’s the term used to denote the coefficient b1 in the power series above. The value of the residue at one or more isolated singular points can be used to evaluate integrals (so residues are used for solving integrals), but we won’t go into any more detail here, especially because, despite my initial promise, I still haven’t explained what analytic continuation actually is. Let me do that now.

For once, I must admit that Penrose’s explanation here is easier to follow than other texts (such as the Wikipedia article on analytic continuation, which I looked at but which, for once, seems to be less easy to follow than Penrose’s notes on it), so let me closely follow his line of reasoning here.

If, instead of the origin, we would use a non-zero point z0 for our expansion of this function f(z) = 1/(1–z) = 1 + z + z2 + z3 + z4 +… (i.e. a proper Taylor expansion, instead of the Maclaurin expansion around the origin), then we would, once again, find a circle of convergence for this function which would, once again, be bounded by the singularity at point (1, 0), as illustrated below. In fact, we can move even further out and expand this function around the (non-zero) point z1, and so on and so on. See the illustration: it is essential that the successive circles of convergence around the origin, z0, z1, etcetera overlap when ‘moving out’ like this.

[Figure: analytic continuation through overlapping circles of convergence]

So that’s this concept of ‘analytic continuation’. Paraphrasing Penrose, what’s happening here is that the domain D of the analytic function f(z) is being extended to a larger region D’ in which the function f(z) will also be analytic (or holomorphic – as this is the term which Penrose seems to prefer over ‘analytic’ when it comes to complex-valued functions).

Now, we should note something that, at first sight, seems to be incongruent: as we wander around a singularity like that (or, to use the more mathematically correct term, a pole of the function) to then return to our point of departure, we may get (in fact, we are likely to get) different function values ‘back at base’. Indeed, the illustration below shows what happens when we are ‘wandering’ around the origin for the log z function. You’ll remember (if not, see the previous posts) that, if we write z using polar coordinates (so we write z as z = reiθ), then log z is equal to log z = ln r + i(θ + 2nπ). So we have a multiple-valued function here and we dealt with that by using branches, i.e. we limited the values which the argument of z (arg z = θ) could take to some range α < θ < α + 2π. However, when we are wandering around the origin, we don’t limit the range of θ. In fact, as we are wandering around the origin, we are effectively constructing this Riemann surface (which we introduced in one of our previous posts also), thereby effectively ‘gluing’ successive branches of the log z function together, and adding 2πi to the value of our log z function as we go around. [Note that the vertical axis in the illustration below keeps track of the imaginary part of log z only, i.e. the part with θ in it only. If my imaginary reader would like to see the real part of log z, I should refer him to the post with the entry on Riemann surfaces.]
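
Here’s a quick numerical illustration of that ‘wandering’ (my own sketch): if we track arg z continuously along one counterclockwise loop around the origin, the imaginary part of log z picks up exactly 2π, i.e. we land one sheet up on the spiral ramp.

```python
import cmath, math

# One counterclockwise loop around the origin along the unit circle,
# tracking arg z continuously ('unwrapping' the jump at ±π): the
# imaginary part of log z picks up exactly 2π, i.e. one sheet up.
samples = 1000
total_turn, prev = 0.0, cmath.phase(1 + 0j)
for k in range(1, samples + 1):
    z = cmath.exp(2j * math.pi * k / samples)
    phase = cmath.phase(z)        # jumps from +π to −π halfway round
    delta = phase - prev
    if delta > math.pi:
        delta -= 2 * math.pi      # undo the wrap-around
    elif delta < -math.pi:
        delta += 2 * math.pi
    total_turn += delta
    prev = phase

assert abs(total_turn - 2 * math.pi) < 1e-9
```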

Imaginary_log_analytic_continuation

But what about the power series? Well… The log z function is just like any other analytic function and so we can and do expand it as we go. For example, if we expand the log z function about the point (1, 0), we get log z = (z – 1) – (1/2)(z – 1)2 + (1/3)(z – 1)3 – (1/4)(z – 1)4 +… etcetera. But as we wander around, we’ll move into a different branch of the log z function and, hence, we’ll get a different value when we get back to that point. However, I will leave the details of figuring that one out to you 🙂 and end this post, because the intention here is just to illustrate the principle, and not to copy some chapter out of a math course (or, at least, not to copy all of it, let’s say :-)).
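If you’d like to see that series at work before wandering off, here is a quick check in Python (standard library only; the function name is mine) of the expansion about the point (1, 0), which converges for |z – 1| < 1:

```python
import cmath

def log_series(z, n_terms=200):
    # Taylor expansion of log z about the point 1:
    # log z = (z-1) - (1/2)(z-1)^2 + (1/3)(z-1)^3 - ...
    w = z - 1
    return sum((-1) ** (n + 1) * w ** n / n for n in range(1, n_terms + 1))

z = 1.3 + 0.2j   # |z - 1| is about 0.36, well inside the circle of convergence
print(abs(log_series(z) - cmath.log(z)))   # tiny (cmath.log gives the principal branch)
```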

If you can’t work it out, you can always try to read the Wikipedia article on analytic continuation. While less ‘intuitive’ than Penrose’s notes on it, it’s definitely more complete, even if it does not quite exhaust the topic. Wikipedia defines analytic continuation as “a technique to extend the domain of a given analytic function by defining further values of that function, for example in a new region where an infinite series representation in terms of which it is initially defined becomes divergent.” The Wikipedia article also notes that, “in practice, this continuation is often done by first establishing some functional equation on the small domain and then using this equation to extend the domain: examples are the Riemann zeta function and the gamma function.” But so that’s sturdier stuff which Penrose does not touch upon – for now at least: I expect him to develop such things in later Road to Reality chapters.

Post scriptum: Perhaps this is an appropriate place to note that, at first sight, singularities may look like no big deal: so we have an infinitesimally small hole in the domain of the log z function, or of 1/z, or whatever – so what? Well… It’s probably useful to note that, if we wouldn’t have that ‘hole’ (i.e. the singularity), any integral of this function along a closed contour around that point (i.e. ∫C f(z)dz) would be equal to zero, but when we do have that little hole, like for f(z) = 1/z, we don’t have that result. In this particular case (i.e. f(z) = 1/z), you should note that the integral ∫C (1/z)dz, for any closed contour around the origin, equals 2πi, or that, just to give one more example here, the value of the integral ∫C [f(z)/(z – z0)]dz is equal to 2πi f(z0). Hence, even if f(z) would be analytic over the whole open disk, including the origin, the ‘quotient function’ f(z)/z will not be analytic at the origin and, hence, the value of the integral of this ‘quotient function’ f(z)/z around the origin will not be zero but equal to 2πi times the value of the original f(z) function at the origin, i.e. 2πi times f(0). Vice versa, if we find that the value of the integral of some function around a closed contour – and I mean any closed contour really – is not equal to zero, we know we’ve got a problem somewhere and so we should look out for one or more infinitesimally small little ‘holes’ somewhere in the domain. Hence, singularities, and this complex theory of poles and residues which shows us how we can work with them, are extremely relevant indeed: it’s surely not a matter of just trying to get some better approximation for this or that value or formula or so. 🙂
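This is easy to verify numerically too. The sketch below (Python, my own function names) approximates a contour integral around the unit circle by sampling the parametrization z(t) = eit, and shows that ∫C (1/z)dz comes out as 2πi, while the integral of an everywhere-analytic function like z2 comes out as zero:

```python
import cmath, math

def circle_integral(f, n=2000, r=1.0):
    # approximate the contour integral of f around the circle |z| = r,
    # parametrized as z(t) = r e^(it), so dz = i z dt
    total = 0j
    for k in range(n):
        t = 2 * math.pi * k / n
        z = r * cmath.exp(1j * t)
        total += f(z) * 1j * z * (2 * math.pi / n)
    return total

print(circle_integral(lambda z: 1 / z))    # close to 2πi: the singularity leaves a 'residue'
print(circle_integral(lambda z: z ** 2))   # close to 0: no hole, no residue
```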

In light of the above, it is now also clear that the term ‘residue’ is well chosen: this coefficient b1 is equal to ∫C f(z)dz divided by 2πi (I take the case of a Laurent expansion around the origin here) and, hence, if there were no singularity, this integral (and, hence, the coefficient b1) would be equal to zero. Now, because of the singularity, we have a coefficient b1 ≠ 0 and, hence, using the term ‘residue’ for this ‘remnant’ is quite appropriate.

Complex integrals

Roger Penrose packs a lot in his chapter on complex-number calculus (Road to Reality, Chapter 7). He summarily introduces the concept of contour integration and then proceeds immediately to discuss power series representations of complex functions as well as fairly advanced ways to deal with singularities (see the section on analytic continuation). Brown and Churchill use no less than three chapters to develop this (Integrals (chapter 4), Series (chapter 5), and Residues and Poles (chapter 6)), and that’s probably what is needed for some kind of understanding of it all. Let’s start with integrals. However, let me first note here that WordPress does not seem to have a formula editor (so it is not like MS Word) and, hence, I have to keep the notation simple: I’ll use the symbol ∫C for a contour (or line) integral along a curve C, and the symbol ∫[a, b] for an integral on a (closed) interval [a, b].

OK. Here we go. First, it is important to note that Penrose, and Brown and Churchill, are talking about complex integrals, i.e. integrals of complex-valued functions, whose value itself is (usually) a complex number too. That is very different from the line integrals I was exposed to when reading Feynman’s Lectures on Physics. Indeed, Feynman’s Lectures (Volume II, on Electromagnetism) offer a fine introduction to contour integrals, where the function to be integrated is either a scalar field (e.g. electric potential) or, else, some vector field (e.g. magnetic field, gravitational field). Now, vector fields in the plane are two-dimensional things: they have an x- and a y-coordinate and, hence, we may think the functions involving vectors are complex too. They are but, that being said, the integrals involved all yield real-number values because the integrand is likely to be a dot product of vectors (and dot products of vectors, as opposed to cross products, yield a real number). I won’t go into the details here but, for those who’d want to have such details, the Wikipedia article offers a fine description (including some very nice animations) of what integration over a line or a curve in such fields actually means. So I won’t repeat that here. I can only note what Brown and Churchill say about them: these (real-valued) integrals can be interpreted as areas under the curve, and they would usually also have one or the other obvious physical meaning, but complex integrals do not usually have such ‘helpful geometric or physical interpretation’.

So what are they then? Let’s first start with some examples of curves.

curve examples

The illustration above makes it clear that, in practice, the curves which we are dealing with are usually parametric curves. In other words, the coordinates of all points z of the curve C can be represented as some function of a real-number parameter: z(t) = x(t) + iy(t). We can then define a complex integral as the integral of a complex-valued function f(z) of a complex variable z along a curve C from point z1 to point z2 and write such integral as ∫C f(z)dz.

Moreover, if C can be parametrized as z(t), we will have some (real) numbers a and b such that z1 = z(a) and z2 = z(b) and, taking into account that dz = z'(t)dt with z'(t) = dz/dt (i.e. the derivative of the (complex-valued) function z(t) with respect to the (real) parameter t), we can write ∫C f(z)dz as:

∫C f(z)dz = ∫[a, b] f[z(t)]z'(t)dt
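To make this definition somewhat more tangible, here is how one might evaluate such an integral numerically (a little Python sketch of mine, using the midpoint rule for the real integral over [a, b]; for the example I take f(z) = z along the straight segment from 0 to 1 + i, for which the exact answer is (1+i)2/2 = i):

```python
def integrate_along(f, z_of_t, dz_dt, a, b, n=10000):
    # approximate the definition above: sum f[z(t)] z'(t) dt over [a, b]
    # using the midpoint rule with n subintervals
    h = (b - a) / n
    total = 0j
    for k in range(n):
        t = a + (k + 0.5) * h
        total += f(z_of_t(t)) * dz_dt(t) * h
    return total

# straight segment from 0 to 1+i: z(t) = t(1+i), z'(t) = 1+i, t in [0, 1]
val = integrate_along(lambda z: z, lambda t: t * (1 + 1j), lambda t: 1 + 1j, 0.0, 1.0)
print(val)   # close to i
```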

OK, so what? Well, there are a lot of interesting things to be said about this, but let me just summarize some of the main theorems. The first important theorem does not seem to be associated with any particular mathematician (unlike Cauchy or Goursat, whom I’ll introduce in a minute) but is quite central: if we have some (complex-valued) function f(z) which would happen to be continuous in some domain D, then all of the following statements will be true if one of them is true:

(I) f(z) has an antiderivative F(z) in D; (II) the integrals of f(z) along contours lying entirely in D and extending from any fixed point z1 to any fixed point z2 all have the same value; and, finally, (III) the integrals of f(z) around closed contours lying entirely in D all have value zero.

This basically means that the integration of f(z) from z1 to z2 is not dependent on the path that is taken. But so when do we have such path independence? Well… You may already have guessed the answer to that question: it’s when the function is analytic or, in other words, when these Cauchy-Riemann equations ux = vy and uy = – vx are satisfied (see my other post on analytic (or holomorphic) complex-valued functions). That’s, in a nutshell, what’s stated in the so-called Cauchy-Goursat theorem. We also have a converse statement, known as Morera’s theorem: if f(z) is continuous and the integrals of f(z) around closed contours in some domain D are all zero, then we know that f(z) is holomorphic.
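Path independence is, again, something we can check numerically. In the Python sketch below (my own function names, midpoint rule again), I integrate the entire function ez from 0 to 1 + i along two quite different paths – the straight segment, and an L-shaped detour along the axes – and both agree with F(z2) – F(z1) for the antiderivative F = exp:

```python
import cmath

def integrate(f, z_of_t, dz_dt, a=0.0, b=1.0, n=20000):
    # midpoint-rule approximation of the integral of f[z(t)] z'(t) over [a, b]
    h = (b - a) / n
    return sum(f(z_of_t(a + (k + 0.5) * h)) * dz_dt(a + (k + 0.5) * h) * h
               for k in range(n))

f = cmath.exp
# path 1: straight segment from 0 to 1+i
p1 = integrate(f, lambda t: t * (1 + 1j), lambda t: 1 + 1j)
# path 2: along the real axis to 1, then vertically up to 1+i
p2 = (integrate(f, lambda t: t + 0j, lambda t: 1 + 0j) +
      integrate(f, lambda t: 1 + 1j * t, lambda t: 1j))
print(abs(p1 - p2))                        # tiny: same value along both paths
print(abs(p1 - (cmath.exp(1 + 1j) - 1)))   # tiny: equals F(1+i) - F(0)
```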

In short, we’ll always be dealing with ‘nice’ functions and then we can show that the so-called ‘fundamental’ theorem of calculus (i.e. the one that links integrals with derivatives, or – to be somewhat more precise – with the antiderivative of the integrand) also applies to complex-valued functions. We have:

∫C f(z)dz = ∫[a, b] f[z(t)]z'(t)dt = F[z(b)] – F[z(a)]

or, more in general: ∫C f(z)dz = F(z2) – F(z1)

We also need to note the Cauchy integral formula: if we have a function f that is analytic inside and on a closed contour C, then the value of this function for any point z0 inside C will be equal to:

f(z0) = (1/2πi) ∫C [f(z)/(z – z0)]dz

This may look like just another formula, but it’s quite spectacular really: it basically says that the function value of any point z0 within a region enclosed by a curve is completely determined by the values of this function on this curve. Moreover, differentiating both sides of this equation repeatedly with respect to z0 leads to similar formulas for the derivatives of the first, second, third, and higher order of f: f'(z0) = (1/2πi) ∫C [f(z)/(z – z0)2]dz, f”(z0) = (2!/2πi) ∫C [f(z)/(z – z0)3]dz or, more in general:

f(n)(z0) = (n!/2πi) ∫C [f(z)/(z – z0)n+1]dz (n = 1, 2, 3,…)

This formula is also known as Cauchy’s differentiation formula. It is a central theorem in complex analysis really, as it leads to many other interesting theorems, including Gauss’s mean value theorem, Liouville’s theorem, and the maximum (and minimum) modulus principle. It is also essential for the next chapter in Brown and Churchill’s course: power series representations of complex functions. However, I will stop here because I guess this ‘introduction’ to complex integrals is already confusing enough.
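Before moving on, though, the formula is easy enough to check numerically. The Python sketch below (the function name is mine) approximates the contour integral on a circle around z0 and recovers f(z0), f'(z0) and f”(z0) for f = exp – a convenient test case, because every derivative of ez is ez itself:

```python
import cmath, math

def cauchy_derivative(f, z0, order, r=1.0, samples=4000):
    # f^(n)(z0) = (n!/2πi) ∫C [f(z)/(z - z0)^(n+1)]dz on the circle |z - z0| = r
    total = 0j
    for k in range(samples):
        t = 2 * math.pi * k / samples
        z = z0 + r * cmath.exp(1j * t)
        dz = 1j * (z - z0) * (2 * math.pi / samples)
        total += f(z) / (z - z0) ** (order + 1) * dz
    return math.factorial(order) / (2j * math.pi) * total

z0 = 0.3 + 0.2j
for n in range(3):   # n = 0 is the Cauchy integral formula itself
    print(abs(cauchy_derivative(cmath.exp, z0, n) - cmath.exp(z0)))   # all tiny
```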

Post scriptum: I often wondered why one would label one theorem as ‘fundamental’, as it implies that all the other theorems may be important but, obviously, somewhat less fundamental. I checked it out and it turns out there is some randomness here. The Wikipedia article boldly states that the fundamental theorem of algebra (which states that every non-constant single-variable polynomial with complex coefficients has at least one complex root) is not all that ‘fundamental’ for modern algebra: its title just reflects the fact that there was a time when algebra focused almost exclusively on studying polynomials. The same might be true for the fundamental theorem of arithmetic (i.e. the unique(-prime)-factorization theorem), which states that every integer greater than 1 is either a prime itself or the product of prime numbers, e.g. 1200 = (2⁴)(3¹)(5²).

That being said, the fundamental theorem of calculus is obviously pretty ‘fundamental’ indeed. It leads to many results that are indeed key to understanding and solving problems in physics. One of these is the Divergence Theorem (or Gauss’s Theorem), which states that the outward flux of a vector field through a closed surface is equal to the volume integral of the divergence over the region inside the surface. Huh? Well… Yes. It pops up in any standard treatment of electromagnetism. There are others (like Stokes’ Theorem) but I’ll leave it at that for now, especially because these are theorems involving real-valued integrals.

Riemann surfaces (I)

In my previous post on this blog, I once again mentioned the issue of multiple-valuedness. It is probably time to deal with the issue once and for all by introducing Riemann surfaces.

Penrose attaches a lot of importance to these Riemann surfaces (so I must assume they are very important). In contrast, in their standard textbook on complex analysis, Brown and Churchill note that the two sections on Riemann surfaces are not essential reading, as it’s just ‘a geometric device’ to deal with multiple-valuedness. But so let’s go for it.

I already signaled that complex powers w = zc are multiple-valued functions of z and so that causes all kinds of problems, because we can’t do derivatives and integrals and all that. In fact, zc = ec log z and so we have two components on the right-hand side of this equation. The first one is the (complex) exponential function ec, i.e. the real number e raised to a complex power c. We already know (see the other posts below) that the complex exponential is a periodic function with (imaginary) period 2πi: ec = ec+2πi, since ec+2πi = e2πiec = 1·ec. While this periodic component of zc is somewhat special (as compared to exponentiation in real analysis), it is not this periodic component but the log z component which is causing the problem of multiple-valuedness. [Of course, it’s true that the problem of multiple-valuedness of the log function is, in fact, a logical consequence of the periodicity of the complex exponential function, but so you can figure that out yourself I guess.] So let’s look at that log z function once again.

If we write z in its polar form z = reiθ, then log z will be equal to log z = ln r + i(θ+2nπ) with n = 0, ±1, ±2,… Hence, if we write log z in rectangular coordinates (i.e. log z = x + iy), then we note that the x component (i.e. the real part) of log z is equal to ln r and, hence, x is just an ordinary real number with some fixed value (x = ln r). However, the y component (i.e. the imaginary part of log z) does not have any fixed value: θ is just one of the values, but so are θ+2π and θ – 2π and θ+4π etcetera. In short, we have an infinite number of values for y, and so that’s the issue: what do we do with all these values? It’s not a proper function anymore.

Now, this problem of multiple-valuedness is usually solved by just picking a so-called principal value for log z, which is written as Log z = ln r + iθ, and which is defined by mathematicians by imposing the condition that θ takes a value in the interval between -π and +π only (hence, -π < θ < π). In short, the mathematicians usually just pretend that the 2nπ thing doesn’t matter.

However, this is not trivial: as we are imposing these restrictions on the value of θ, we are actually defining some new single-valued function Log z = ln r + iθ. This Log z function, then, is a complex-valued analytic function with two real-valued components: x = ln r and y = θ. So, while x = ln r can take any value on the real axis, we let θ range from -π to +π only (in the usual counterclockwise or ‘positive’ direction, because that happens to be the convention). If we do this, we get a principal value for zc as well: P.V. zc = ec Log z, and so we’ve ‘solved’ the problem of multiple values for the function zc too in this way.
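As an aside: this principal value is exactly what the log function of a programming language returns for a complex argument. A small Python illustration (the standard cmath.log implements Log z, with the imaginary part in the interval (-π, π]):

```python
import cmath, math

z = -1 + 1j                                # r = √2, θ = 3π/4
principal = cmath.log(z)                   # this is Log z = ln r + iθ
print(principal.real - math.log(abs(z)))   # tiny: the real part is ln r
print(principal.imag - 3 * math.pi / 4)    # tiny: the imaginary part is θ
# all other values of the multiple-valued log z differ by multiples of 2πi,
# and the exponential maps every one of them back to z:
other = principal + 2j * math.pi
print(abs(cmath.exp(other) - z))           # tiny
```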

What we are doing here has a more general significance: we are taking a so-called branch out of a multiple-valued function, in order to make it single-valued and, hence, analytic. To illustrate what is really going on here, let us go back to the original multiple-valued log z = ln r + i(θ+2nπ) function and let’s do away with this integer n by writing log z in the more general form log z = ln r + iΘ. Of course, Θ is equal to θ+2nπ but so we’ll just forget about the θ and, most importantly, about the n, and allow the y component (i.e. the imaginary part) of the imaginary number log z = x + iy to take on any value Θ in the real field. In other words, we treat this angle Θ just like any other ordinary real number. We can now define branches of log z again, but in a more general way: we can pick any value α and cut the plane along the ray Θ = α (a so-called branch cut), as this will define a range α < Θ < α + 2π in which, once again, we limit the possible values of log z to just one.

For example, if we choose α = -π, then Θ will range from -π to +π and so then we’re back to log z’s principal branch, i.e. Log z. However, let us now, instead of taking this Log z branch, define another branch – we’ll call it the L(z) branch – by choosing α = 0 and, hence, letting Θ range from 0 to 2π. So we have 0 < Θ < 2π and, of course, you’ll note that this range overlaps with the range that is being used for the principal branch of log z (i.e. Log z). It does, and it’s not a problem. Indeed, for values 0 < Θ < π (i.e. the overlapping half-plane) we get the same set of values Log z = L(z) for log z, and so we are talking the same function indeed.

OK. I guess we understand that. So what? Well… The fact is that we have found a very very nice way of illustrating the multiple-valuedness of the log z function and – more importantly – a nice way of ‘solving’ it too. Have a look at the beautiful 3D graph below. It represents the log z function. [Well… Let me be correct and note that, strictly speaking, this particular surface seems to represent the imaginary part of the log z function only, but that’s OK at this stage.]

Riemann_surface_log

Huh? What’s happening here? Well, this spiral surface represents the log z function by ‘gluing’ successive log z branches together. I took the illustration from Wikipedia’s article on the complex logarithm and, to explain how this surface has been constructed, let’s start at the origin, which is located right in the center of this graph, between the yellow-green and the red-pinkish sheets (so the horizontal (x, y) plane we start from is not the bottom of this rectangular prism: you should imagine it at its center).

From there, we start building up the first ‘level’ of this graph (i.e. the yellowish level above the origin) as the angle Θ sweeps around the origin, in counterclockwise direction, across the upper half of the complex z plane. So it goes from 0 to π and, when Θ crosses the negative side of the real axis, it has added π to its original value. With ‘original value’, I mean its value when it crossed the positive real axis the previous time. As we’ve just started, Θ was equal to 0. We then go from π to 2π, across the lower half of the complex plane, back to the positive real axis: that gives us the first ‘level’ of this spiral staircase (so the vertical distance reflects the value of Θ indeed, which is the imaginary part of log z). Then we can go around the origin once more, and so Θ goes from 2π to 4π, and so that’s how we get the second ‘level’ above the origin – i.e. the greenish one. But – hey! – how does that work? The angle 2π is the same as zero, isn’t it? And 4π as well, no? Well… No. Not here. It is the same angle in the complex plane, but is not the same ‘angle’ if we’re using it here in this log z = ln r + iΘ function.

Let’s look at the first two levels (so the yellow-green ones) of this 3D graph once again. Let’s start with Θ = 0 and keep Θ fixed at this zero value for a while. The value of log z is then just the real component of this log z = ln r + iΘ function, and so we have log z = ln r + i0 = ln r. This ln r function (or ln(x) as it is written below) is just the (real) logarithmic function, which has the familiar form shown below. I guess there is no need to elaborate on that although I should, perhaps, remind you that r (or x in the graph below) is always some positive real number, as it’s the modulus of a vector – or a vector length if that’s easier to understand. So, while ln(r) can take on any (real-number) value between -∞ and +∞, the argument r is always a positive real number.

375px-Logarithm_derivative

Let us now look at what happens with this log z function as Θ moves from 0 to 2π: first through the upper half of the complex z plane, to Θ = π, and then further to 2π through the lower half of the complex plane. That’s less easy to visualize, but the illustration below might help. The circles in the plane below (which is the z plane) represent the real part of log z: the parametric representation of these circles is: Re(log z) = ln r = constant. In short, when we’re on these circles, going around the origin, we keep r fixed in the z plane (and, hence, ln r is constant indeed) but we let the argument of z (i.e. Θ) vary from 0 to 2π and, hence, the imaginary part of log z (which is equal to Θ) will also vary. On the rays it is the other way around: we let r vary but we keep the argument Θ of the complex number z = reiθ fixed. Hence, each ray is the parametric representation of Im(log z) = Θ = constant, so Θ is some fixed angle in the interval 0 < Θ < 2π.

Logez02

Let’s now go back to that spiral surface and construct the first level of that surface (or the first ‘sheet’ as it’s often referred to) once again. In fact, there is actually more than one way to construct such a spiral surface: while the spiral ramp above seems to depict the imaginary part of log z only, the vertical distance on the illustration below includes both the real as well as the imaginary part of log z (i.e. Re log z + Im log z = ln r + Θ).

Final graph of log z

Again, we start at the origin, which is, again, the center of this graph (there is a zero (0) marker nearby, but that’s actually just the value of Θ on that ray (Θ = 0), not a marker for the origin point). If we move outwards from the center, i.e. from the origin, on the horizontal two-dimensional z = x + iy = (x,y) plane but along the ray Θ = 0, then we again have log z = ln r + i0 = ln r. So, looking from above, we would see an image resembling the illustration above: we move on a circle around the origin if we keep r constant, and we move on rays if we keep Θ constant. So, in this case, we fix the value of Θ at 0 and move out on a ray indeed and, in three dimensions, the shape of that ray reflects the ln r function. As we then become somewhat more adventurous and start moving around the origin, rather than just moving away from it, the iΘ term in this ln r + iΘ function kicks in and the imaginary part of w (i.e. Im(log z) = y = Θ) grows. To be precise, the value 2π gets added to y with every loop around the origin as we go around it. You can actually ‘measure’ this distance 2π ≈ 6.3 between the various ‘sheets’ on the spiral surface along the vertical coordinate axis (that is if you could read the tiny little figures along the vertical coordinate axis in these 3D graphs, which you probably can’t).

So, by now you should get what’s going on here. We’re looking at this spiral surface and combining both movements now. If we move outwards, away from this center, keeping Θ constant, we can see that the shape of this spiral surface reflects the shape of the ln r function, going to -∞ as we are close to the center of the spiral, and taking on more moderate (positive) values further away from it. So if we move outwards from the center, we get higher up on this surface. We can also see that we move higher up this surface as we move (counterclockwise) around the origin, rather than away from it. Indeed, as mentioned above, the vertical coordinate in the graph above (i.e. the measurements along the vertical axis of the spiral surface) is equal to the sum of Re(log z) and Im(log z). In other words, the ‘z’ coordinate in the Euclidean three-dimensional (x, y, z) space which the illustrations above are using is equal to ln r + Θ, and, hence, as 2π gets added to the previous value of Θ with every turn we’re making around the origin, we get to the next ‘level’ of the spiral, which is exactly 2π higher than the previous level. Vice versa, 2π gets subtracted from the previous value of Θ as we’re going down the spiral, i.e. as we are moving clockwise (or in the ‘negative’ direction as it is aptly termed).
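The ‘bookkeeping’ of Θ which this spiral surface performs can also be mimicked in a few lines of code. The Python sketch below (my own function name) follows the argument of z continuously along a sampled path – adding up small increments instead of re-computing the angle from scratch – so that, after two counterclockwise loops around the origin, Θ ends up at 4π rather than back at 0:

```python
import cmath, math

def continuous_arg(path):
    # follow arg z continuously along a sampled path: add up the small
    # increments between successive samples, so nothing ever jumps by 2π
    theta = cmath.phase(path[0])
    for prev, cur in zip(path, path[1:]):
        theta += cmath.phase(cur / prev)   # small step, always between -π and π
    return theta

# two full counterclockwise loops around the origin, starting at z = 1
samples = [cmath.exp(1j * 4 * math.pi * k / 1000) for k in range(1001)]
print(continuous_arg(samples) / math.pi)   # close to 4, i.e. Θ went from 0 to 4π
```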

OK. This has been a very lengthy explanation but so I just wanted to make sure you got it. The horizontal plane is the z plane, so that’s all the points z = x + iy = reiθ, and so that’s the domain of the log z function. And then we have the image of all these points z under the log z function, i.e. the points w = ln r + iΘ right above or right below the z points on the horizontal plane through the origin.

Fine. But so how does this ‘solve’ the problem of multiple-valuedness, apart from ‘illustrating’ it? Well… From the title of this post, you’ll have inferred – and rightly so – that the spiral surface which we have just constructed is one of these so-called Riemann surfaces.

We may look at this Riemann surface as just another complex surface because, just like the complex plane, it is a two-dimensional manifold. Indeed, even if we have represented it in 3D, it is not all that different from a sphere as a non-Euclidean two-dimensional surface: we only need two real numbers (r and Θ) to identify any point on this surface and so it’s two-dimensional only indeed (although it has more ‘structure’ than the ‘flat’ complex plane we are used to) . It may help to note that there are other surfaces like this, such as the ones below, which are Riemann surfaces for other multiple-valued functions: in this case, the surfaces below are Riemann surfaces for the (complex) square root function (f(z) = z1/2) and the (complex) arcsin(z) function.

Riemann_surface_sqrtRiemann_surface_arcsin

Nice graphs, you’ll say but, again, what is this all about? These graphs surely illustrate the problem of multiple-valuedness but so how do they help to solve it? Well… The trick is to use such Riemann surface as a domain really: now that we’ve got this Riemann surface, we can actually use it as a domain and then log z (or z1/2 or arcsin(z) if we use these other Riemann surfaces) will be a nice single-valued (and analytic) function for all points on that surface. 

Huh? What? […] Hmm… I agree that it looks fishy: we first use the function itself to construct a ‘Riemannian’ surface, and then we use that very same surface as a ‘Riemannian’ domain for the function itself? Well… Yes. As Penrose puts it: “Complex (analytic) functions have a mind of their own, and decide themselves what their domain should be, irrespective of the region of the complex plane which we ourselves may initially have allotted to it. While we may regard the function’s domain to be represented by the Riemann surface associated with the function, the domain is not given ahead of time: it is the explicit form of the function itself that tells us which Riemann surface the domain actually is.”

I guess we’ll have to judge the value of this bright Riemannian idea (Bernhard Riemann had many bright ideas during his short lifetime it seems) when we understand somewhat better why we’d need these surfaces for solving physics problems. Back to Penrose. 🙂

Post scriptum: Brown and Churchill seem to approach the matter of how to construct a Riemann surface somewhat less rigorously than I do, as they do not provide any 3D illustrations but just talk about joining thin sheets, by cutting them along the positive half of the real axis and then joining the lower edge of the slit of the first sheet to the upper edge of the slit in the second sheet. This should be done, obviously, by making sure there is no (additional) tearing of the original sheet surfaces and all that (so we’re talking ‘continuous deformations’ I guess), but so that could be done, perhaps, without creating that ‘tornado vortex’ around the vertical axis, which you can clearly see in that gray 3D graph above. If we don’t include the ln r term in the definition of the ‘z’ coordinate in the Euclidean three-dimensional (x, y, z) space which the illustrations above are using, then we’d have a spiral ramp without a ‘hole’ in the center. However, that being said, in order to construct a ‘proper’ two-dimensional manifold, we would probably need some kind of function of r in the definition of ‘z’. In fact, we would probably need to write r as some function of Θ in order to make sure we’ve got a proper analytic mapping. I won’t go into detail here (because I don’t know the detail) but leave it to you to check it out on the Web: just check on various parametric representations of spiral ramps: there’s usually (and probably always) a connection between Θ and how, and also how steep, spiral ramps climb around their vertical axis.

Complex functions and power series

As I am going back and forth between this textbook on complex analysis (Brown and Churchill, Complex Variables and Applications) and Roger Penrose’s Road to Reality, I start to wonder how complete Penrose’s ‘Complete Guide to the Laws of the Universe’ actually is or, to be somewhat more precise, how (in)accessible. I guess the advice of an old friend – a professor emeritus in nuclear physics, so he should know! – might have been appropriate. He basically said I should not try to take any shortcuts (because he thinks there aren’t any), and that I should just go for some standard graduate-level courses on physics and math, instead of all these introductory texts that I’ve been trying to read (such as Roger Penrose’s books – but it’s true I’ve tried others too). The advice makes sense, if only because such standard courses are now available on-line. Better still: they are totally free. One good example is the Physics OpenCourseWare (OCW) from MIT: I just went on their website (ocw.mit.edu/courses/physics) and I was truly amazed.

Roger Penrose is not easy to read indeed: he also takes almost 200 pages to explain complex analysis, i.e. as many pages as the Brown and Churchill textbook, but I find the more formal treatment of the subject-matter in the math handbook easier to read than Penrose’s prose. So, while I won’t drop Penrose as yet (this time I really do not want to give up), I will probably (continue to) invest more time in other books – proper textbooks really – than in reading Penrose. In fact, I’ve started to look at Penrose’s prose as a more creative approach, but one that makes sense only after you’ve gone through all of the ‘basics’. And so these ‘basics’ are probably easier to grasp by reading some tried and tested textbooks on math and physics first.

That being said, let me get back to the matter on hand by making good on at least one of the promises I made in the previous posts, and that is to say something more about the Taylor expansion of analytic functions. I wrote in one of these posts that this Taylor expansion is something truly amazing. It is, in my humble view at least. We have all these (complex-valued) functions of complex variables out there – such as ez, log z, zc, complex polynomials, complex trigonometric and hyperbolic functions, and all of the possible combinations of the aforementioned – and so all of these functions can be represented by an (infinite) sum of powers f(z) = Σ an(z-z0)n (with n going from 0 to infinity and with z0 being some arbitrary point in the function’s domain). So that’s the Taylor power series.

All complex functions? Well… Yes. Or no. All analytic functions. I won’t go into the details (if only because it is hard to integrate mathematical formulas with the XML editor I am using here) but so it is an amazing result, which leads to many other amazing results. In fact, the proof of Taylor’s Theorem is, in itself, rather marvelous (yes, I went through it) as it involves other spectacular formulas (such as the Cauchy integral formula). However, I won’t go into this here. Just take it for granted:  Taylor’s Theorem is great stuff!
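Just to make it concrete, here is the theorem at work for the exponential function, whose Maclaurin series converges for every z. (A little Python sketch of mine; the function name and the test point are arbitrary.)

```python
import cmath, math

def taylor_exp(z, n_terms=40):
    # Maclaurin expansion of e^z: the sum of z^n / n! for n = 0, 1, 2, ...
    return sum(z ** n / math.factorial(n) for n in range(n_terms))

z = 1 + 2j
print(abs(taylor_exp(z) - cmath.exp(z)))   # tiny: forty terms are plenty here
```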

But so the function has to be analytic – or well-behaved as I’d say. Otherwise we can’t use Taylor’s Theorem and, hence, this power series expansion doesn’t work. So let’s define (complex) analyticity: a function w = f(z) = f(x+iy) = u(x, y) + iv(x, y) is analytic (an often-used synonym is holomorphic) if its partial derivatives ux, uy, vx and vy exist, are continuous, and respect the so-called Cauchy-Riemann equations: ux = vy and uy = -vx.

These conditions are restrictive (much more restrictive than the conditions for differentiability of real-valued functions). Indeed, there are many complex functions which look good at first sight – if only because there’s no problem whatsoever with their real-valued components u(x,y) and v(x,y) in real analysis/calculus – but which do not satisfy these Cauchy-Riemann conditions. Hence, they are not ‘well-behaved’ in the complex space (in Penrose’s words: they do not conform to the ‘Eulerian notion’ of a function), and so they are of relatively little use – for solving complex problems that is!

A function such as f(z) = 2x + ixy2 is an example: there are no complex numbers for which the Cauchy-Riemann conditions hold (check it out: the Cauchy-Riemann conditions amount to xy = 1 and y = 0, and these two equations contradict each other). Hence, we can’t do much with this function really. For other functions, such as x2 + iy2, the Cauchy-Riemann conditions are only satisfied in a very limited subset of the function’s domain: in this particular case, the Cauchy-Riemann conditions only hold when y = x. We also have functions for which the Cauchy-Riemann conditions hold everywhere except in one or more singularities. The very simple function f(z) = 1/z is an example of this: it is easy to see we have a problem when z = 0, because the function is not defined there.
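The Cauchy-Riemann conditions are easy to check symbolically. Below is a small sketch using the sympy library (the helper name cauchy_riemann is mine): it returns the two expressions ux − vy and uy + vx, which must both vanish identically for the function to be analytic.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def cauchy_riemann(u, v):
    """Return the two Cauchy-Riemann conditions u_x = v_y and u_y = -v_x
    as a pair of (simplified) expressions; both must be zero."""
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)),
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)))

# f(z) = 2x + i*x*y^2: the conditions 2 = 2xy and 0 = -y^2 contradict each other
print(cauchy_riemann(2*x, x*y**2))         # (2 - 2*x*y, y**2): never both zero
# f(z) = z^2 = (x^2 - y^2) + i*2xy: both conditions vanish identically
print(cauchy_riemann(x**2 - y**2, 2*x*y))  # (0, 0): analytic everywhere
```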

As for the last category of functions, one would expect there is an easy way out, using limits or something. And there is. Singularities are not a big problem and we can work our way around them. I found out that ‘working our way around them’ usually involves a so-called Laurent series representation of the function, which is a more generalized version of the Taylor expansion, involving not only positive but also negative powers of (z – z0).
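The Laurent coefficients can actually be computed numerically: the coefficient an equals (1/2πi) times the contour integral of f(z)/zn+1 around the singularity. Here is a sketch using only Python’s standard library (the helper name laurent_coeff is mine), applied to f(z) = ez/z2, whose Laurent series around 0 starts with 1/z2 + 1/z + 1/2 + z/6 + …

```python
import cmath

def laurent_coeff(f, n, radius=0.5, samples=2000):
    """Approximate the Laurent coefficient a_n of f around z0 = 0 via
    a_n = (1/2*pi*i) * integral of f(z)/z^(n+1) dz over the circle |z| = radius."""
    total = 0
    for k in range(samples):
        theta = 2 * cmath.pi * k / samples
        z = radius * cmath.exp(1j * theta)
        # dz = i*z*dtheta, so f(z)/z^(n+1) * dz = f(z) * z^(-n) * i * dtheta
        total += f(z) * z**(-n) * 1j * (2 * cmath.pi / samples)
    return total / (2j * cmath.pi)

# f(z) = exp(z)/z^2 has the Laurent series 1/z^2 + 1/z + 1/2 + z/6 + ...
f = lambda z: cmath.exp(z) / z**2
print(laurent_coeff(f, -2))  # ≈ 1
print(laurent_coeff(f, -1))  # ≈ 1 (this one is the residue at z = 0)
print(laurent_coeff(f, 0))   # ≈ 1/2
```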

One of the other things I learned is how to solve contour integrals. Solving contour integrals is the equivalent, in the complex world, of integrating a real-valued function over an interval [a, b] on the real line. Contours are curves in the complex plane. They can be simple and closed (like a circle or an ellipse for instance), and usually they are, but they don’t have to be: they can self-intersect, for example, or they can go around some point or around some other curve more than once (and, yes, that makes a big difference: when you go around twice or more, you’re really dealing with a different curve).
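A contour integral can be approximated by sampling points along the contour. The sketch below (the helper name contour_integral is mine) integrates f(z) = 1/z around the unit circle, and shows that going around twice indeed gives a different answer: the integral doubles.

```python
import cmath

def contour_integral(f, radius=1.0, turns=1, samples=5000):
    """Numerically integrate f along the circle |z| = radius,
    traversed 'turns' times counterclockwise."""
    total = 0
    for k in range(samples):
        theta = 2 * cmath.pi * turns * k / samples
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * cmath.pi * turns / samples)  # dz = i*z*dtheta
        total += f(z) * dz
    return total

once = contour_integral(lambda z: 1 / z, turns=1)
twice = contour_integral(lambda z: 1 / z, turns=2)
print(once)   # ≈ 2*pi*i
print(twice)  # ≈ 4*pi*i: winding twice is really a different curve
```

The value 2πi for a single loop around the origin is exactly what the theory (the Cauchy integral formula) predicts for 1/z.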

But so these things can all be solved relatively easily – everything is relative of course 🙂 – if (and only if) the functions involved are analytic and/or if the singularities involved are isolated. In fact, we can extend the definition of analytic functions somewhat and define meromorphic functions: meromorphic functions are functions that are analytic throughout their domain except for one or more isolated singular points, which are referred to as poles (supposedly because the surface plot of the function’s modulus shoots up to infinity there, like a tent around a pole).

Holomorphic (and meromorphic) functions w = f(z) can be looked at as transformations: they map some domain D in the (complex) z plane to some region (referred to as the image of D) in the (complex) w plane. Because they are holomorphic, they preserve angles – at least at every point where the derivative is non-zero – as illustrated below.

[Illustration: a conformal map, preserving the angles between intersecting curves]

If you have read the first post on this blog, then you have seen this illustration already. Let me therefore present something better. The image below illustrates the function w = f(z) = z2 or, vice versa, the function z = √w = w1/2 (i.e. the square root of w). Indeed, that’s a very well-behaved function: every complex number (including negative real numbers) has two square roots in the complex plane, and so that’s what is shown below.

[Image: Hans Lundmark’s illustration of the mapping w = z2: a portrait in the w plane, appearing twice in the z plane]

Huh? What’s this?

It’s simple: the illustration above uses color (in this case, a simple gray scale only really) to connect the points in the square region of the domain (i.e. the z plane) with an equally square region in the w plane (i.e. the image of the square region in the z plane). You can verify the properties of the z = w1/2 function indeed. At z = i we easily recognize a spot on the right ear of this person: it’s the w = −1 point in the w plane. Now, the same spot is found at z = −i. This reflects the fact that i2 = (−i)2 = −1. Similarly, this guy’s mouth, which represents the region near w = −i, is found near the two square roots of −i in the z plane, which are z = ±(1−i)/√2. In fact, every part of this man’s face is found at two places in the z plane, except for the spot between his eyes, which corresponds to w = 0, and also to z = 0 under this transformation. Finally, you can see that this transformation is holomorphic: all (local) angles are preserved. In that sense, it’s just like a conformal map of the Earth indeed. [Note, however, that I am glossing over the fact that z = w1/2 is multiple-valued: for each value of w, we have two square roots in the z plane. That actually creates a bit of a problem when interpreting the image above. See the post scriptum at the bottom of this post for more text on this.]
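You can check the numbers in this description with a few lines of Python: the principal square root of w = −i is indeed (1−i)/√2, and its opposite is the second root.

```python
import cmath

w = -1j                 # the point near the guy's mouth in the w plane
z1 = cmath.sqrt(w)      # the principal square root: (1 - 1j)/sqrt(2)
z2 = -z1                # the other root: the opposite point in the z plane
print(z1)               # ≈ 0.7071 - 0.7071j
print(z1**2, z2**2)     # both ≈ -1j: squaring either root gives back w
print(1j * 1j, -1j * -1j)  # both i and -i square to -1, as the ear spot shows
```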

[…] OK. This is fun. [And, no, it’s not me: I found this picture on the site of a Swedish guy called Hans Lundmark, and so I give him credit for making complex analysis so much fun: just Google him to find out more.] However, let’s get somewhat more serious again and ask ourselves why we’d need holomorphism.

Well… To be honest, I am not quite sure because I haven’t gone through the rest of the course material yet – or through all these other chapters in Penrose’s book (I’ve done 10 now, so there’s 24 left). That being said, I do note that, besides all of the niceties I described above (like easy solutions for contour integrals), it is also ‘nice’ that the real and imaginary parts of an analytic function automatically satisfy the Laplace equation.

Huh? Yes. Come on! I am sure you have heard about the Laplace equation in college: it is that partial differential equation which we encounter in most physics problems. In two dimensions (i.e. in the complex plane), it’s the condition that ∂2f/∂x2 + ∂2f/∂y2 equals zero. It is a condition which pops up in electrostatics, fluid dynamics and many other areas of physical research, and I am sure you’ve seen simple examples of it.
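This is easy to verify with sympy: take an analytic function, say f(z) = z3, split it into its real and imaginary parts, and check that the Laplacian of each part vanishes.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

# An analytic function: f(z) = z^3, expanded into real and imaginary parts
f = sp.expand(z**3)
u, v = sp.re(f), sp.im(f)   # u = x^3 - 3*x*y^2, v = 3*x^2*y - y^3

# The two-dimensional Laplacian: g_xx + g_yy
laplacian = lambda g: sp.simplify(sp.diff(g, x, 2) + sp.diff(g, y, 2))
print(laplacian(u), laplacian(v))  # 0 0: both parts are harmonic
```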

So, this fact alone (i.e. the fact that analytic functions pop up everywhere in physics) should be justification enough in itself, I guess. Indeed, the first nine chapters of Brown and Churchill’s course are only there because of the last three, which focus on applications of complex analysis in physics. But is there anything more to it?

Of course there is. Roger Penrose would not dedicate more than 200 pages to all of the above if it were not for more serious stuff than some college-level problems in physics, or to explain fluid dynamics or electrostatics. Indeed, after explaining why hypercomplex numbers (such as quaternions) are less useful than one might expect (Chapter 11 of his Road to Reality is about hypercomplex numbers and why they are useful/not useful), he jumps straight into the study of higher-dimensional manifolds (Chapter 12) and symmetry groups (Chapter 13). Now I don’t understand anything of that, as yet that is, but I sure do understand I’ll need to work my way through it if I ever want to understand what follows after: spacetime and Minkowskian geometry, quantum algebra and quantum field theory, and then the truly exotic stuff, such as supersymmetry and string theory. [By the way, from what I just gathered from the Internet, string theory has not been affected by the experimental confirmation of the existence of the Higgs particle, as it is said to be compatible with the so-called Standard Model.]

So, onwards we go! I’ll keep you posted. However, as I look at that (long) list of MIT courses, it may take some time before you hear from me again. 🙂

Post scriptum:

The nice picture of this Swedish guy is also useful to illustrate the issue of multiple-valuedness, which is an issue that pops up almost everywhere when you’re dealing with complex functions. Indeed, if we write w in its polar form w = reiθ, then its square root can be written as z = w1/2 = (√r)ei(θ/2+kπ), with k equal to either 0 or 1. So we have two square roots indeed for each w: each root has a length (i.e. its modulus or absolute value) equal to √r (i.e. the positive square root of r), but their arguments are θ/2 and θ/2 + π respectively, and so that’s not the same. It means that, if z is a square root of some w in the w plane, then −z will also be a square root of w. Indeed, if the argument of z is equal to θ/2, then the argument of −z will be θ/2 + π = θ/2 + π − 2π = θ/2 − π (we just rotate the vector by 180 degrees, which corresponds to a reflection through the origin). It means that, as we let the vector w = reiθ move around the origin – so if we let θ make a full circle starting from, let’s say, −π/2 (take the value w = −i for instance, i.e. near the guy’s mouth) – then the argument of the image of w will only go from (1/2)(−π/2) = −π/4 to (1/2)(−π/2 + 2π) = 3π/4. These two angles, i.e. −π/4 and 3π/4, correspond to the diagonal y = −x in the complex plane, and you can see that, as we go from −π/4 to 3π/4 in the z plane, the image over this 180 degree swoop does cover every feature of this guy’s face – and here I mean not half of the guy’s face, but all of it. Continuing in the same direction (i.e. counterclockwise) from 3π/4 back to −π/4 just repeats the image. I will leave it to you to find out what happens with the angles on the two symmetry axes (y = x and y = −x).
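Here is a quick numerical check of that 180-degree swoop: let θ run from −π/2 once around the unit circle and look at the argument of the continuously-chosen square root e^(iθ/2).

```python
import cmath
import math

# Let w = e^(i*theta) make one full counterclockwise turn, starting at w = -i
# (theta = -pi/2). The continuously-varying square root z = e^(i*theta/2)
# then sweeps out only half a turn in the z plane.
start_theta = -math.pi / 2
end_theta = start_theta + 2 * math.pi

z_start = cmath.exp(1j * start_theta / 2)
z_end = cmath.exp(1j * end_theta / 2)

print(cmath.phase(z_start))  # -pi/4 ≈ -0.7854: where the image starts
print(cmath.phase(z_end))    # 3*pi/4 ≈ 2.3562: only half a turn later
print(abs(z_end + z_start))  # ≈ 0: the endpoints are opposite points (y = -x)
```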