Spaces

The term ‘space’ is all over the place when reading math. There are all kinds of spaces in mathematics and the terminology is quite confusing. Let’s first start with the definition of a space: a space (in mathematics) is, quite simply, just a set of things (elements or objects) with some kind of structure (and, yes, there is also a definition for the extremely general notion of ‘structure’: a structure on a set consists of ‘additional mathematical objects’ that attach or relate to the set in some manner – I am not sure that helps, but there you go).

The elements of the set can be anything. From what I read, I understand that a topological space might be the most general notion of a mathematical space. The Wikipedia article on it defines a topological space as “a set of points, along with a set of neighborhoods for each point, that satisfies a set of axioms relating points and neighborhoods.” It also states that “other spaces, such as manifolds and metric spaces, are (nothing but) specializations of topological spaces with extra structures or constraints.” However, the symbolism involved in explaining the concept is complex and probably prevents me from understanding the finer points. I guess I’d need to study topology for that – but I am only doing a course in complex analysis right now. 🙂

Let’s go to something more familiar: metric spaces. A metric space is a set where a notion of distance (i.e. a metric) between elements of the set is defined. That makes sense, doesn’t it?
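Just to make that concrete, here’s a minimal sketch of my own (Python with numpy – not something from any course) showing the usual Euclidean metric, with a spot-check of the metric axioms on a few random points:

```python
import numpy as np

def d(p, q):
    """Euclidean metric: the distance between two points."""
    return np.linalg.norm(np.asarray(p) - np.asarray(q))

rng = np.random.default_rng(0)
for _ in range(5):
    p, q, r = rng.normal(size=(3, 2))        # three random points in the plane
    assert d(p, q) >= 0                      # non-negativity
    assert np.isclose(d(p, q), d(q, p))      # symmetry
    assert d(p, r) <= d(p, q) + d(q, r) + 1e-12  # triangle inequality
print("looks like a metric indeed")
```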

We have Euclidean and non-Euclidean metric spaces. We all know Euclidean spaces, as that is what we use for the simple algebra and calculus we all had to learn as teenagers. They can have two, three or more dimensions, but there is only one Euclidean space in any dimension (a line, the plane or, more generally, the Cartesian multidimensional coordinate system). In Euclidean geometry, we have the parallel postulate: within a two-dimensional plane, for any given line L and a point p which is not on that line, there is only one line through that point – the parallel line – which does not intersect with the given line. Euclidean geometry is flat – well… Kind of.

Non-Euclidean metric spaces are a study object in their own right but – to put it simply – in non-Euclidean geometry, we do not have the parallel postulate. If we do away with that, then there are two broad possibilities: either we have more than one parallel line or, else, we have none. Let’s start with the latter, i.e. no parallels. The most often quoted example of this is the sphere. A sphere is a two-dimensional surface on which ‘lines’ are actually great circles, i.e. circles which divide the sphere into two equal hemispheres, like the equator or the meridians through the poles. Arcs of great circles are the shortest paths between two points, and so that corresponds to the definition of a line on a sphere indeed. Now, any ‘line’ (any geodesic, that is) through a point p off a ‘line’ (geodesic) L will intersect with L, and so there are no parallel lines on a sphere – at least not as per the mathematical definition of a parallel line. Indeed, parallel lines do not intersect. I have to note that it is all a bit confusing because the so-called parallels of latitude on a globe are small circles, not great circles, and, hence, they are not the equivalent of a line in spherical geometry. In short, the parallels of latitude are not parallel lines – in the mathematical sense of the word, that is. Does that make sense? For me it makes sense enough, so I guess I should move on.

Other Euclidean geometric facts, such as the angles of a triangle summing to 180°, cannot be observed on a sphere either: the angles of a triangle on a sphere add up to more than 180° (for the big triangle illustrated below, they add up to 90 + 90 + 50 = 230°). That’s typical of elliptic (or Riemannian) geometry in general and so, yes, the sphere is an example of it.

[Figure: triangles on a sphere (spherical geometry)]

Of course, it is probably useful to remind ourselves that a sphere is a two-dimensional surface in elliptic geometry, even if we’re visualizing it in a three-dimensional space when we’re discussing its properties. Think about the flatlander walking on the surface of the globe: for him, the sphere is two-dimensional indeed. It’s not us – our world is not flat: we think in 3D – but the flatlander who is living in a spherically geometric world. It’s probably useful to introduce the term ‘manifold’ here. Spheres – and other surfaces, like the saddle-shaped surfaces we will introduce next – are (two-dimensional) manifolds. Now what’s that? I won’t go into the etymology of the term because that doesn’t help, I feel: apparently, it has nothing to do with the verb to fold, so it’s not something with many folds or so – although a two-dimensional manifold can look like something folded. A manifold is, quite simply, a topological space that near each point resembles Euclidean space, so we can define a metric on it indeed and do all kinds of things with that metric – locally, that is. If the flatlander does these things – like measuring angles and lengths and what have you – close enough to where he is, then he won’t notice he’s living in a non-Euclidean space. That ‘fact’ is also shown above: the angles of a small (i.e. a local) triangle do add up to 180° – approximately, that is.
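Just to see that ‘local flatness’ at work, here is a small numerical sketch of my own (Python with numpy) that computes the angles of a spherical triangle from its vertices: for the big ‘octant’ triangle (the North Pole plus two points on the equator, 90° of longitude apart) the angles add up to 270°, while for a tiny triangle the sum is back to (almost) 180°:

```python
import numpy as np

def angle_sum(A, B, C):
    """Sum of the interior angles of the spherical triangle ABC (unit sphere)."""
    def corner(P, Q, R):
        # tangent vectors at P towards Q and R, then the angle between them
        tQ = Q - np.dot(P, Q) * P
        tR = R - np.dot(P, R) * P
        cosang = np.dot(tQ, tR) / (np.linalg.norm(tQ) * np.linalg.norm(tR))
        return np.arccos(np.clip(cosang, -1.0, 1.0))
    return corner(A, B, C) + corner(B, C, A) + corner(C, A, B)

def on_sphere(lat, lon):
    lat, lon = np.radians([lat, lon])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

# big 'octant' triangle: the pole plus two equator points -> 270 degrees
big = angle_sum(on_sphere(90, 0), on_sphere(0, 0), on_sphere(0, 90))
# tiny triangle -> very nearly 180 degrees
tiny = angle_sum(on_sphere(1, 0), on_sphere(0, 0), on_sphere(0, 1))
print(np.degrees(big), np.degrees(tiny))  # ~270.0 and ~180.0...
```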

Fine. Let’s move on again.

The other type of non-Euclidean geometry is hyperbolic or Lobachevskian geometry. Hyperbolic geometry is the geometry of saddle-shaped surfaces. Many lines (in fact, an infinite number of them) can be drawn parallel to a given line through a given point (the illustration below shows just one), and the angles of a triangle add up to less than 180°.

[Figure: a hyperbolic triangle on a saddle-shaped surface]

OK… Enough about metric spaces perhaps – except for noting that, when physicists talk about curved space, they obviously mean that the space we are living in (i.e. the universe) is non-Euclidean: gravity curves it. And let’s add one or two other points as well. Anyone who has read something about Einstein’s special relativity theory will remember that the mathematician Hermann Minkowski added a time dimension to the three ordinary dimensions of space, creating a so-called Minkowski space, which is actually four-dimensional spacetime. So what’s that in mathematical terms? The ‘points’ in Minkowski’s four-dimensional spacetime are referred to as ‘events’. They are also referred to as four-vectors. The important thing to note here is that it’s not a Euclidean space: it is pseudo-Euclidean. Huh?
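‘Pseudo-Euclidean’ refers to the fact that the ‘distance’ between two events – the spacetime interval – is not positive-definite: the time term enters with the opposite sign. A quick sketch of my own (Python, using the (-,+,+,+) sign convention and units in which c = 1 – my choice of convention, not anything from the book):

```python
import numpy as np

def interval2(event_a, event_b):
    """Squared spacetime interval between two events (t, x, y, z), with c = 1."""
    dt, dx, dy, dz = np.asarray(event_b) - np.asarray(event_a)
    return -dt**2 + dx**2 + dy**2 + dz**2  # signature (-, +, +, +)

origin = (0, 0, 0, 0)
print(interval2(origin, (1, 0.5, 0, 0)))  # negative: time-like separation
print(interval2(origin, (1, 2.0, 0, 0)))  # positive: space-like separation
print(interval2(origin, (1, 1.0, 0, 0)))  # zero: light-like (on the light cone)
```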

Let’s skip that for now and not make it any more complicated than it already is. Let us just note that the Minkowski space is not only a metric space (and a manifold obviously, which resembles Euclidean space locally, which is why we don’t really notice its curvature): a Minkowski space is also a vector space. So what’s a vector space? The formal definition is clear: a vector space V over a (scalar) field F is a set of elements (vectors) together with two binary operations: addition and scalar multiplication. So we can add the vectors (that is, the elements of the vector space) and scale them with a scalar, i.e. an element of the field F (that’s actually where the word ‘scalar’ comes from: something that scales vectors). The addition and scalar multiplication operations need to satisfy a number of axioms, but these are quite straightforward (like associativity, commutativity, distributivity, the existence of an additive identity and additive inverses, etcetera). The scalars are usually real numbers: in that case, the field F is equal to R, the set of real numbers (sorry, I can’t use blackboard bold here for symbols, so I am just using a bold capital R for the set of the real numbers), and the vector space is referred to as a real vector space. However, they can also be complex numbers: in that case, the field F is equal to C, the set of complex numbers, and the vector space is, obviously, referred to as a complex vector space.

N-tuples of elements of F itself, (a_1, a_2, …, a_n), are a very straightforward example of a vector space: Euclidean spaces can be denoted as R^1 (the real line), R^2 (the Euclidean plane), R^3 (Euclidean three-dimensional space), etcetera. Another example is the set C of complex numbers: that’s a vector space too, and not only over the real numbers (F = R), but also over itself (F = C). OK. Fair enough, I’d say. What’s next?

It becomes somewhat more complicated when the ‘vectors’ (i.e. the elements in the vector space) are mathematical functions: indeed, a space can consist of functions (a function is just another object, isn’t it?), and function spaces can also be vector spaces because we can perform (pointwise) addition and scalar multiplication on functions. Vector spaces can consist of other mathematical objects too. In short, the notion of a ‘vector’ as some kind of arrow defined by some point in space does not cover the true mathematical notion of a vector, which is very general (an element of a ‘vector space’ as defined above). The same goes for fields: we usually think a field consists of numbers (real, complex, or whatever other number one can think of), but a field – in the physicist’s sense of the word – can also consist of vectors. In fact, vector fields are as common as scalar fields in physics (think of vectors representing the speed and direction of a moving fluid, for example, as opposed to its local temperature – which is a scalar quantity).
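As a quick illustration of functions behaving like vectors, here is a sketch of my own (plain Python) of pointwise addition and scalar multiplication on functions:

```python
import math

def add(f, g):
    """Pointwise addition: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Scalar multiplication: (c * f)(x) = c * f(x)."""
    return lambda x: c * f(x)

h = add(math.sin, scale(0.5, math.cos))   # the 'vector' sin + 0.5*cos
print(h(1.0))                             # == math.sin(1.0) + 0.5 * math.cos(1.0)
```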

Quite confusing, isn’t it? Can we have a vector space over a vector field?

Tricky question. From what I read so far, I am not sure actually. I guess the answer is both yes and no. The second binary operation on the vector space is scalar multiplication, so the field F needs to be a field of scalars – not a vector field. Indeed, the formal definition of a vector space is quite strict on this point: F has to be a field of scalars. But then complex numbers z can be looked at not only as scalars in their own right (that’s the focus of the complex analysis course that I am currently reading) but also as vectors in their own right. OK, we’re talking a special kind of vector here (a vector from the origin to the point z), but so what? And we already noted that C is a vector space over itself, didn’t we?

So how do we define scalar multiplication in this case? Can we use the ‘scalar’ product (a.k.a. the dot product, or the inner product) between two vectors here (as opposed to the cross product, a.k.a. the vector product tout court)? I am not sure. Not at all, actually. Perhaps it is the usual product between two complex numbers – (x+iy)(u+iv) = (xu-yv) + i(xv+yu) – which, unlike the standard ‘scalar’ product between two vectors, returns another complex number as a result (as opposed to the dot product (x,y)·(u,v), which is equal to the real number xu + yv). The Brown & Churchill course on complex analysis which I am reading just notes that “this product [of two complex numbers] is, evidently, neither the scalar nor the vector product used in ordinary vector analysis”. However, because the ‘scalar’ product returns a single (real) number as a result, while the product of two complex numbers is – quite obviously – a complex number in itself, I must assume it’s the above-mentioned ‘usual product between two complex numbers’ that is to be used for ‘scalar’ multiplication in this case (i.e. the case of defining C as a vector space over itself). In addition, the geometric interpretation of multiplying two complex numbers shows that it actually is a matter of scaling: the ‘length’ of the complex number zw (i.e. its absolute value) is equal to the product of the absolute values of z and w respectively and, as for its argument (i.e. the angle with the real axis), the argument of zw is equal to the sum of the arguments of z and w respectively. In short, this looks very much like what scalar multiplication is supposed to do.
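A quick numerical check of that geometric interpretation (plain Python, using its built-in complex numbers and the cmath module):

```python
import cmath

z, w = 1 + 2j, 3 - 1j
zw = z * w  # (x+iy)(u+iv) = (xu - yv) + i(xv + yu)

print(abs(zw), abs(z) * abs(w))                          # the moduli multiply
print(cmath.phase(zw), cmath.phase(z) + cmath.phase(w))  # the arguments add (mod 2*pi)
```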

Let’s see if we can confirm the above (i.e. C being a vector space over itself, with scalar multiplication being defined as the product of two complex numbers) at some later point in time. For the moment, I think I’ve had quite enough. Indeed, while I note there are many other unexplored spaces, I will not attempt to penetrate these for now. For example, I note the frequent reference to Hilbert spaces but, from what I understand, a Hilbert space is also some kind of vector space, but then with even more additional structure and/or a more general definition of the ‘objects’ which make up its elements. Its definition involves Cauchy sequences of vectors – and that’s a concept I haven’t studied as yet. To make things even more complicated, there are also Banach spaces – and lots of other things really. So I will need to look at that but, again, for the moment I’ll just leave the matter alone and get back into the nitty-gritty of complex analysis. Doing so will probably clarify all of the more subtle points mentioned above.

[…] Wow! It’s been quite a journey so far, and I am still in the first chapters of the course only!

PS: I did look it up just now (i.e. a few days later than when I wrote the text above) and my inference is correct: C is a complex vector space over itself, and the formula to be used for scalar multiplication is the standard formula for multiplying complex numbers: (x+iy)(u+iv) = (xu-yv) + i(xv+yu). C^2 or, more generally, C^n are other examples of complex vector spaces (i.e. vector spaces over C).


Euler’s formula

I went trekking (to the Annapurna Base Camp this time) and, hence, left the math and physics books alone for a week or two. When I came back, it was like I had forgotten everything, and I wasn’t able to re-do the exercises. Back to the basics of complex numbers once again. Let’s start with Euler’s formula:

e^(ix) = cos(x) + i·sin(x)

In his Lectures on Physics, Richard Feynman calls this equation ‘one of the most remarkable, almost astounding, formulas in all of mathematics’, so it’s probably no wonder I find it intriguing and, indeed, difficult to grasp. Let’s look at it. So we’ve got the real (but irrational) number e in it. That’s a fascinating number in itself because it pops up in different mathematical expressions which, at first sight, have nothing in common with each other. For example, e can be defined as the sum of the infinite series e = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + … etcetera (n! stands for the factorial of n in this formula), but one can also define it as that unique positive real number for which d(e^t)/dt = e^t (in other words, as the base of an exponential function which is its own derivative). And, last but not least, there are also some expressions involving limits which can be used to define e. Where to start? More importantly, what’s the relation between all these expressions and Euler’s formula?

First, we should note that e^(ix) is not just any number: it is a complex number – as opposed to the simpler expression e^x, which denotes the real exponential function (as opposed to the complex exponential function e^z). Moreover, we should note that e^(ix) is a complex number on the unit circle. So, using polar coordinates, we should say that e^(ix) is a complex number with modulus 1 (the modulus is the absolute value of the complex number (i.e. the distance from 0 to the point we are looking at) or, alternatively, we could say it is the magnitude of the vector defined by the point we are looking at) and argument x (the argument is the angle (expressed in radians) between the positive real axis and the line from 0 to the point we are looking at).

Now, it is self-evident that cos(x) + i·sin(x) represents exactly the same thing: a point on the unit circle defined by the angle x. But that doesn’t prove Euler’s formula: it only illustrates it. So let’s go to one or the other proof of the formula to try to understand it somewhat better. I’ll refer to Wikipedia for the proofs in extenso, but let me just summarize them. The Wikipedia article (as I looked at it today) gives three proofs.

The first proof uses the power series expansion (yes, the Taylor/Maclaurin series indeed – more about that later) for the exponential function: e^(ix) = 1 + ix + (ix)^2/2! + (ix)^3/3! + … etcetera. We then substitute using i^2 = -1, i^3 = -i etcetera and so, when we then re-arrange the terms, we find the Maclaurin series for the cos(x) and sin(x) functions indeed. I will come back to these power series in another post.

The second proof uses one of the limit definitions for e^x but applies it to the complex exponential function. Indeed, one can write e^z (with z = x+iy) as e^z = lim (1 + z/n)^n for n going to infinity. The proof substitutes ix for z and then calculates the limit for very large (or infinite) n indeed. This proof is less obvious than it seems because we are dealing with limits of complex numbers here, and so one has to take into account issues of convergence and all that.
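The limit itself is easy to check numerically, though – a quick sketch of my own (Python):

```python
import cmath

x = 1.0  # any real number will do
for n in (10, 1_000, 100_000):
    approx = (1 + 1j * x / n) ** n   # the limit definition, truncated at n
    print(n, approx)
print("target:", cmath.exp(1j * x))  # cos(1) + i*sin(1)
```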

The third proof also looks complicated but, in fact, is probably the most intuitive of the three proofs given because it uses the derivative definition of e. To be more precise, it takes the derivative of both sides of Euler’s formula using the polar coordinates expression for complex numbers. Indeed, e^(ix) is a complex number and, hence, can be written as some number z = r(cos θ + i·sin θ), and so the question to solve here is: what are r and θ? We need to write these two values as a function of x. How do we do that? Well… If we take the derivative of both sides, we get d(e^(ix))/dx = i·e^(ix) = (cos θ + i·sin θ)·dr/dx + r·[d(cos θ + i·sin θ)/dθ]·dθ/dx. That’s just the chain rule for derivatives of course. Now, writing it all out and equating the real and imaginary parts on both sides of the expression yields the following: dr/dx = 0 and dθ/dx = 1. In addition, we must have that, for x = 0, e^(i·0) = e^0 = 1, so we have r(0) = 1 (the modulus of the complex number (1,0) is one) and θ(0) = 0 (the argument of (1,0) is zero). It follows that the functions r and θ are equal to r = 1 and θ = x, which proves the formula.

While these proofs are (relatively) easy to understand, the formula remains weird, as evidenced also by its special cases, like e^(i·0) = e^(i·2π) = 1 = -e^(iπ) = -e^(-iπ) or, equivalently, e^(iπ) + 1 = 0, which is a formula that combines the five most basic quantities in mathematics: 0, 1, i, e and π. It is an amazing formula because we have two irrational numbers here, e and π, which have definitions that do not refer to each other at all (last time I checked, π was still being defined as the simple ratio of a circle’s circumference to its diameter, while the various definitions of e have nothing to do with circles), and so we combine these two seemingly unrelated numbers, also inserting the imaginary unit i (using iπ as an exponent for e), and we get minus 1 as a result (e^(iπ) = -1). Amazing indeed, isn’t it?

[…] Well… I’d say at least as amazing as the Taylor or Maclaurin expansion of a function – but I’ll save my thoughts on these for another post (even if I am using the results of these expansions in this post). In my view, what Euler’s formula shows is the amazing power of mathematical notation really – and the creativity behind it. Indeed, let’s look at what we’re doing with complex numbers: we start from one or two definitions only and suddenly all kinds of wonderful stuff starts popping up. It goes more or less like this really:

We start off with these familiar x and y coordinates of points in a plane. Now we call the x-axis the real axis and then, just to distinguish them from the real numbers, we call the numbers on the y-axis imaginary numbers. Again, it is just to distinguish them from the real numbers because, in fact, imaginary numbers are not imaginary at all: they are as real as the real numbers – or perhaps we should say that the real numbers are as imaginary as the imaginary numbers because, when everything is said and done, the real numbers are mental constructs as well, aren’t they? Imaginary numbers just happen to lie on another line, perpendicular to our so-called real line, and so that’s why we add a little symbol i (the so-called imaginary unit) when we write them down. So we write 1i (or i tout court), 2i, 3i etcetera, or i/2 or whatever (it doesn’t matter if we write i before the real number or after – as long as we’re consistent).

Then we combine these two numbers – the real and imaginary numbers – to form a so-called complex number, which is nothing but a point (x, y) in this Cartesian plane. Indeed, while complex numbers are somewhat more complex than the numbers we’re used to in daily life, they are not out of this world I’d say: they’re just points in space, and so we can also represent them as vectors (‘arrows’) from the origin to (x, y).

But so this is what we are doing really: we combine the real and imaginary numbers by using the very familiar plus (+) sign, so we write z = x + iy. Now that is actually where the magic starts: we are not adding the same things here, like we would do when we are counting apples or so, or when we are adding integers or rational or real numbers in general. No, we are adding two different things here – real and imaginary numbers – which, in fact, we cannot really add. Indeed, your mommy told you that you cannot compare apples with oranges, didn’t she? Well… That’s exactly what we do here really, and so we will keep these real and imaginary numbers separate in our calculations indeed: we will add the real parts of complex numbers with each other only, and the imaginary parts also with each other only.

Addition is quite straightforward: we just add the two vectors. Multiplication is somewhat more tricky but (geometrically) easy to interpret as well: the product of two complex numbers is a vector with a length equal to the product of the lengths of the two vectors we are multiplying (i.e. the two complex numbers which make up the product), and its angle with the real axis is the sum of the angles of the two original vectors. From this definition, many things follow, all equally amazing indeed, but one of these amazing facts is that i^2 = -1, i^3 = -i, i^4 = 1, i^5 = i, etcetera. Indeed: multiplying a complex number z = x + iy = (x, y) with the imaginary unit i amounts to rotating it 90° (counterclockwise) about the origin. So we are not defining i^2 as being equal to minus 1 (many textbooks treat this equality as a definition indeed): it just comes as a fact which we can derive from the earlier definition of the complex product. Sweet, isn’t it?
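A two-line check of that rotation picture (Python, my own illustration): multiplying by i four times brings you back to where you started.

```python
z = 3 + 1j
for step in range(4):
    z = z * 1j  # each multiplication by i rotates z by 90 degrees counterclockwise
    print(z)    # -1+3j -> -3-1j -> 1-3j -> 3+1j (back to the start)
```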

So we have addition and multiplication now. We want to do much more of course. After defining addition and multiplication, we want to do complex powers, and so it’s here that this business with e pops up.

We first need to remind ourselves of the simple fact that the number e is just a real number: it’s equal to 2.718281828459045235360287471… etcetera. We have to write ‘etcetera’ because e is an irrational number, which – whatever the term ‘irrational’ may suggest in everyday language – simply means that e is not a fraction of any integer numbers (so irrational means ‘not rational’). e is also a transcendental number – a word which suggests all kinds of mystical properties but which, in mathematics, only means we cannot write it as a root of some polynomial (a polynomial with rational coefficients, that is). So it’s a weird number. That being said, it is also the so-called ‘natural’ base for the exponential function. Huh? Why would mathematicians take such a strange number as a so-called ‘natural’ base? They must be irrational, no? Well… No. If we take e as the base for the exponential function e^x (so that’s just this real (but irrational) number e to the power x, with x being the variable running along the x-axis: hence, we have a function here which takes a value from the set of real numbers and which yields some other real number), then we have a function which is its own derivative: d(e^x)/dx = e^x. It is also the natural base for the logarithmic function and, as mentioned above, it kind of ‘pops up’ – quite ‘naturally’ indeed, I’d say – in many other expressions, such as compound interest calculations, for example, or the general exponential function a^x = e^(x·ln a). In other words, we need the exp(x) and ln(x) functions to define powers of real numbers in general. So that’s why mathematicians call it ‘natural’.

While the example of compound interest calculations does not sound very exciting, all these formulas with e and exponential functions and what have you did inspire all these 18th century mathematicians – like Euler – who were in search of a logical definition of complex powers.

Let’s state the problem once again: we can do addition and multiplication of complex numbers, but the question is how to do complex powers. When trying to figure that one out, Euler obviously wanted to preserve the usual properties of powers, like a^x·a^y = a^(x+y) and, effectively, this property of the so-called ‘natural’ exponential function that d(e^x)/dx = e^x. In other words, we also want the complex exponential function to be its own derivative, so d(e^z)/dz should give us e^z once again.

Now, while Euler was thinking of that (and of many other things too, of course), he was well aware of the fact that you can expand e^x into that power series which I mentioned above: e^x = 1 + x + x^2/2! + x^3/3! + … etcetera. So Euler just sat down, substituted the real number x with the imaginary number ix and looked at it: e^(ix) = 1 + ix + (ix)^2/2! + (ix)^3/3! + … etcetera. Now lo and behold! Taking into account that i^2 = -1, i^3 = -i, i^4 = 1, i^5 = i, etcetera, we can put that in and re-arrange the terms indeed, and so Euler found that this equation becomes e^(ix) = (1 - x^2/2! + x^4/4! - x^6/6! + …) + i(x - x^3/3! + x^5/5! - …). Now these two terms do correspond to the Maclaurin series for the cosine and sine function respectively, so there he had it: e^(ix) = cos(x) + i·sin(x). His formula: Euler’s formula!
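That rearrangement is easy to check numerically – a small sketch of my own (Python) summing the first twenty terms of the three series:

```python
import cmath, math

x = 0.7
N = 20  # number of terms; plenty for convergence at this x

exp_series = sum((1j * x) ** n / math.factorial(n) for n in range(N))
cos_series = sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(N))
sin_series = sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(N))

print(exp_series)                    # ~ cos(0.7) + i*sin(0.7)
print(cos_series + 1j * sin_series)  # the same number, via the rearranged series
print(cmath.exp(1j * x))             # the 'official' answer
```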

From there, there was only one more step to take, and that was to write e^z = e^(x+iy) as e^x·e^(iy), and so there we have our definition of a complex power: it is a product of two factors – e^x and e^(iy) – both of which we have effectively defined now. Note that the e^x factor is just a real number, even if we write it as e^x: it acts as a sort of scaling factor for e^(iy) which, you will remember (as we pointed out above already), is a point on the unit circle. More generally, it can be shown that e^x is the absolute value of e^z (or the modulus or length or magnitude of the vector – whatever term you prefer: they all refer to the same thing), while y is the argument of the complex number e^z (i.e. the angle of the vector e^z with the real axis). [And, yes, for those who would still harbor some doubts here: e^z is just another complex number and, hence, a two-dimensional vector, i.e. just a point in the Cartesian plane, so we have a function which starts from the set of complex numbers (it takes z as input) and which yields another complex number.]
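Again, this is easy to verify numerically (Python):

```python
import cmath, math

z = 0.5 + 2j  # so x = 0.5 and y = 2
w = cmath.exp(z)

print(abs(w), math.exp(0.5))  # |e^z| = e^x
print(cmath.phase(w), 2.0)    # arg(e^z) = y (here y already lies in (-pi, pi])
```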

Of course, you will note that we don’t have something like z^w here, i.e. a complex base (z) with a complex exponent (w), or even a formula for complex powers of real numbers in general, i.e. a formula for a^w with a any real number (so not only e but any real number indeed) and w a complex exponent. However, that’s a problem which can be solved easily by writing z and w in their so-called polar form, so we write z as z = |z|·e^(iθ) = |z|(cos θ + i·sin θ) and w as |w|·e^(iσ) = |w|(cos σ + i·sin σ), and then we can take it further from there. [Note that |z| and |w| represent the modulus (i.e. the length) of z and w respectively, and the angles θ and σ are obviously the arguments of the same z and w respectively.] Of course, if z is a positive real number (so if y = 0 and x > 0), then the angle θ will obviously be zero (i.e. the angle of the positive real axis with itself) and so z will be equal to a real number (i.e. its real part only, as its imaginary part is zero), and then we are back to the case of a real base and a complex exponent. In other words, that covers the a^w case.

[…] Well… Easily? OK. I am simplifying a bit here – as I need to keep the length of this post manageable – but, in fact, it actually really is a matter of using these common properties of powers (such as e^(a+bi)·e^c = e^((a+c)+bi)), and it actually does all work out. And all of this magic did actually start with simply ‘adding’ the so-called ‘real’ numbers x on the x-axis to the so-called ‘imaginary’ numbers on the y-axis. 🙂

Post scriptum:

Penrose’s Road to Reality dedicates a whole chapter to complex exponentiation (Chapter 5). However, the development is not all that simple and straightforward indeed. The first step in the process is to take integer powers – and integer roots – of complex numbers, so that’s z^n for n = 0, ±1, ±2, ±3… etcetera (or z^(1/2), z^(1/3), z^(1/4) if we’re talking integer roots). That’s easy because it can be solved by using the old formula of Abraham de Moivre: (cos θ + i·sin θ)^n = cos(nθ) + i·sin(nθ) (de Moivre penned this down in 1707 already, more than 40 years before Euler looked at the matter). However, going from there to full-blown complex powers is, unfortunately, not so straightforward, as it involves a bit of a detour: we need to work with the inverse of the (complex) exponential function e^z, i.e. the (complex) natural logarithm.
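De Moivre’s formula, at least, is easy to check numerically (Python):

```python
import math

theta, n = 0.4, 7
lhs = (math.cos(theta) + 1j * math.sin(theta)) ** n  # de Moivre, left-hand side
rhs = math.cos(n * theta) + 1j * math.sin(n * theta)
print(lhs, rhs)  # identical, up to floating-point noise
```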

Now that is less easy than it sounds. Indeed, while the definition of a complex logarithm is as straightforward as the definition of real logarithms (ln z is a function for which e^(ln z) = z), the function itself is a bit more… well… complex, I should say. For starters, it is a multiple-valued function: if we write the solution w = ln z as w = u+iv, then it is obvious that e^w will be equal to e^(u+iv) = e^u·e^(iv), and this complex number e^w can then be written in its polar form e^w = r·e^(iθ) with r = e^u and v = θ + 2nπ. Hence, the solution w will look like w = ln r + i(θ + 2nπ) with n = 0, ±1, ±2, ±3 etcetera. In short, we have an infinite number of solutions for w (one for every n we choose) and so we have this problem of multiple-valuedness indeed. We will not dwell on this here (at least not in this post) but simply note that this problem is linked to the properties of the complex exponential function e^z itself. Indeed, the complex exponential function e^z has very different properties than the real exponential function e^x. First, we should note that, unlike e^x (which, as we know, goes from zero at the far end of the negative side of the real axis to infinity as x gets big on the positive side), e^z is a periodic function – so it oscillates and yields the same values after some time – with this ‘after some time’ being the periodicity of the function. Indeed, e^z = e^(z+2πi), and so its period is 2πi (note that this period is an imaginary number – but so it’s a ‘real’ period, if you know what I mean :-)). In addition, and this is also very much unlike the real exponential function e^x, e^z can be negative (as well as assume all kinds of other complex values). For example, e^(iπ) = -1, as we noted above already.
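A quick illustration of this multiple-valuedness (Python): cmath.log returns the principal value, but adding any integer multiple of 2πi gives another perfectly good logarithm of z.

```python
import cmath, math

z = -1 + 1j
principal = cmath.log(z)  # the principal value: ln|z| + i*Arg(z)
for n in (-1, 0, 1, 2):
    w = principal + 2j * math.pi * n  # another branch of ln z
    print(n, w, cmath.exp(w))         # exp(w) gives back z every time
```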

That being said, the problem of multiple-valuedness can be solved through the definition of a principal value of ln z and that, then, leads us to what we want here: a consistent definition of a complex power of a complex base (or the definition of a true complex exponential (and logarithmic) function, in other words). To those who would want to see the details of this (i.e. my imaginary readers :-)), I would say that Penrose’s treatment of the matter in the above-mentioned Chapter 5 of The Road to Reality is rather cryptic – presumably because he has to keep his book to around 1000 pages only (not a lot to explain all of the Laws of the Universe) – and, hence, Brown & Churchill’s course (or whatever other course dealing with complex analysis) probably makes for easier reading.

[As for the problem of multiple-valuedness, we should probably also note the following: when taking the nth root of a complex number (i.e. z^(1/n) with n = 2, 3, etcetera), we also obtain a set of n values c_k (with k = 0, 1, 2,… n-1), rather than one value only. However, once we have one of these values, we have all of them, as we can write these c_k as c_k = r^(1/n)·e^(i(θ/n + 2kπ/n)) (with the original complex number z equal to z = r·e^(iθ)), and so we could also just consider the principal value c_0 and, as such, consider the function as a single-valued one. In short, the problem of multiple-valued functions pops up almost everywhere in the complex space, but it is not really an issue. In fact, we encounter the problem of multiple-valuedness as soon as we extend the exponential function in the space of the real numbers and allow rational and real exponents, instead of positive integers only. For example, the equation x^2 = 4 has two solutions (±2), so there are two candidate values for 4^(1/2) and, hence, multiple values. Another example would be the 4th root of 16: we have four 4th roots of 16: +2, -2 and then the two complex roots +2i and -2i. However, standard practice is that we only take the positive real value into account in order to ensure a ‘well-behaved’ exponential function. Indeed, the standard definition of a real exponential function is b^x = (e^(ln b))^x = e^(x·ln b), and so, if x = 1/n, we’ll only assign the positive real nth root to b^x. Standard practice will also restrict the value of b to a positive real number (b > 0). These conventions not only ensure a positive result but also continuity of the function and, hence, the existence of a derivative which we can then use to do other things. By the way, the definition also shows – once again – why e is such a nice (or ‘natural’) number: we can use it to calculate the value of any exponential function (for any real base b > 0). But we had mentioned that already, and it’s now really time to stop writing. I think the point is clear.]
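For completeness, here is a short sketch of my own (Python) that lists all n nth roots of a complex number using the c_k formula above:

```python
import cmath, math

def nth_roots(z, n):
    """All n distinct nth roots of z: c_k = r^(1/n) * e^(i*(theta + 2*k*pi)/n)."""
    r, theta = abs(z), cmath.phase(z)
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * k * math.pi) / n)
            for k in range(n)]

for c in nth_roots(16, 4):
    print(c, c ** 4)  # 2, 2i, -2, -2i (up to rounding); each one to the 4th gives 16
```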

No royal road to reality

I got stuck in Penrose’s Road to Reality in chapter 5 already. That is not very encouraging – because the book has 34 chapters, and every chapter builds on the previous one, and usually also looks more difficult than the previous one.

In Chapter 5, Penrose introduces complex algebra. As I tried to get through it, I realized I had to do some more self-study. Indeed, while Penrose claims no other books or courses are needed to get through his, I do not find this to be the case. So I bought a fairly standard course in complex analysis (James Ward Brown and Ruel V. Churchill, Complex Variables and Applications) and I’ve done chapters 1 and 2 now. Although these first two chapters do little more than introduce the subject-matter, I find the matter rather difficult and the terminology confusing. Examples:

1. The term ‘scalar’ is used to denote real numbers. So why use the term ‘scalar’ if the word ‘real’ is available as well? And why not use the term ‘real field’ instead of ‘scalar field’? Well… The term ‘real field’ actually means something else. A scalar field associates a (real) number to every point in space. So that’s simple: think of temperature or pressure. The term ‘scalar’ is said to be derived from ‘scaling’: a scalar is that what scales vectors. Indeed, scalar multiplication of a vector and a real number multiplies the magnitude of the vector without changing its direction. So what is a real field then? Well… A (formally) real field is a field that can be extended with a (not necessarily unique) ordering which makes it an ordered field. Does that help? Somewhat I guess. But why the qualifier ‘formally real’? I checked and there is no such thing as an ‘informally real’ field. I guess it’s just to make sure we know what we are talking about, as ‘real’ is a word with many meanings.

2. So what’s a field in mathematics? It is an algebraic structure: a set of ‘things’ (like numbers) with operations defined on it, including the notions of addition, subtraction, multiplication, and division. As mentioned above, we have scalar fields and vector fields. In addition, we also have fields of complex numbers. We also have fields with some less likely candidates for addition and multiplication, such as functions (one can add and multiply functions with each other). In short, anything which satisfies the formal definition of a field – and here I should note that the above definition of a field is not formal – is a field. For example, the set of rational numbers satisfies the definition of a field too. So what is the formal definition? First of all, a field is a ring. Huh? Here we are in this abstract classification of algebraic structures: commutative groups, rings, fields, etcetera (there are also modules – a type of algebraic structure which I had never heard of before). To put it simply – because we have to move on of course – a ring (no one seems to know where that word actually comes from) has addition and multiplication only, while a field has division too. In other words, a ring does not need to have multiplicative inverses. Huh? It’s simple, really: the integers form a ring, but the equation 2x = 1 does not have a solution in integers (x = ½) and, hence, the integers do not form a field. The same example shows why the rational numbers do.

3. But what about a vector field? Can we do division with vectors? Yes, but not by zero – but that is not a problem, as that is understood in the definition of a field (or in the general definition of division, for that matter). In two-dimensional space, we can represent vectors by complex numbers: z = (x,y), and we have a formula for the so-called multiplicative inverse of a complex number: z^(-1) = (x/(x^2+y^2), -y/(x^2+y^2)). OK. That’s easy. Let’s move on to more advanced stuff.
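A one-minute check of that inverse formula (Python):

```python
x, y = 3.0, 4.0
z = complex(x, y)
z_inv = complex(x / (x**2 + y**2), -y / (x**2 + y**2))  # the formula above
print(z * z_inv)  # (1+0j): z times its multiplicative inverse is 1
print(1 / z)      # Python's own division agrees
```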

4. In logic, we have the concept of well-formed formulas (wffs). In math, we have the concept of ‘well-behaved’: we have well-behaved sets, well-behaved spaces and lots of other well-behaved things, including well-behaved functions, which are, of course, those of interest to engineers and scientists (and, hence, in light of the objective of understanding Penrose’s Road to Reality, to me as well). I must admit that I was somewhat surprised to learn that ‘well-behaved’ is one of the very few terms in math that have no formal definition. Wikipedia notes that its definition, in the science of mathematics that is, depends on ‘mathematical interest, fashion, and taste’. Let me quote in full here: “To ensure that an object is ‘well-behaved’ mathematicians introduce further axioms to narrow down the domain of study. This has the benefit of making analysis easier, but cuts down on the generality of any conclusions reached. […] In both pure and applied mathematics (optimization, numerical integration, or mathematical physics, for example), well-behaved means not violating any assumptions needed to successfully apply whatever analysis is being discussed. The opposite case is usually labeled pathological.” Wikipedia also notes that “concepts like non-Euclidean geometry were once considered ill-behaved, but are now common objects of study.”

5. So what is a well-behaved function? There is actually a whole hierarchy, with varying degrees of ‘good’ behavior, so one function can be more ‘well-behaved’ than another. First, we have smooth functions: a smooth function has derivatives of all orders (as for its name, it’s actually well chosen: the graph of a smooth function is indeed, well, smooth). Then we have analytic functions: analytic functions are smooth but, in addition to being smooth, an analytic function is a function that can be locally given by a convergent power series. Huh? Let me try an alternative definition: a function is analytic if and only if its Taylor series about x_0 converges to the function in some neighborhood of x_0, for every x_0 in its domain. That’s not helping much either, is it? Well… Let’s just leave that one for now.

In fact, it may help to note that the authors of the course I am reading (J.W. Brown and R.V. Churchill, Complex Variables and Applications) use the terms analytic, regular and holomorphic interchangeably, and they define an analytic function simply as a function which has a derivative everywhere. While that’s helpful, it’s obviously a bit loose (what’s the thing about the Taylor series?) and so I checked on Wikipedia, which clears up the confusion and also defines the terms ‘holomorphic’ and ‘regular’:

“A holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighborhood of every point in its domain. The existence of a complex derivative in a neighborhood is a very strong condition, for it implies that any holomorphic function is actually infinitely differentiable and equal to its own Taylor series. The term analytic function is often used interchangeably with ‘holomorphic function’ although the word ‘analytic’ is also used in a broader sense to describe any function (real, complex, or of more general type) that can be written as a convergent power series in a neighborhood of each point in its domain. The fact that the class of complex analytic functions coincides with the class of holomorphic functions is a major theorem in complex analysis.”

Wikipedia also adds the following: “Holomorphic functions are also sometimes referred to as regular functions or as conformal maps. A holomorphic function whose domain is the whole complex plane is called an entire function. The phrase ‘holomorphic at a point z_0’ means not just differentiable at z_0, but differentiable everywhere within some neighborhood of z_0 in the complex plane.”

6. What to make of all this? Differentiability is obviously the key and, although there are many similarities between real differentiability and complex differentiability (both are linear and obey the product rule, quotient rule, and chain rule), real-valued functions and complex-valued functions are different animals. What are the conditions for differentiability? For real-valued functions, it is a matter of checking whether or not the limit defining the derivative exists and, of course, a necessary (but not sufficient) condition is continuity of the function.

For complex-valued functions, it is a bit more sophisticated, because we’ve got the so-called Cauchy-Riemann conditions applying here. How does that work? Well… We write f(z) as the sum of two functions: f(z) = u(x,y) + iv(x,y). So the real-valued function u(x,y) yields the real part of f(z), while v(x,y) yields the imaginary part of f(z). The Cauchy-Riemann equations (to be interpreted as conditions, really) are the following: u_x = v_y and u_y = -v_x (note the minus sign in the second equation).

That looks simple enough, doesn’t it? However, as Wikipedia notes (see the quote above), differentiability at a point z_0 is not enough (to ensure the existence of the derivative of f(z) at that point). We need to look at some neighborhood of the point z_0 and see if these first-order derivatives (u_x, u_y, v_x and v_y) exist everywhere in that neighborhood and satisfy these Cauchy-Riemann equations. So we need to look beyond the point z_0 itself when doing our analysis: we need to ‘approach’ it from various directions before making any judgment. I know this sounds like Chinese, but it became clear to me when doing the exercises.
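To make the conditions concrete, here is a small symbolic check of my own (Python with the sympy library) for f(z) = z^2, for which u = x^2 - y^2 and v = 2xy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2   # real part of f(z) = z^2 = (x+iy)^2
v = 2*x*y         # imaginary part

print(sp.diff(u, x) == sp.diff(v, y))   # u_x = v_y  -> True
print(sp.diff(u, y) == -sp.diff(v, x))  # u_y = -v_x -> True
```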

7. OK. Phew! I got this far – but that’s only chapters 1 and 2 of Brown & Churchill’s course! In fact, chapter 2 also includes a few sections on so-called harmonic functions and harmonic conjugates. Let’s first talk about harmonic functions. Harmonic functions are even better behaved than holomorphic or analytic functions. Well… That’s not the right way to put it really. A harmonic function is a real-valued analytic function (its value could represent temperature, or pressure – just as an example) but, for a function to qualify as ‘harmonic’, an additional condition is imposed. That condition is known as Laplace’s equation: if we denote the harmonic function as H(x,y), then it has to have second-order derivatives which satisfy H_xx(x,y) + H_yy(x,y) = 0.

Huh? Laplace’s equation, or harmonic functions in general, plays an important role in physics, as the condition that is being imposed (the Laplace equation) often reflects a real-life physical constraint and, hence, the function H would describe real-life phenomena, such as the temperature of a thin plate (with the points on the plate defined by the (x,y) coordinates), or electrostatic potential. More about that later. Let’s conclude this first entry with the definition of harmonic conjugates.

8. As stated above, a harmonic function is a real-valued function. However, we also noted that a complex function f(z) can actually be written as the sum of a real and an imaginary part using two real-valued functions u(x,y) and v(x,y). More in particular, we can write f(z) = u(x,y) + iv(x,y), with i the imaginary unit (0,1). Now, if u and v happen to be harmonic functions (but that’s an if of course – see the Laplace condition imposed on their second-order derivatives in order to qualify for the ‘harmonic’ label) and if, in addition to that, their first-order derivatives happen to satisfy the Cauchy-Riemann equations (in other words, if f(z) is a well-behaved analytic function), then (and only then) can we label v as the harmonic conjugate of u.

What does that mean? First, one should note that when v is a harmonic conjugate of u in some domain, it is not generally true that u is a harmonic conjugate of v. So one cannot just switch the functions. Indeed, the minus sign in the Cauchy-Riemann equations makes the relationship asymmetric. But then what’s the relevance of this definition of a harmonic conjugate? Well… There is a theorem that turns the definition around: a function f(z) = u(x,y) + iv(x,y) is analytic (or holomorphic, to use standard terminology) in a domain D if and only if v is a harmonic conjugate of u. In other words, introducing the definition of a harmonic conjugate (and the conditions which their first- and second-order derivatives have to satisfy) allows us to check whether or not we have a well-behaved complex-valued function (and with ‘well-behaved’ I mean analytic or holomorphic).
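Here is a symbolic check of my own (Python with sympy again) for u = x^3 - 3xy^2 and its harmonic conjugate v = 3x^2·y - y^3 (together they make up f(z) = z^3):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**3 - 3*x*y**2   # real part of z^3
v = 3*x**2*y - y**3   # its harmonic conjugate (the imaginary part of z^3)

# both satisfy Laplace's equation...
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))  # 0
print(sp.simplify(sp.diff(v, x, 2) + sp.diff(v, y, 2)))  # 0
# ...and the pair satisfies the Cauchy-Riemann equations
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))        # 0
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))        # 0
```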

9. But, again, why do we need holomorphic functions? What’s so special about them? I am not sure for the moment, but I guess there’s something deeper in that one phrase which I quoted from Wikipedia above: “holomorphic functions are also sometimes referred to as regular functions or as conformal maps.” A conformal mapping preserves angles, as you can see in the illustration below, which shows a rectangular grid and its image under a conformal map: f maps pairs of lines intersecting at 90° to pairs of curves still intersecting at 90°. I guess that’s very relevant, although I do not know why exactly for now. More about that in later posts.

[Figure: a rectangular grid and its image under a conformal map]