# A Royal Road to quantum physics?

Pre-script (dated 26 June 2020): This post has become less relevant because my views on all things quantum-mechanical have evolved significantly as a result of my progression towards a more complete realist (classical) interpretation of quantum physics. In addition, some of the material was removed by a dark force. Hence, I recommend you read our recent papers. I keep blog posts like these mainly because I want to keep track of where I came from. I might review them one day, but I currently don't have the time or energy for it. 🙂

Original post:

It is said that, when Ptolemy asked Euclid to explain geometry to him quickly, Euclid told the King that there was no 'Royal Road' to it, by which he meant it's just difficult and takes a lot of time to understand.

Physicists will tell you the same about quantum physics. So, I know that, at this point, I should just study the third volume of Feynman's Lectures and shut up for a while. However, before I get lost while playing with state vectors, S-matrices, eigenfunctions, eigenvalues and what have you, I'll try that Royal Road anyway, building on my previous digression on Hamiltonian mechanics.

So… What was that about? Well… If you understood anything from my previous post, it should be that both the Lagrangian and the Hamiltonian approach use the equations for kinetic and potential energy to derive the equations of motion for a system. The key difference between the two is that the Lagrangian approach yields one differential equation, which has to be solved to yield a functional form for x as a function of time, while the Hamiltonian approach yields two differential equations, which have to be solved to yield a functional form for both position (x) and momentum (p). In other words, Lagrangian mechanics is a model that focuses on the position variable(s) only, while, in Hamiltonian mechanics, we also keep track of the momentum variable(s). Let me briefly explain the procedure again, so we're clear on it:

1. We write down a function referred to as the Lagrangian function. The function is L = T − V, with T and V the kinetic and potential energy respectively. T has to be expressed as a function of velocity (v) and V has to be expressed as a function of position (x). You'll say: of course! However, it is an important point to note, because otherwise the following step doesn't make sense. So we take the equations for kinetic and potential energy and combine them to form a function L = L(x, v).

2. We then write down the so-called Lagrange equation, in which we use that function L. To be precise: what we have to do is calculate its partial derivatives and insert these in the following equation:

d(∂L/∂v)/dt − ∂L/∂x = 0

It should be obvious now why I stressed we should write L as a function of velocity and position, i.e. as L = L(x, v): otherwise those partial derivatives don't make sense. As to where this equation comes from, don't worry about it: I did not explain why this works in my previous post, and I won't do so here either. What we're doing here is just explaining how it goes, not why.

3. If we’ve done everything right, we should get a second-order differential equation which, as mentioned above, we should then solve for x(t). That’s what ‘solving’ a differential equation is about: find a functional form that satisfies the equation.
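For what it's worth, steps 1 and 2 can be checked mechanically. Here's a minimal sketch in Python (my own choice of tool and variable names, using the sympy library), with the mass-on-a-spring Lagrangian L = mv²/2 − kx²/2 as input:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Step 1: the Lagrangian of a mass on a spring, L = T - V
v = sp.diff(x(t), t)
L = sp.Rational(1, 2) * m * v**2 - sp.Rational(1, 2) * k * x(t)**2

# Step 2: the Lagrange equation, d(dL/dv)/dt - dL/dx = 0
eq = sp.diff(sp.diff(L, v), t) - sp.diff(L, x(t))

# The result is m*x'' + k*x, i.e. Newton's F = ma with F = -kx
print(eq)
```

So the second-order differential equation of step 3 pops out automatically: m·x''(t) + k·x(t) = 0.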

Let's now look at the Hamiltonian approach.

1. We write down a function referred to as the Hamiltonian function. It looks similar to the Lagrangian, except that we sum kinetic and potential energy, and that T has to be expressed as a function of the momentum p. So we have a function H = T + V = H(x, p).

2. We then write down the so-called Hamiltonian equations, which is a set of two equations, rather than just one. [We have two for the one-dimensional situation that we are modeling here: it's a different story (i.e. we will have more equations) if we have more degrees of freedom, of course.] It's the same as in the Lagrangian approach: it's just a matter of calculating partial derivatives, and inserting them in the equations below:

dx/dt = ∂H/∂p and dp/dt = −∂H/∂x

Again, note that I am not explaining why this Hamiltonian hocus-pocus actually works. I am just saying how it works.

3. If we've done everything right, we should get two first-order differential equations which we should then solve for x(t) and p(t). Now, solving a set of equations may or may not be easy, depending on your point of view. If you wonder how it's done, there's excellent stuff on the Web that will show you how (such as, for instance, Paul's Online Math Notes).
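To make step 3 concrete, here's a small numerical sketch (plain Python, with illustrative values m = 1 and k = 4 of my own choosing): it integrates the two Hamiltonian equations for a mass on a spring with a simple semi-implicit (symplectic) Euler scheme and compares the result with the exact solution x(t) = cos(ωt):

```python
import math

# Hamilton's equations for a mass on a spring (H = p^2/2m + k*x^2/2):
#   dx/dt =  dH/dp = p/m
#   dp/dt = -dH/dx = -k*x
m, k = 1.0, 4.0          # illustrative mass and spring constant
x, p = 1.0, 0.0          # start at maximum displacement, at rest
dt, steps = 1e-4, 100_000

for _ in range(steps):
    p -= k * x * dt      # dp/dt = -dH/dx
    x += (p / m) * dt    # dx/dt =  dH/dp

# After t = 10, the exact solution is x(t) = cos(omega*t), omega = sqrt(k/m) = 2
t = dt * steps
print(x, math.cos(math.sqrt(k / m) * t))
```

The two printed numbers agree to a few decimal places, and the total energy H stays (very nearly) constant along the way, as it should.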

Now, I mentioned in my previous post that the Hamiltonian approach to modeling mechanics is very similar to the approach that's used in quantum mechanics and that it's therefore the preferred approach in physics. I also mentioned that, in classical physics, position and momentum are also conjugate variables, and I showed how we can calculate the momentum as a conjugate variable from the Lagrangian: p = ∂L/∂v. However, I did not dwell on what conjugate variables actually are in classical mechanics. I won't do that here either. Just accept that conjugate variables, in classical mechanics, are also defined as pairs of variables. They're not related through some uncertainty relation, like in quantum physics, but they're related because they can both be obtained as derivatives of a function which I haven't introduced as yet. That function is referred to as the action, but… Well… Let's resist the temptation to digress any further here. If you really want to know what action is – in physics, that is 🙂 – just google it, I'd say. What you should take home from this digression is that position and momentum are also conjugate variables in classical mechanics.

Let's now move on to quantum mechanics. You'll see that the 'similarity' in approach is… Well… Quite relative, I'd say. 🙂

Position and momentum in quantum mechanics

As you know by now (I wrote at least a dozen posts on this), the concept of position and momentum in quantum mechanics is very different from that in classical physics: we do not have x(t) and p(t) functions which give a unique, precise and unambiguous value for x and p when we assign a value to the time variable and plug it in. No. What we have in quantum physics is some weird wave function, denoted by the Greek letters φ (phi) or ψ (psi) or, using Greek capitals, Φ and Ψ. To be more specific, the psi usually denotes the wave function in so-called position space (so we write ψ = ψ(x)), and the phi will usually denote the wave function in so-called momentum space (so we write φ = φ(p)). That sounds more complicated than it is, obviously, but I just wanted to respect terminology here. Finally, note that the ψ(x) and φ(p) wave functions are related through the Uncertainty Principle: position and momentum are conjugate variables, and we have the ΔxΔp ≥ ħ/2 inequality, in which the Δ is some standard deviation from some mean value. I should not go into more detail here: you know that by now, don't you?

While the argument of these functions is some real number, the wave functions themselves are complex-valued, so they have a real and an imaginary part. I've also illustrated that a couple of times already but, just to make sure, take a look at the animation below, so you know what we are sort of talking about:

1. The A and B situations represent a classical oscillator: we know exactly where the red ball is at any point in time.
2. The C to H situations give us a complex-valued amplitude, with the blue oscillation as the real part, and the pink oscillation as the imaginary part.

So we have such a wave function for both x and p. Note that the animation above suggests we're only looking at the wave function for x but – trust me – we have a similar one for p, and they're related indeed. [To see how exactly, I'd advise you to go through the proof of the so-called Kennard inequality.] So… What do we do with that?

The position and momentum operators

When we want to know where a particle actually is, or what its momentum is, we need to do something with this wave function ψ or φ. Let's focus on the position variable first. While the wave function itself is said to have 'no physical interpretation' (frankly, I don't know what that means: I'd think everything has some kind of interpretation (and what's physical and non-physical?), but let's not get lost in philosophy here), we know that the square of the absolute value of the probability amplitude yields a probability density. So |ψ(x)|² gives us a probability density function or, to put it simply, the probability to find our 'particle' (or 'wavicle' if you want) at point x. Let's now do something more sophisticated and write down the expected value of x, which is usually denoted by ⟨x⟩ (although that invites confusion with Dirac's bra-ket notation, but don't worry about it):

⟨x⟩ = ∫ ψ*(x)·x·ψ(x) dx

Don't panic. It's just an integral. Look at it. ψ* is just the complex conjugate (i.e. a − ib if ψ = a + ib) and you will (or should) remember that the product of a complex number with its (complex) conjugate gives us the square of its absolute value: ψ*ψ = |ψ(x)|². What about that x? Can we just insert that there, in between ψ* and ψ? Good question. The answer is: yes, of course! That x is just some real number and we can put it anywhere. However, it's still a good question because, while multiplication of complex numbers is commutative (hence, z1z2 = z2z1), the order of our operators – which we will introduce soon – can often not be changed without consequences, so it is something to note.

For the rest, that integral above is quite obvious and it should really not puzzle you: we just multiply a value with its probability of occurring and integrate over the whole domain to get an expected value ⟨x⟩. Nothing wrong here. Note that we get some real number. [You'll say: of course! However, I always find it useful to check that when looking at things mixing complex-valued functions with real-valued variables or arguments. A quick check on the dimensions of what we're dealing with helps greatly in understanding what we're doing.]
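If you want to see that expected-value integral at work, here's a small numerical sketch (Python with numpy; the Gaussian wave packet and its parameters are my own illustrative choices): it approximates the normalization integral and ⟨x⟩ by simple sums over a grid:

```python
import numpy as np

# A normalized Gaussian wave packet centered at x0 = 1.5 (illustrative choice):
# |psi(x)|^2 is then a normal distribution with mean x0 and standard deviation s.
x0, s = 1.5, 0.7
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
psi = (2 * np.pi * s**2) ** -0.25 * np.exp(-((x - x0) ** 2) / (4 * s**2))

# Check the normalization condition: the probabilities |psi|^2 dx add up to 1
norm = np.sum(np.abs(psi) ** 2) * dx

# The expected value <x> = integral of psi* x psi dx, approximated as a sum
x_exp = np.sum(np.conj(psi) * x * psi).real * dx
print(norm, x_exp)   # ~1.0 and ~1.5
```

As expected, ⟨x⟩ lands right on the center of the packet, and it's a real number indeed.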

So… You've surely heard about the position and momentum operators already. Is that, then, what it is? Doing some integral on some function to get an expected value? Well… No. But there's a relation. However, let me first make a remark on notation, because that can be quite confusing. The position operator is usually written with a hat on top of the variable – like x̂ – but I can't produce a hat on every letter with the editor tool for this blog and, hence, I'll use the bold letters x and p to denote the operators. Don't confuse them with the bold letters I use for vectors, though! Now, back to the story.

Let's first give an example of an operator you're already familiar with in order to understand what an operator actually is. To put it simply: an operator is an instruction to do something with a function. For example: ∂/∂t is an instruction to differentiate some function with regard to the variable t (which usually stands for time). The ∂/∂t operator is obviously referred to as a differentiation operator. When we put a function behind it, e.g. f(x, t), we get ∂f(x, t)/∂t, which is just another function in x and t.

So we have the same here: x in itself is just an instruction: you need to put a function behind it in order to get some result. So you'll see it as xψ. In fact, it would probably be useful to use brackets, like x[ψ], especially because I can't put those hats on the letters here, but I'll stick to the usual notation, which does not use brackets.

Likewise, we have a momentum operator: p = −iħ·∂/∂x. […] Let it sink in. […]

What's this? Don't worry about it. I know: that looks like a very different animal than that x operator. I'll explain later. Just note, for the moment, that the momentum operator (also) involves a (partial) derivative and, hence, we refer to it as a differential operator (as opposed to differentiation operator). The instruction p = −iħ·∂/∂x basically means: differentiate the function with regard to x and multiply the result with −iħ (i.e. minus the product of the reduced Planck constant and the imaginary unit i). Nothing wrong with that. Just calculate a derivative and multiply with a tiny imaginary (complex) number.

Now, back to the position operator x. As you can see, that's a very simple operator – much simpler than the momentum operator in any case. The position operator applied to ψ yields, quite simply, the x·ψ(x) factor in the integrand above. So we just get a new function x·ψ(x) when we apply x to ψ, of which the values are simply the product of x and ψ(x). Hence, we write xψ = xψ.

Really? Is it that simple? Yes. For now at least. 🙂

Back to the momentum operator. Where does that come from? That story is not so simple. [Of course not. It can’t be. Just look at it.] Because we have to avoid talking about eigenvalues and all that, my approach to the explanation will be quite intuitive. [As for ‘my’ approach, let me note that it’s basically the approach as used in the Wikipedia article on it. :-)] Just stay with me for a while here.

Let's assume ψ is given by ψ = e^(i(kx − ωt)). So that's a nice periodic function, albeit complex-valued. Now, we know that functional form doesn't make all that much sense because it corresponds to the particle being everywhere, because the square of its absolute value is some constant. In fact, we know it doesn't even respect the normalization condition: all probabilities have to add up to 1. However, that being said, we also know that we can superimpose an infinite number of such waves (all with different k and ω) to get a more localized wave train, and then re-normalize the result to make sure the normalization condition is met. Hence, let's just go along with this idealized example and see where it leads.

We know the wave number k (i.e. its 'frequency in space', as it's often described) is related to the momentum p through the de Broglie relation: p = ħk. [Again, you should think about a whole bunch of these waves and, hence, some spread in k corresponding to some spread in p, but just go along with the story for now and don't try to make it even more complicated.] Now, if we differentiate with regard to x, and then substitute, we get ∂ψ/∂x = ∂e^(i(kx − ωt))/∂x = ik·e^(i(kx − ωt)) = ikψ, or

∂ψ/∂x = i(p/ħ)·ψ

So what is this? Well… On the left-hand side, we have the (partial) derivative of a complex-valued function (ψ) with regard to x. Now, that derivative is, more likely than not, also some complex-valued function. And if you don't believe me, just look at the right-hand side of the equation, where we have that i and ψ. In fact, the equation just shows that, when we take that derivative, we get our original function ψ but multiplied by ip/ħ. Hey! We've got a differential equation here, don't we? Yes. And the solution for it is… Well… The natural exponential. Of course! That should be no surprise because we started out with a natural exponential as functional form! So that's not the point. What is the point, then? Well… If we multiply both sides with ħ/i = −iħ, we get:

(âi/Ä§)(âÏ/âx) = pÏ

[If you're confused about the −i, remember that i⁻¹ = 1/i = −i.] So… We've got pψ on the right-hand side now. So… Well… That's like xψ, isn't it? Yes. 🙂 If we define the momentum operator as p = −iħ·∂/∂x, then we get pψ = pψ. So that's the same thing as for the position operator. It's just that p is… Well… A more complex operator, as it has that −iħ factor in it. And, yes, of course it also involves an instruction to differentiate, which also sets it apart from the position operator, which is just an instruction to multiply the function with its argument.
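We can let a computer check this little derivation. A minimal sketch with sympy (the symbol names are mine), applying p = −iħ·∂/∂x to the plane wave above:

```python
import sympy as sp

x, t, k, w, hbar = sp.symbols('x t k omega hbar', positive=True)

# The plane wave psi = e^(i(kx - wt)) from the text
psi = sp.exp(sp.I * (k * x - w * t))

# The momentum operator: p = -i*hbar * d/dx
p_psi = -sp.I * hbar * sp.diff(psi, x)

# Applying p should give back psi multiplied by p = hbar*k (de Broglie)
print(sp.simplify(p_psi / psi))   # hbar*k
```

So the operator hands us back the very same function, multiplied by the momentum ħk: that's the pψ = pψ business in a nutshell.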

I am sure you'll find this funny – perhaps even fishy – business. And, yes, I have the same questions: what does it all mean? I can't answer that here. For now, just accept that this position and momentum operator are what they are, and that I can't do anything about that. But… I hear you sputter: what about their interpretation? Well… Sorry… I could say that the functions xψ and pψ are so-called linear maps, but that is not likely to help you much in understanding what these operators really do. You – and I for sure 🙂 – will indeed have to go through that story of eigenvalues to get to a somewhat deeper understanding of what these operators actually are. That's just how it is. For now, I just have to move on. Sorry for letting you down here. 🙂

Energy operators

Now that we sort of 'understand' those position and momentum operators (or their mathematical form at least), it's time to introduce the energy operators. Indeed, in quantum mechanics, we've also got an operator for (a) kinetic energy, and for (b) potential energy. These operators are also denoted with a hat above the T and V symbol. All quantum-mechanical operators are like that, it seems. However, because of the limitations of the editor tool here, I'll also use a bold T and V respectively. Now, I am sure you've had enough of these operators, so let me just jot them down:

1. V = V, so that's just an instruction to multiply a function with V = V(x, t). That's easy enough because that's just like the position operator.
2. As for T, that's more complicated. It involves that momentum operator p, which was also more complicated, remember? Let me just give you the formula:

T = p·p/2m = p²/2m.

So we multiply the operator p with itself here. What does that mean? Well… Because the operator involves a derivative, it means we have to take the derivative twice and… No! Well… Let me correct myself: yes and no. 🙂 That p·p product is, strictly speaking, a dot product between two vectors, and so it's not just a matter of differentiating twice. Now that we are here, we may just as well extend the analysis a bit and assume that we also have a y and z coordinate, so we'll have a position vector r = (x, y, z). [Note that r is a vector here, not an operator. !?! Oh… Well…] Extending the analysis to three (or more) dimensions means that we should replace the differentiation operator by the so-called gradient or del operator: ∇ = (∂/∂x, ∂/∂y, ∂/∂z). And now that dot product p·p will, among other things, yield another operator which you're surely familiar with: the Laplacian. Let me remind you of it:

∇² = ∇·∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

Hence, using p·p = (−iħ∇)·(−iħ∇) = −ħ²∇², we can write the kinetic energy operator T as:

T = −(ħ²/2m)∇²

[Wikipedia writes this the way you should see it, i.e. with the hat notation, but that's beyond the WordPress editor tool, so my bold T will have to do. 🙂]

[…]

In case you're despairing, hang on! We're almost there. 🙂 We can, indeed, now define the Hamiltonian operator that's used in quantum mechanics. While the Hamiltonian function was the sum of the potential and kinetic energy functions in classical physics, in quantum mechanics we add the two energy operators. You'll grumble and say: that's not the same as adding energies. And you're right: adding operators is not the same as adding energy functions. Of course it isn't. 🙂 But just stick to the story, please, and stop criticizing. [Oh – just in case you wonder where that minus sign in the kinetic energy operator comes from: i² = −1, of course.]

Adding the two operators together yields the following:

H = T + V = −(ħ²/2m)∇² + V

So. Yes. That’s the famous Hamiltonian operator.

OK. So what?

Yes…. Hmm… What do we do with that operator? Well… We apply it to the function and so we write Hψ = … Hmm…

Well… What?Â

Well… I am not writing this post just to give some definitions of the type of operators that are used in quantum mechanics and then just do obvious stuff by writing it all out. No. I am writing this post to illustrate how things work.

OK. So how does it work then?Â

Well… It turns out that, in quantum mechanics, we have similar equations as in classical mechanics. Remember that I just wrote down the set of (two) differential equations when discussing Hamiltonian mechanics? Here I'll do the same. The Hamiltonian operator appears in an equation which you've surely heard of and which, just like me, you'd love to understand – and then I mean: understand it fully, completely, and intuitively. […] Yes. It's the Schrödinger equation:

iħ·∂ψ/∂t = Hψ

Note, once again, I am not saying anything about where this equation comes from. It's like jotting down that Lagrange equation, or the set of Hamiltonian equations: I am not saying anything about the why of all this hocus-pocus. I am just saying how it goes. So we've got another differential equation here, and we have to solve it. If we write it all out using the above definition of the Hamiltonian operator, we get:

iħ·∂ψ/∂t = −(ħ²/2μ)∇²ψ + Vψ

If you're still with me, you'll immediately wonder about that μ. Well… Don't. It's the mass really, but the so-called reduced mass. Don't worry about it. Just google it if you want to know more about this concept of a 'reduced' mass: it's a fine point which doesn't really matter here. The point is the grand result.

But… So… What is the grand result? What are we looking at here? Well… Just as I said above: that Schrödinger equation is a differential equation, just like those equations we got when applying the Lagrangian and Hamiltonian approach to modeling a dynamic system in classical mechanics and, hence, just like what we (were supposed to) do there, we have to solve it. 🙂 Of course, it looks much more daunting than our Lagrangian or Hamiltonian differential equations, because we've got complex-valued functions here, and you're probably scared of that iħ factor too. But you shouldn't be. When everything is said and done, we've got a differential equation here that we need to solve for ψ. In other words, we need to find functional forms for ψ that satisfy the above equation. That's it. Period.
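To give you at least a taste of what 'solving' means here, the sketch below (Python with numpy, my own toy setup) tackles the time-independent version of the equation, Hψ = Eψ, for a harmonic oscillator potential, by turning the Hamiltonian operator into a matrix on a grid: the kinetic part becomes a finite-difference second derivative, the potential part a multiplication, just as described above. In units where ħ = m = ω = 1, the exact energy levels are known to be E_n = n + 1/2:

```python
import numpy as np

# Time-independent Schrodinger equation for a harmonic oscillator,
#   -(1/2) psi'' + (1/2) x^2 psi = E psi   (units with hbar = m = omega = 1),
# discretized on a grid.
n = 1000
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

# Kinetic operator T: -(1/2) d^2/dx^2 as a tridiagonal finite-difference matrix
T = (-np.diag(np.ones(n - 1), -1) + 2 * np.eye(n) - np.diag(np.ones(n - 1), 1)) / (2 * dx**2)
V = np.diag(0.5 * x**2)   # potential operator: just multiply by V(x)
H = T + V                 # the Hamiltonian operator, as a matrix

E = np.linalg.eigvalsh(H)
print(E[:3])              # close to [0.5, 1.5, 2.5]
```

The lowest eigenvalues come out very close to 1/2, 3/2, 5/2, and the corresponding eigenvectors are the oscillating wave functions the animation hints at.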

So what do these solutions look like? Well, they look like those complex-valued oscillating things in the very first animation above. Let me copy them again:

So… That's it then? Yes. I won't say anything more about it here, because (1) this post has become way too long already, and so I won't dwell on the solutions of that Schrödinger equation, and (2) I do feel it's about time I really start doing what it takes, and that's to work on all of the math that's necessary to actually do all that hocus-pocus. 🙂

Post scriptum: As for understanding the Schrödinger equation "fully, completely, and intuitively", I am not sure that's actually possible. But I am trying hard and so let's see. 🙂 I'll tell you after I've mastered the math. But something inside of me tells me there's indeed no Royal Road to it. 🙂

Post scriptum 2 (dated 16 November 2015): I've added this post scriptum, more than a year after writing all of the above, because I now realize how immature it actually is. If you really want to know more about quantum math, then you should read my more recent posts, like the one on the Hamiltonian matrix. It's not that anything I write above is wrong – it isn't. But… Well… It's just that I feel I've jumped the gun. […] But then that's probably not a bad thing. 🙂

Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology.

# Newtonian, Lagrangian and Hamiltonian mechanics

Post scriptum (dated 16 November 2015): You'll smile because… Yes, I am starting this post with a post scriptum, indeed. 🙂 I've added it, a year later or so, because, before you continue to read, you should note I am not going to explain the Hamiltonian matrix here, as it's used in quantum physics. That's the topic of another post, which involves far more advanced mathematical concepts. If you're here for that, don't read this post. Just go to my post on the matrix indeed. 🙂 But so here's my original post. I wrote it to tie up some loose end. 🙂

As an economist, I thought I knew a thing or two about optimization. Indeed, when everything is said and done, optimization is supposed to be an economist's forte, isn't it? 🙂 Hence, I thought I sort of understood what a Lagrangian would represent in physics, and I also thought I sort of intuitively understood why and how it could be used to model the behavior of a dynamic system. In short, I thought that Lagrangian mechanics would be all about optimizing something subject to some constraints. Just like in economics, right?

[…] Well… When checking it out, I found that the answer is: yes, and no. And, frankly, the honest answer is more no than yes. 🙂 Economists (like me), and all social scientists (I'd think), learn only about one particular type of Lagrangian equations: the so-called Lagrange equations of the first kind. This approach models constraints as equations that are to be incorporated in an objective function (which is also referred to as a Lagrangian – and that's where the confusion starts, because it's different from the Lagrangian that's used in physics, which I'll introduce below) using so-called Lagrange multipliers. If you're an economist, you'll surely remember it: it's a problem written as "maximize f(x, y) subject to g(x, y) = c", and we solve it by finding the so-called stationary points (i.e. the points for which the derivative is zero) of the (Lagrangian) objective function f(x, y) + λ[g(x, y) − c].

Now, it turns out that, in physics, they use so-called Lagrange equations of the second kind, which incorporate the constraints directly by what Wikipedia refers to as a "judicious choice of generalized coordinates."

Generalized coordinates? Don't worry about it: while generalized coordinates are defined formally as "parameters that describe the configuration of the system relative to some reference configuration", they are, in practice, those coordinates that make the problem easy to solve. For example, for a particle (or point) that moves on a circle, we'd not use the Cartesian coordinates x and y but just the angle that locates the particle (or point). That simplifies matters because then we only need to find one variable. In practice, the number of parameters (i.e. the number of generalized coordinates) will be defined by the number of degrees of freedom of the system, and we know what that means: it's the number of independent directions in which the particle (or point) can move. Now, those independent directions may or may not include the x, y and z directions (they may actually exclude one of those), and they also may or may not include rotational and/or vibratory movements. We went over that when discussing kinetic gas theory, so I won't say more about that here.

So… OK… That was my first surprise: the physicist's Lagrangian is different from the social scientist's Lagrangian.

The second surprise was that all physics textbooks seem to dislike the Lagrangian approach. Indeed, they opt for a related but different function when developing a model of a dynamic system: a function referred to as the Hamiltonian. The modeling approach which uses the Hamiltonian instead of the Lagrangian is, of course, referred to as Hamiltonian mechanics. We may think the preference for the Hamiltonian approach has to do with William Rowan Hamilton being Anglo-Irish, while Joseph-Louis Lagrange (born as Giuseppe Lodovico Lagrangia) was Italian-French but… No. 🙂

And then we have good old Newtonian mechanics as well, obviously. In case you wonder what that is: it's the modeling approach that we've been using all along. 🙂 But I'll remind you of what it is in a moment: it amounts to making sense of some situation by using Newton's laws of motion only, rather than a more sophisticated mathematical argument using more abstract concepts, such as energy, or action.

Introducing Lagrangian and Hamiltonian mechanics is quite confusing because the functions that are involved (i.e. the so-called Lagrangian and Hamiltonian functions) look very similar: we write the Lagrangian as the difference between the kinetic and potential energy of a system (L = T − V), while the Hamiltonian is the sum of both (H = T + V). Now, I could make this post very simple and just ask you to note that both approaches are basically 'equivalent' (in the sense that they lead to the same solutions, i.e. the same equations of motion expressed as a function of time) and that a choice between them is just a matter of preference – like choosing between an English versus a continental breakfast. 🙂 Of course, an English breakfast usually has some extra bacon, or a sausage, so you get more but… Well… Not necessarily something better. 🙂 So that would be the end of this digression then, and I should be done. However, I must assume you're a curious person, just like me, and, hence, you'll say that, while being 'equivalent', they're obviously not the same. So how do the two approaches differ exactly?

Let's try to get a somewhat intuitive understanding of it all by taking, once again, the example of a simple harmonic oscillator, as depicted below. In fact, our example will be exactly that: an oscillating mass on a spring. Let's also assume there's no damping, because that makes the analysis soooooooo much easier.

Of course, we already know all of the relevant equations for this system just from applying Newton's laws (so that's Newtonian mechanics). We did that in a previous post. [I can't remember which one, but I am sure I've done this already.] Hence, we don't really need the Lagrangian or Hamiltonian. But, of course, that's the point of this post: I want to illustrate how these other approaches to modeling a dynamic system actually work, and so it's good we have the correct answer already, so we can make sure we're not going off track here. So… Let's go… 🙂

I. Newtonian mechanics

Let me recapitulate the basics of a mass on a spring which, in jargon, is called a harmonic oscillator. Hooke's law applies: the force on the mass is proportional to its distance from the zero point (i.e. the displacement), and the direction of the force is towards the zero point – not away from it, and so we have a minus sign. In short, we can write:

F = âkx (i.e. Hooke’s law)

Now, Newton's Law (Newton's second law, to be precise) says that F is equal to the mass times the acceleration: F = ma. So we write:

F = ma = m·(d²x/dt²) = −kx

So that's just Newton's law combined with Hooke's law. We know this is a differential equation for which there's a general solution with the following form:

x(t) = A·cos(ωt + α)

If you wonder why… Well… I can't digress on that here again: just note, from that differential equation, that we apparently need a function x(t) that yields itself, times a negative constant, when differentiated twice. So that must be some sinusoidal function, like sine or cosine, because these do that. […] OK… Sorry, but I must move on.

As for the new 'variables' (A, ω and α), A depends on the initial condition and is the (maximum) amplitude of the motion. We also already know from previous posts (or, more likely, because you already know a lot about physics) that A is related to the energy of the system. To be precise: the energy of the system is proportional to the square of the amplitude: E ∝ A². As for ω, the angular frequency, that's determined by the spring itself and the oscillating mass on it: ω = (k/m)^1/2 = 2π/T = 2πf (with T the period, and f the frequency expressed in oscillations per second, as opposed to the angular frequency, which is the frequency expressed in radians per second). Finally, I should note that α is just a phase shift which depends on how we define our t = 0 point: if x(t) is zero at t = 0, then that cosine function should be zero and then α will be equal to ±π/2.
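If you don't want to take my word for it, here's a quick sympy check (a sketch with my own variable names) that x(t) = A·cos(ωt + α), with ω = (k/m)^1/2, indeed satisfies m·(d²x/dt²) = −kx:

```python
import sympy as sp

t, A, k, m, alpha = sp.symbols('t A k m alpha', positive=True)
omega = sp.sqrt(k / m)

# The proposed general solution x(t) = A*cos(omega*t + alpha)
x = A * sp.cos(omega * t + alpha)

# Plug it into m*x'' = -k*x and check that both sides agree
lhs = m * sp.diff(x, t, 2)
print(sp.simplify(lhs + k * x))   # 0, so the solution satisfies F = ma = -kx
```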

OK. That’s clear enough. What about the ‘operational currency of the universe’, i.e. the energy of the oscillator? Well… I told you already: we don’t need the energy concept here to find the equation of motion. In fact, that’s what distinguishes this ‘Newtonian’ approach from the Lagrangian and Hamiltonian approach. But… Now that we’re at it, and we have to move to a discussion of these two animals (I mean the Lagrangian and Hamiltonian), let’s go for it.

We have kinetic versus potential energy. Kinetic energy (T) is what it always is. It depends on the velocity and the mass: K.E. = T = mv²/2 = m(dx/dt)²/2 = p²/2m. Huh? What’s this expression with p in it? […] It’s momentum: p = mv. Just check it: it’s an alternative formula for T really. Nothing more, nothing less. I am just noting it here because it will pop up again in our discussion of the Hamiltonian modeling approach. But that’s for later. Onwards!

What about potential energy (V)? We know that’s equal to V = kx²/2. And because energy is conserved, potential energy (V) and kinetic energy (T) should add up to some constant. Let’s check it: dx/dt = d[A·cos(ωt + α)]/dt = −Aω·sin(ωt + α). [Please do the derivation: don’t accept things at face value. :-)] Hence, T = mA²ω²sin²(ωt + α)/2 = mA²(k/m)sin²(ωt + α)/2 = kA²sin²(ωt + α)/2. Now, V is equal to V = kx²/2 = k[A·cos(ωt + α)]²/2 = kA²cos²(ωt + α)/2. Adding both yields:

T + V = kA²sin²(ωt + α)/2 + kA²cos²(ωt + α)/2

= (1/2)kA²[sin²(ωt + α) + cos²(ωt + α)] = kA²/2.

Ouff! Glad that worked out: the total energy is, indeed, proportional to the square of the amplitude and the constant of proportionality is equal to k/2. [You should now wonder why we do not have m in this formula but, if you’d think about it, you can answer your own question: the amplitude will depend on the mass (bigger mass, smaller amplitude, and vice versa), so it’s actually in the formula already.]
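
You can verify the energy bookkeeping numerically too. The sketch below (plain Python again, with my own made-up parameter values) evaluates T and V at a few arbitrary times and checks that their sum always equals kA²/2:

```python
import math

# Hypothetical parameters, as before
k, m, A, alpha = 4.0, 1.0, 0.5, 0.3
omega = math.sqrt(k / m)

def energy(t):
    x = A * math.cos(omega * t + alpha)           # position x(t)
    v = -A * omega * math.sin(omega * t + alpha)  # velocity dx/dt
    T = 0.5 * m * v**2                            # kinetic energy
    V = 0.5 * k * x**2                            # potential energy
    return T + V

# Total energy is constant and equal to k*A^2/2 at every instant
for t in [0.0, 0.3, 1.1, 2.5]:
    assert abs(energy(t) - 0.5 * k * A**2) < 1e-9
```

That the result doesn’t depend on t is just the sin² + cos² = 1 identity from the derivation above, in numerical form.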

The point to note is that this Hamiltonian function H = T + V is just a constant, not only for this particular case (an oscillation without damping), but in all cases where H represents the total energy of a (closed) system.

OK. That’s clear enough. What does our Lagrangian look like? That’s not a constant, obviously. Just so you can visualize things, I’ve drawn the graph below:

1. The red curve represents kinetic energy (T) as a function of the displacement x: T is zero at the turning points, and reaches a maximum at the x = 0 point.
2. The blue curve is potential energy (V): unlike T, V reaches a maximum at the turning points, and is zero at the x = 0 point. In short, it’s the mirror image of the red curve.
3. The Lagrangian is the green graph: L = T − V. Hence, L reaches a minimum at the turning points, and a maximum at the x = 0 point.

While that green function would make an economist think of some Lagrangian optimization problem, it’s worth noting we’re not doing any such thing here: we’re not interested in stationary points. We just want the equation(s) of motion. [I just thought that would be worth stating, in light of my own background and confusion in regard to it all. :-)]

OK. Now that we have an idea of what the Lagrangian and Hamiltonian functions are (it’s probably worth noting also that we do not have a ‘Newtonian function’ of some sort), let us now show how these ‘functions’ are used to solve the problem. What problem? Well… We need to find some equation for the motion, remember? [I find that, in physics, I often have to remind myself of what the problem actually is. Do you feel the same? 🙂 ] So let’s go for it.

II. Lagrangian mechanics

As this post should not turn into a chapter of some math book, I’ll just describe the how, i.e. I’ll just list the steps one should take to model and then solve the problem, and illustrate how it goes for the oscillator above. Hence, I will not try to explain why this approach gives the correct answer (i.e. the equation(s) of motion). So if you want to know why rather than how, then just check it out on the Web: there’s plenty of nice stuff on math out there.

The steps that are involved in the Lagrangian approach are the following:

1. Compute (i.e. write down) the Lagrangian function L = T − V. Hmm? How do we do that? There’s more than one way to express T and V, isn’t there? Right you are! So let me clarify: in the Lagrangian approach, we should express T as a function of velocity (v) and V as a function of position (x), so your Lagrangian should be L = L(x, v). Indeed, if you don’t pick the right variables, you’ll get nowhere. So, in our example, we have L = mv²/2 − kx²/2.
2. Compute the partial derivatives ∂L/∂x and ∂L/∂v. So… Well… OK. Got it. Now that we’ve written L using the right variables, that’s a piece of cake. In our example, we have: ∂L/∂x = −kx and ∂L/∂v = mv. Please note how we treat x and v as independent variables here. It’s obvious from the use of the symbol for partial derivatives: ∂. So we’re not taking any total differential here or so. [This is an important point, so I’d rather mention it.]
3. Write down (‘compute’ sounds awkward, doesn’t it?) Lagrange’s equation: d(∂L/∂v)/dt = ∂L/∂x. […] Yep. That’s it. Why? Well… I told you I wouldn’t tell you why. I am just showing the how here. This is Lagrange’s equation and so you should take it for granted and get on with it. 🙂 In our example: d(∂L/∂v)/dt = d(mv)/dt = m(dv/dt), and ∂L/∂x = −kx, so Lagrange’s equation tells us that m(dv/dt) = m(d²x/dt²) = −kx.
4. Finally, solve the resulting differential equation. […] ?! Well… Yes. […] Of course, we’ve done that already. It’s the same differential equation as the one we found in our ‘Newtonian approach’, i.e. the equation we found by combining Hooke’s and Newton’s laws. So the general solution is x(t) = A·cos(ωt + α), as we already noted above.
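
The four steps above can also be sketched symbolically. The snippet below (a sketch using the sympy library; the symbol names are my own choices) writes down L(x, v), takes the partial derivatives, and checks that Lagrange’s equation reproduces m(d²x/dt²) = −kx:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)   # position as a function of time
v = sp.diff(x, t)         # velocity dx/dt

# Step 1: the Lagrangian, with T in terms of v and V in terms of x
L = m * v**2 / 2 - k * x**2 / 2

# Steps 2-3: Lagrange's equation d(dL/dv)/dt = dL/dx
eom = sp.Eq(sp.diff(sp.diff(L, v), t), sp.diff(L, x))

# Step 4: the resulting equation of motion is m*x'' = -k*x,
# i.e. the same differential equation Newton's and Hooke's laws gave us
assert sp.simplify(eom.lhs - eom.rhs - m * sp.diff(x, t, 2) - k * x) == 0
```

Note that sympy happily differentiates with respect to v (a derivative of x), which mirrors the point above about treating x and v as independent variables.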

So, yes, we’re solving the same differential equation here. So you’ll wonder: what’s the difference then between Newtonian and Lagrangian mechanics? Yes, you’re right: we’re indeed solving the same second-order differential equation here. Exactly. Fortunately, I’d say, because we don’t want any other equation(s) of motion, because we’re talking about the same system. The point is: we got that differential equation using an entirely different procedure, which I actually didn’t explain at all: I just said to compute this and then that and, surprise, surprise, we got the same differential equation in the end. 🙂 So, yes, the Newtonian and Lagrangian approach to modeling a dynamic system yield the same equations, but the Lagrangian method is much more (very much more, I should say) convenient when we’re dealing with lots of moving bits and if there are more directions (i.e. degrees of freedom) in which they can move.

In short, Lagrange could solve a problem more rapidly than Newton with his modeling approach and so that’s why his approach won out. 🙂 In fact, you’ll usually see the spatial variables noted as qⱼ. In this notation, j = 1, 2,… n, and n is the number of degrees of freedom, i.e. the directions in which the various particles can move. And then, of course, you’ll usually see a second subscript i = 1, 2,… m to keep track of every qⱼ for each and every particle in the system, so we’ll have n×m qᵢⱼ’s in our model and so, yes, good to stick to Lagrange in that case.

OK. You get that, I assume. Let’s move on to Hamiltonian mechanics now.

III. Hamiltonian mechanics

The steps here are the following. [Again, I am just explaining the how, not the why. You can find mathematical proofs of why this works in handbooks or, better still, on the Web.]

1. The first step is very similar to the one above. In fact, it’s exactly the same: write T and V as a function of velocity (v) and position (x) respectively and construct the Lagrangian. So, once again, we have L = L(x, v). In our example: L(x, v) = mv²/2 − kx²/2.
2. The second step, however, is different. Here, the theory becomes more abstract, as the Hamiltonian approach does not only keep track of the position but also of the momentum of the particles in a system. Position (x) and momentum (p) are so-called canonical variables in Hamiltonian mechanics, and the relation with Lagrangian mechanics is the following: p = ∂L/∂v. Huh? Yeah. Again, don’t worry about the why. Just check it for our example: ∂(mv²/2 − kx²/2)/∂v = 2mv/2 = mv. So, yes, it seems to work. Please note, once again, how we treat x and v as independent variables here, as is evident from the use of the symbol for partial derivatives. Let me get back to the lesson, however. The second step is: calculate the conjugate variables. In more familiar wording: compute the momenta.
3. The third step is: write down (or ‘build’ as you’ll see it, but I find that wording strange too) the Hamiltonian function H = T + V. We’ve got the same problem here as the one I mentioned with the Lagrangian: there’s more than one way to express T and V. Hence, we need some more guidance. Right you are! When writing your Hamiltonian, you need to make sure you express the kinetic energy as a function of the conjugate variable, i.e. as a function of momentum, rather than velocity. So we have H = H(x, p), not H = H(x, v)! In our example, we have H = T + V = p²/2m + kx²/2.
4. Finally, write and solve the following set of equations: (I) ∂H/∂p = dx/dt and (II) −∂H/∂x = dp/dt. [Note the minus sign in the second equation.] In our example: (I) p/m = dx/dt and (II) −kx = dp/dt. The first equation is actually nothing but the definition of p: p = mv, and the second equation is just Hooke’s law: F = −kx. However, from a formal-mathematical point of view, we have two first-order differential equations here (as opposed to one second-order equation when using the Lagrangian approach), which should be solved simultaneously in order to find position and momentum as a function of time, i.e. x(t) and p(t). The end result should be the same: x(t) = A·cos(ωt + α) and p(t) = … Well… I’ll let you solve this: time to brush up your knowledge about differential equations. 🙂
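
If you’d rather check the two coupled equations numerically, here’s a sketch (plain Python, my own parameter choices, with initial conditions x(0) = A and p(0) = 0 so that α = 0) that integrates dx/dt = p/m and dp/dt = −kx with a standard Runge–Kutta step and compares the result with the analytic solution:

```python
import math

k, m = 4.0, 1.0
omega = math.sqrt(k / m)
x0, p0 = 0.5, 0.0  # start at maximum displacement, zero momentum

def f(x, p):
    # Hamilton's equations: dx/dt = dH/dp = p/m, dp/dt = -dH/dx = -k*x
    return p / m, -k * x

def step(x, p, dt):
    # one fourth-order Runge-Kutta step for the pair (x, p)
    k1x, k1p = f(x, p)
    k2x, k2p = f(x + dt/2*k1x, p + dt/2*k1p)
    k3x, k3p = f(x + dt/2*k2x, p + dt/2*k2p)
    k4x, k4p = f(x + dt*k3x, p + dt*k3p)
    return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
            p + dt/6*(k1p + 2*k2p + 2*k3p + k4p))

x, p, dt = x0, p0, 0.001
for _ in range(1000):  # integrate up to t = 1
    x, p = step(x, p, dt)

# Compare with the analytic solution:
# x(t) = A*cos(omega*t) and p(t) = -A*m*omega*sin(omega*t)
t = 1.0
assert abs(x - x0 * math.cos(omega * t)) < 1e-6
assert abs(p + x0 * m * omega * math.sin(omega * t)) < 1e-6
```

The second assertion is the p(t) the text leaves as an exercise: differentiating x(t) and multiplying by m gives p(t) = −Amω·sin(ωt + α).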

You’ll say: what the heck? Why are you making things so complicated? Indeed, what am I doing here? Am I making things needlessly complicated?

The answer is the usual one: yes and no. Yes: if we’d want to do stuff in the classical world only, the Lagrangian approach will do and may actually seem much easier, because we don’t have a set of equations to solve. And why would we need to keep track of p(t)? We’re only interested in the equation(s) of motion, aren’t we? Well… That’s why the answer to your question is also: no! In classical mechanics, we’re usually only interested in position, but in quantum mechanics the concept of conjugate variables (like x and p indeed) becomes much more important, and we will want to find the equations for both. So… Yes. That means a set of differential equations (one for each variable (x and p) in the example above) rather than just one. In short, the real answer to your question in regard to the complexity of the Hamiltonian modeling approach is the following: because the more abstract Hamiltonian approach to mechanics is very similar to the mathematics used in quantum mechanics, we will want to study it, because a good understanding of Hamiltonian mechanics will help us to understand the math involved in quantum mechanics. And so that’s the reason why physicists prefer it to the Lagrangian approach.

[…] Really? […] Well… At least that’s what I know about it from googling stuff here and there. Of course, another reason for physicists to prefer the Hamiltonian approach may well be that they think social science (like economics) isn’t real science. Hence, we – social scientists – would surely expect them to develop approaches that are much more intricate and abstract than the ones that are being used by us, wouldn’t we?

[…] And then I am sure some of it is also related to the Anglo-French thing. 🙂

Post scriptum 1 (dated 21 March 2016): I hate to write about stuff and just explain the how rather than the why. However, in this case, the why is really rather complicated. The math behind it is referred to as the calculus of variations (a rather complicated branch of mathematics), but the physical principle behind it is the Principle of Least Action. Just click the link, and you’ll see how the Master used to explain stuff like this. It’s an easy and difficult piece at the same time. Near the end, however, it becomes pretty complicated, as he applies the theory to quantum mechanics, indeed. In any case, I’ll let you judge for yourself. 🙂

Post scriptum 2 (dated 13 September 2017): I started a blog on the Exercises on Feynman’s Lectures, and the posts on the exercises on Chapter 4 have a lot more detail, and basically give you all the math you’ll ever want on this. Just click the link. However, let me warn you: the math is not easy. Not at all, really.