
In particle physics, everything is based on symmetry. Symmetry magazine is the main industry newsletter for particle physics. All of the fundamental laws of physics can be expressed as symmetries. Other than symmetry, the only rule is basically "anything that can happen, will happen".

There are two general categories of symmetries in particle physics, internal symmetries and spacetime symmetries. I'm only going to discuss spacetime symmetries here.

Within the category of spacetime symmetries there are continuous symmetries like rotational symmetry (responsible for the conservation of angular momentum), translational symmetry (responsible for regular conservation of momentum), time translation (responsible for conservation of energy), and Lorentz boosts (responsible for Einstein's theory of relativity).

But then there is also another kind of spacetime symmetry--discrete symmetries. There are 2 important discrete spacetime symmetries and they are pretty simple to explain. The first is called time reversal symmetry, usually denoted by the symbol T. As an operator, T represents the operation of flipping the direction of time from forwards to backwards--basically, hitting the rewind button. Parts of physics are symmetric with respect to T and other parts are not. The other important one is P (parity), which flips space instead of time--it's basically what you see when you look in the mirror and left and right are reversed, everything is backwards.

Here is a video of me doing a cartwheel, an everyday process which by itself would appear to break both P and T. The animation shows the forward-in-time process first, which is a right-handed cartwheel, followed by the time reverse, which then looks like a left-handed cartwheel. Because applying T in this case accomplishes exactly the same thing as P (if you ignore the background), this process breaks both P symmetry and T symmetry, but it preserves the combination of the 2, PT:



And now for the front handspring. Unlike the cartwheel, this process respects P symmetry. If you flip left and right, it still looks the same. However, if you time reverse it, it looks like a back handspring instead of a front handspring! So the handspring respects P symmetry but not T symmetry.



Of the 4 fundamental forces of nature--gravity, electromagnetism, the strong force, and the weak force--the first 3 respect time-reversal symmetry while the fourth, the weak force, does not. Because the other 3 are symmetric, it was assumed for a long time (until the 1960's) that all laws of physics had to be symmetric under T. Only in 1964 did the first indirect evidence emerge that the weak force does not respect T symmetry; more direct proof came in the late 90's, and still more interesting examples have piled up within the past decade.
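To make the T and P operations concrete, here's a little Python sketch (my own toy illustration, nothing standard--the function names are mine) that applies them to a recorded 2-D trajectory, the way you might to marker positions extracted from video frames, and checks which operations flip the handedness:

```python
import numpy as np

# A toy "cartwheel": positions of a marker on the body, sampled over time.
t = np.linspace(0, 1, 50)
trajectory = np.stack([np.cos(2 * np.pi * t),      # x(t)
                       np.sin(2 * np.pi * t)], 1)  # y(t)

def T(traj):
    return traj[::-1]          # time reversal: play the frames backwards

def P(traj):
    out = traj.copy()
    out[:, 0] *= -1            # parity (mirror): flip left and right
    return out

# Chirality of the motion: sign of the swept angular area (x dy - y dx).
def chirality(traj):
    x, y = traj[:, 0], traj[:, 1]
    return np.sign(np.sum(x[:-1] * np.diff(y) - y[:-1] * np.diff(x)))

print(chirality(trajectory))        # +1: a "right-handed" loop
print(chirality(T(trajectory)))     # -1: T alone flips the handedness
print(chirality(P(trajectory)))     # -1: P alone flips it too
print(chirality(P(T(trajectory))))  # +1: the combination PT preserves it
```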


gymnastics

I started doing gymnastics in late 2009, after switching to it from a brief flirtation with parkour. The problem is, I moved around so much between 2009 and 2013 (from California to Illinois in 2010, then to Pennsylvania in 2012, then to Queens, New York 6 months later, and finally to New Jersey in mid-2013). Every time I moved I had to find a new gym, figure out the schedule, and work out how to get there.

In Illinois, I was pretty sure the closest gym was nearly 3 hours away in Chicago, so I tried to drive up there at least one Saturday per month to practice, but it was pretty impractical. Eventually I found one closer but it was almost time to move again by that point. So for the first 3.5 years it was nearly impossible for me to get consistent practice in. On top of that, the gyms I went to were all "open gyms", with only very minimal guidance from instructors so I was basically just teaching myself.

Well, I'm pleased to announce that now that I've settled down, I've actually been going pretty regularly for the past year (starting in mid-2013), almost every week, and I'm finally starting to make some progress. Even more important than the regularity is that the gym I go to now has actual instructors who teach an actual adult class. I've realized now that my form was so bad I had very little chance of doing anything non-trivial until I got the proper training and feedback I needed to improve my body position.

There's a beginner class, an intermediate class, and an advanced class, and I've just recently decided to start attending the intermediate class rather than the beginner, even though I'm still kind of on the border. I'm generally one of the best when I go to the beginner, but one of the worst when I go to the intermediate--I get different things out of each when I go.

Also, now that I own a house with an actual lawn, I can practice in the spring and summer in the backyard on the weekends. It's more difficult because the grass isn't as springy as a gymnastics floor, but I've found that if I can do something on the grass then it means I have *really* got it down.

Recently I took some videos of me doing a few things. The ones that came out the best were my front-to-back cartwheel and front handspring. However, when I was editing the videos I was struck by an interesting thought about the physics of gymnastics. When you run the video in reverse, the cartwheel changes chirality from right-handed to left-handed. But when you run the handspring in reverse, it changes from a front handspring to a back handspring. (By the way, the term "front-to-back" cartwheel just means that you start facing forward and end up facing backwards, but in between, while you're rotating, the body twists around--as opposed to a regular cartwheel, where you just start out facing to the side and don't twist as you rotate. Either one demonstrates the change in chirality, but the front-to-back version is a bit more fun to do and watch.)

Anyway, there are some really neat parallels between this and particle physics which I hadn't fully realized before. So I want to post the videos and then explain how gymnastics can be used to illustrate the concept of "discrete spacetime symmetries" in particle physics. I have it in a couple formats, including animated gif, but the gif version loads so slowly in a web browser that I need to cut down the size or something before posting it--or maybe I'll give up on that and just upload a video to Youtube instead. Still figuring out what the best way is to do it--but soon!

wandering sets part 10: mystery solved!

When I wrote part 9, I was feeling pretty confused and wasn't sure I had really made any progress on the basic question I set out to answer with this series. But I'm pleased to announce that within 24 hours of writing it, I started thinking, and piece by piece, it all came together. I think I have pretty much solved the mystery, aside from some minor threads that might still need to be wrapped up. (I just didn't get a chance to write it down until today.) That was way faster than I'd imagined it would take.

The main thing I wasn't seeing is how mixing (whether the ordinary process of two gases mixing in a box, or the more esoteric quantum measurement process) relates to wandering sets. And the linchpin that was missing, that holds everything together and explains how mixing relates to wandering sets, is the question "what is the identity of the attractor?"

I realized that if I could pinpoint what the attractor was in the case of mixing, then I would see why mixing is a wandering set (and hence, a dissipative process). Soon after I asked myself that question, the answer became pretty obvious. The attractor in the case of mixing--and indeed, in any case where you're transitioning from a non-equilibrium state to thermodynamic equilibrium--is the macrostate with maximal entropy. In other words, the macrostate that corresponds to "thermodynamic equilibrium".

I think the reason I wasn't seeing this is because I was thinking too much about the microstates. But from the point of view of a microscopic description of physics, any closed system is always conservative--all of the physics is completely reversible. You can only have dissipation in two ways. One is fairly trivial and uninteresting, and that's if the system is open and energy is being sucked out of it. Sucking out energy from a system reduces its state space, so from within that open system, ignoring the outside, you start in any corner of a higher dimensional space and then you get pulled into an attractor that represents the states which have lower total energy. If energy keeps getting sucked out, it will eventually all leave and you'll just be left in the ground state (which would in that case be the attractor).

But there's a much more interesting kind of dissipation, and that's when you coarse-grain a system. If you don't care about some of the details of the microscopic state, but you only care about the big picture, then you can use an approximate description of the physics; you can just keep track of the macrostate. And that's where the concept of entropy comes into play, and that's when even closed systems can involve dissipation. There's no energy escaping anywhere, but if you start in a state that's not in thermodynamic equilibrium, such as two gases that aren't mixed at all, or that are only halfway mixed, or only partially mixed anywhere in between... from the point of view of the macrostate space, you'll gradually get attracted towards the state of maximal entropy. So it's the macrostate phase space where the wandering sets come in, in this case. Not the microstates! The physics of the evolution of the macrostate involves a dissipative action, meaning it contains wandering sets; and it is an irreversible process because you don't have the microstate information that would be required in order to know how to reverse the process.

So how does this work in the case of a quantum measurement? It's really the same thing, just another kind of mixing process. Let's say you have a quantum system that is just a single spin (a "qubit") interacting with a huge array of spins comprising the "environment". Before this spin interacts, it's in a superposition of spin-up and spin-down. It is in a pure state, similar to the state where two gases are separated by a partition. Then you pull out the partition (in the quantum case, you allow the qubit to interact with its environment, suddenly becoming entangled with all of the other spins). In either case, this opens up a much larger space, increasing the dimensionality of the microstate space. Now in order to describe the qubit, you need a giant matrix of correlations between it and all of the other spins. As with the mixing case I described earlier, you could use a giant multidimensional Rubik's cube to do this. The only difference is that classically, each dimension would be a single bit, "1" or "0", while this quantum mechanical mixing process involves a continuous space of phases (sort of ironic that quantization in this case makes something discrete into something continuous). If this is confusing, just remember that a qubit can be in any superposition of 1 and 0, and therefore it takes more information to describe its state than a classical bit requires.

But after the interaction, we just want to know what state the qubit is in--we don't really care about all of these extra correlations with the environment, and they are random anyway. They are the equivalent of thermal noise, non-useful energy. So therefore, we shift from our fine-grained description to a more coarse-grained one. We define the macrostate as just the state of the single qubit, but averaged over all of the possibilities for the environmental spins. Each one involves a sum over its up and its down state. And if we sum over all of those different spins, that's accomplished by taking the trace of the density matrix, which I mentioned in part 9. Tracing over the density matrix is how you coarse-grain the system, averaging over the effects of the environment. As with the classical mixing case, putting this qubit in contact with the environment suddenly puts it in a non-equilibrium state. But if you let it settle down for a while, it will quickly reach equilibrium. And the equilibrium state, the one with the highest entropy, is one where all of the phases introduced are essentially random, i.e. there are no special extra correlations between them. So the microstate space is a lot larger, but there is one macrostate that the whole system is attracted to. And in that macrostate, when you trace over the spins in the environment, you wind up with a single unique state for the qubit that was measured. And that state is a "mixed state"; it's no longer a coherent superposition of "0" and "1" but a classical probability distribution between "0" and "1". The off-diagonal elements of the density matrix have gone to zero. So while the microstate space has increased in dimensionality, the macrostate space has actually *decreased*! This is why I was running into so much confusion. There's both an increase in dimensionality AND a decrease in dimensionality; it just depends on whether you're asking about the space of microstates or the space of macrostates.
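Here's a small numpy sketch of that last step (a toy model I made up for illustration: the environment states are just random-phase vectors, which stands in for "thermal noise"-like entanglement). Tracing the qubit+environment pure state over the environment leaves a 2x2 density matrix whose off-diagonal coherences shrink as the environment grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n_env = 10                      # number of environment spins
dim_env = 2 ** n_env

# Environment states entangled with the qubit's |0> and |1> branches
# (random phases are my modeling assumption, not a derived result).
e0 = np.exp(1j * rng.uniform(0, 2 * np.pi, dim_env)) / np.sqrt(dim_env)
e1 = np.exp(1j * rng.uniform(0, 2 * np.pi, dim_env)) / np.sqrt(dim_env)

# Joint pure state (|0>|e0> + |1>|e1>)/sqrt(2), shaped (qubit, environment).
psi = np.stack([e0, e1]) / np.sqrt(2)

# Partial trace over the environment: rho_ij = sum_k psi[i,k] * conj(psi[j,k]).
rho = psi @ psi.conj().T
print(np.round(rho, 3))
# Diagonal stays ~0.5 each; the off-diagonal is <e1|e0>/2, which shrinks
# roughly like 1/sqrt(dim_env) -- a nearly diagonal, i.e. mixed, state.
```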

Mystery solved!

I'm very pleased with this. While I sort of got the idea a long time ago listening to Nima Arkani-Hamed's lecture on this, and I got an even better idea from reading Leonard Susskind's book, it really is all clear to me now. And I have to thank wandering sets for this insight (although in hindsight, I should have been able to figure it out without that).

I would like to say "The End" here, but I must admit there is one thread from the beginning--Maxwell's Demon--which I never actually wrapped up. I suspect that my confusion there, about why erasure of information corresponds to entropy increase, and exactly how it corresponds, is directly related to my confusion between macrostate and microstate spaces. So I will write a tentative "The End" here, but may add some remarks about that in another post if I think of anything more interesting to say. Hope you enjoyed reading this series as much as I enjoyed writing it!

The End

wandering sets part 9: still confused

I have to admit, in trying to tie all of this together, I have realized that there still seems to be something big that I don't understand about the whole thing. And there is at least one minor mistake I should correct in part 8. So from here on out, we're treading on thin ice, I'm doing something more akin to explaining what I don't understand rather than describing a solution.

It seemed that if I could understand wandering sets, then all of the pieces would fit together. And it still seems that way, although the big thing I still don't get about wandering sets is how they relate to mixing. And that seems crucial.

The minor mistake I should correct in part 8 is my proposed example of a completely dissipative action. I said you could take the entire space minus the attractor as your initial starting set, and then watch it evolve into the attractor. But this wouldn't work because the initial set would include points that are in the neighborhood of the attractor. However, a minor modification of this works--you would just need to start with a set that excludes not only the attractor but also the neighborhood around it.

In thinking about this minor problem, however, I realized there are also some more subtle problems with how I presented things. First, I may have overstated the importance of dimensionality. In order to have a completely dissipative action, you could really just use any space which has an attractor that is some subset of that space, where it attracts any points outside of it into the attractor basin. The subset wouldn't necessarily have to have a lower dimension--my intuition is that in thermodynamics that would be the usual case, although I must admit that I'm not sure and I don't want to leave out any possibilities.

This leads to a more general point here that the real issue with irreversibility need not be stated in terms of dimension going up or down--a process is irreversible any time there is a 1-to-many mapping or a many-to-1 mapping. So a much simpler way of putting the higher/lower dimensionality confusion on my part is that I often am not sure whether irreversible processes are supposed to time evolve things from 1-to-many or from many-to-1. Going from a higher to lower dimensional space is one type of many-to-1 mapping, and going from lower to higher is one type of 1-to-many mapping. But these are not the only types, just types that arise as typical cases in thermodynamics, because of the large number of independent degrees of freedom involved in macroscopic systems.
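A one-line example of the many-to-1 case (using the doubling map, a standard toy system in dynamics; the variable names are mine): two distinct starting points land on the same image, so there is no way to run the dynamics backwards:

```python
double = lambda x: (2 * x) % 1.0   # the doubling map on the unit interval

# Two different microstates, one shared image: the map is many-to-1, so the
# information distinguishing them is lost -- that's the irreversibility.
print(double(0.3), double(0.8))    # both land on 0.6 (up to float rounding)
```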

Then there's the issue of mixing. I still haven't figured out how mixing relates to wandering sets at all. Mixing very clearly seems like an irreversible process of the 1-to-many variety. But the wandering sets wiki page seems to be describing something of the many-to-1 variety. However, they say at the top of the page that wandering sets describe mixing! I still have no idea how this could be the case. But now let's move on to quantum mechanics...

In quantum mechanics, one can think of the measurement process in terms of a quantum Hilbert space (sort of the analog of state space in classical mechanics) where different subspaces (called "superselection sectors") "decohere" from each other upon measurement. That is, they split off from each other, leading to the Many Worlds terminology of one world splitting into many. Thinking about it this way, one would immediately guess that the quantum measurement process therefore is a 1-to-many process. 1 initial world splits into many different worlds. However, if you think of it more in terms of a "collapse" of a wavefunction, you start out with many possibilities before a measurement, and they all collapse into 1 after the measurement. So thinking about it that way, you might think that quantum physics involves the many-to-1 type of irreversibility. But which is it? Well, this part I understand, mostly... and the answer is that it's both.

The 1-to-many and many-to-1 perspectives can be synthesized by looking at quantum mechanics in terms of what's called the "density matrix". Indeed, you need the density matrix formulation in order to really see how the quantum version of Liouville's theorem works. In the density matrix formulation of QM, instead of tracking the state of the system using a wavefunction--which is a vector whose components can represent all of the different positions of a particle (or field, or string) in a superposition--you use a matrix, which is sort of like the 2-dimensional version of a vector. By using a density matrix instead of just a vector to keep track of the state of the system, you can distinguish between two kinds of states--pure states and mixed states. A pure state is a coherent quantum superposition of many different possibilities, whereas a mixed state is more like a classical probability distribution over many different pure states. A measurement process in the density matrix formalism, then, is described by a mixing process that evolves a pure state into a mixed state. This happens due to entanglement between the original coherent state of the system and the environment. When a pure state becomes entangled in a random way with a large number of degrees of freedom, this is called "decoherence". What was originally a coherent state (nice and pure, all the same phases) is now a mixed state (decoherent, lots of random phases, too difficult to disentangle from the environment).

What happens is that you originally represent the system plus the environment by a single large density matrix. And then, once the system becomes entangled with the environment, the matrix decomposes into the different superselection sectors. These are different submatrices, each of which represents a different pure state. The entire matrix is then seen as a classical distribution over the various pure states. As I began writing this, I was going to say that because it was a mixing process, it went from 1-to-many. But now that I think of it, because the off-diagonal elements between the different sectors end up being zero after the measurement, the final space is actually smaller than the initial space. And I think that's even before you decide to ignore all but one of the sectors (which is where the "collapse" part comes in, in collapse-based interpretations). From what I recall, the off-diagonal elements wind up being exactly zero--or so close to zero that you could never tell the difference--because you assume the way in which the environment gets entangled is random. As long as each phase is random (or more specifically, as long as they are uncorrelated with each other), when you sum over a whole lot of them at once, they add up to zero--although I'd have to look this up to remember the details of how that works.
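That last claim is easy to check numerically. Here's a quick sketch (my own illustration) showing that a sum of uncorrelated random phases averages to zero, with the magnitude falling off like 1/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 1_000, 100_000):
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
    # |mean| shrinks roughly like 1/sqrt(n), so for ~10^23 environmental
    # degrees of freedom the off-diagonal terms are zero for all practical purposes.
    print(n, abs(phases.mean()))
```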

I was originally going to say that mixed states are more general and involve more possibilities than pure states, so therefore evolving from a pure state to a mixed state goes from 1-to-many, and then when you choose to ignore all but one of the final sectors, you go back from many-to-1, both of these being irreversible processes. However, as I write it out, I remember 2 things. The first is what I mentioned above--even before you pick one sector out, you've already gone from many-to-1! Then you go from many-to-1 again if you were to throw away the other sectors. And the second thing I remember is that, mathematically, pure states never really do evolve into mixed states. As long as you are applying the standard unitary time evolution operator, a pure state always evolves into another pure state and entropy always remains constant. However, if there is an obvious place where you can split system from environment, it's tradition to "trace over the degrees of freedom of the environment" at the moment of measurement. And it's this act of tracing that actually takes things from pure to mixed, and from many to 1. I think you can prove that from a point of view inside the system, whether you trace over the degrees of freedom in the environment or not is irrelevant. You'll wind up with the same physics either way, the same predictions for all future properties of the system. It's just a way of simplifying the calculation. But when you do get this kind of massive random entanglement, you wind up with a situation where tracing can be used to simplify the description of the system from that point on. You're basically going from a fine-grained description of the system+environment to a more coarse-grained approximation. So it's no wonder that this involves a change in entropy. Although whether entropy goes up or down in the system or in the environment+system, before or after the tracing, or before or after you decide to consider only one superselection sector--I'll have to think about and answer in the next part.

This is getting into the issues I thought I sorted out from reading Leonard Susskind's book. But I see that after a few years away from it, I'm already having trouble remembering exactly how it works again. I will think about this some more and pick this up again in part 10. Till next time...

wandering sets, part 8

I think I did a decent job in part 7 of getting across the main paradox in physics that has confused me over the years. And there's a very similar paradox that I found on the wandering sets Wikipedia page (http://en.wikipedia.org/wiki/Wandering_set), so now seems like a good time to return to that.

They define a wandering point as a point which has a neighborhood in phase space which, after some time in the future, never gets back to where it intersects itself again. Similarly, they define a wandering set as a set whose points never intersect each other again after a certain time in the future. One seemingly minor caveat, which may be important, is that the intersection doesn't have to be exactly zero (no points in common), just so long as it has measure zero in the entire space. Measure is sort of like volume (but more mathematically rigorous). So for example, the set of points in a 2D plane has zero volume in a 3D space. So if two 3D objects intersect only in a 2D plane, it doesn't count as a true intersection, since the volume of that intersection is zero. Same goes for higher dimensional spaces, except the definition of volume is different.
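In symbols (my paraphrase of the definition on that page), a point x is wandering under a map f if:

```latex
% x is a wandering point of the map f if some neighborhood U of x
% eventually never overlaps itself again (up to measure zero):
\exists\, U \ni x,\ \exists\, N > 0 \ \text{ such that } \
\mu\!\left( f^{n}(U) \cap U \right) = 0 \quad \text{for all } n > N.
```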

Going hand in hand with their definition of a wandering set is their definition of a "dissipative action". The action is the specific rules that time-evolve the system from the past into the future. It's defined as dissipative if and only if there is some wandering set in the space under that action. If there is no wandering set in the space, then it's a conservative action.

But now for the contradiction I thought I saw on the page (after re-reading it many more times, I've figured out why it's not actually a contradiction). They define one more thing, a "completely dissipative action", whose definition seemed to me at first to be completely incompatible with their definition of a dissipative action. They define a completely dissipative action as an action that time-evolves a wandering set of positive measure into the future in such a way that the path it sweeps out through the space (its "orbit") ends up taking up the entire space--or more precisely, the measure of its orbit is the same as the measure of the entire space. The reason this seemed to be a contradiction to me is that I was picturing a case such as mixing, where you start out with an initial condition that takes up some limited subspace of the entire phase space (like one cube of the Rubik's cube we talked about earlier), and then after you time-evolve it forward it ends up expanding to fill the whole space. But if it expands to fill the whole space, then it can't be a wandering set, because the intersection between the whole space and the original set is non-zero (it's the original set)!

So how does one resolve this paradox? Well, the main mistake I was making was confusing the final image of the set after it gets time-evolved with its orbit. The final image is where it is in a single snapshot in time, while the orbit is like the image you get when you leave a camera lens open for a long time (called "time exposure" in photography, I believe). Basically, it's the union of all of the different images it takes up as it progresses in time, not just a single snapshot. If it were just the final image, then their definition of a completely dissipative action is indeed contradictory, and can't coexist with their definition of a wandering set.

Ok, so it's the orbit, not the final image. Even then, it's a bit hard to imagine a scenario that would be "completely dissipative". The reason I was hoping it would be easier to imagine this scenario is because I was hoping that maybe the simplest kind of dissipation would be complete dissipation. And maybe understanding that would be a big step in the process towards understanding any kind of dissipation. In order to imagine what kind of a scenario would work, we need to find a case where the original set never wanders back to its starting point but ends up sweeping out a path that fills all of space. To do that, it's best to think about what the reason would be that a set might never wander back to its original starting point. In most normal situations in physics, if you've got things moving around according to nice simple laws of physics, and you didn't start at any special starting point, you'd expect the motion to fill the whole space and eventually wander back an infinite number of times. The only case where it wouldn't get back is if somehow it gets trapped in some subspace. This could be a single point that it ends up approaching, or a line, or a plane, or even a circle for instance. For example, if you had a planet that goes near a solar system and gets sucked into the orbit of that solar system, it would end up getting trapped in an ellipse, and never continue its nice straight motion, never getting back to the original starting point, even if the galaxy were inside a giant box. There's a name for this kind of occurrence in physics: it's called an "attractor". Basically, it seems that in order to have dissipation in the sense described on the wandering set Wikipedia page, you would need some form of attractor. The ellipse I described would be a regular attractor, but in chaos theory you also have weirder, more fractal patterns called "strange attractors". Chaos theory (also known as non-linear dynamics) seems intimately connected with the topic of dissipation, as many of the dissipative systems I mentioned in the beginning (such as hurricanes) are chaotic systems. I wasn't kidding when I said the question I'm wondering about here involves "all areas of physics"! :-)
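Here's a tiny toy model of an attractor (my own illustration; the names `step` and `z` are mine): a damped rotation in the plane. Every orbit spirals into the origin and never revisits its starting neighborhood, which is exactly the wandering-set behavior:

```python
import numpy as np

def step(z, damping=0.95, angle=0.3):
    # One time step: rotate by `angle` and shrink by `damping`.
    return damping * np.exp(1j * angle) * z

z = 1.0 + 1.0j           # start well away from the attractor (the origin)
for _ in range(200):
    z = step(z)
print(abs(z))            # ~5e-5: trapped near the attractor, never to return
```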

So now unlike the mixing case, where you're going from a small dimensionality to a higher dimensionality, we've got the opposite happening. You start the motion in a space of higher dimensionality, and then get trapped in an attractor of lower dimensionality. Because no matter where you start in the higher dimensional space, you tend to end up in the lower dimensional attractor, you've got a many-to-1 mapping from initial to final states. In other words, you've got irreversibility and hence dissipation! And this higher to lower dimensional transition seems much more similar to the collapse of the wavefunction in quantum mechanics, which also goes from higher to lower. As opposed to the mixing case which seems to go from lower to higher. So this is not just an issue of quantum mechanics working one way, and thermodynamics working the other--now we have the same paradox appearing solely within regular classical thermodynamics, (which hearkens back to my earlier point that this issue has been around longer than quantum mechanics).

For this case of moving from somewhere in the bulk space into a smaller attractor, the definition of a "completely dissipative action" makes more sense. If you pick as your starting set, the entire space except the attractor, and then all of those points move into the attractor, you have exactly satisfied the definition. The orbit includes both everything outside of the attractor (which is what you started with) as well as everything in the attractor (what you end up with, or near enough to count as the same measure). So the orbit does indeed take up the entire space. But the set is still wandering, since there is no intersection between itself and the final attractor. Presumably, an action that is only partially dissipative (as opposed to completely dissipative) would include an attractor which captures starting points in some of the rest of the space, but not all.

We're getting closer and closer, so hopefully in the next part I'll be able to resolve this higher/lower dimensionality paradox in both thermodynamics and quantum mechanics (or if not both, at least one of them).

wandering sets, part 7

In parts 2 through 5 I explained a bit about how wandering sets and thermodynamics works. And in part 6 I explained a bit about how quantum mechanics works. Now we can begin to bridge the gap and see how the two different angles from which I've been approaching this question intersect.

One of the biggest confusions I've had in trying to piece this together over the years is in mixing up whether the process of dissipation involves a transition from a higher dimensional space to a lower, or from a lower to a higher. I think it is both depending on how you look at it, but you have to keep straight what space you're talking about and what you mean.

If you look at it from the point of view of quantum mechanics, dissipation comes from the measurement process which involves projection matrices (or projection "operators" more generally) which take many possibilities and collapse them down to one. It's common to hear people use the word "reduction" in phrases like "the reduction of the state vector" to mean measurement in quantum mechanics. And measurement is the only time something irreversible happens, the rest of the laws of quantum mechanics are entirely reversible. So you would think intuitively, that a reduction or a collapse involves going from a higher dimensional space to a lower dimensional space. That's what a projection is mathematically. For example, when you walk in the sun outside and it's not directly overhead, you are followed around by a shadow. Your shadow is a 2-dimensional projection of your 3-dimensional self on the ground. A shadow is one of the simplest kinds of projections, but mathematically a projection refers to anything that reduces a higher dimensional object to a lower dimensional image. That's what the measurement operators used in quantum mechanics do, but because they are acting within the quantum Hilbert space, they project a space of ridiculously large dimensionality down to one of slightly lower dimensionality (but usually, still infinite).

On the other hand, if you look at it from the point of view of thermodynamics, dissipation happens only when entropy increases. The microscopic laws of physics, even in classical mechanics, are completely reversible and non-dissipative. The only irreversibility that comes into play is when the available phase space of a system increases. Let's walk through a concrete example of a mixing process step by step and see why it is irreversible and why it increases entropy.

First, imagine that you just have 3 classical particles in a box. They just bounce around in the box according to Newton's laws of physics. They move in straight lines unless they bounce off of a wall, in which case their angle of reflection equals their angle of incidence, just as a billiard ball bounces off of the wall of a pool table. It's easy to see that these laws are reversible, and that if you applied them backwards, you'd see basically the same thing happening; it's just that all 3 particles would be moving backward along their original paths instead of forward. Nothing weird or spooky or irreversible about it. But now let's conceptually divide that box into a left side and a right side, and keep track of which side each of the 3 particles is in. If the microstate of this system is the exact positions of all 3 particles plus the exact direction that each of them is moving in, then let's call the "macrostate" a single number between 0 and 3 that equals how many particles are on the right side of the box. To get at this number, we can construct a simplified microstate phase space which is a list of 3 booleans specifying which side of the box they are on. For example, if particles A and B are on the left side of the box, and particle C is on the right side, our list would be (left,left,right). If they were all on the right side, it would be (right,right,right). The macrostate can be deduced from the microstate (by summing up the number of right's in our list), but the reverse is not true, as some of the macrostates correspond to more than one microstate. For example, the macrostate "2" could be (left,right,right), (right,left,right), or (right,right,left).

The full microstate phase space is what we talked about earlier--it's an 18-dimensional space: 3 times 3 coordinates for position, and 3 times 3 coordinates for momentum. But in order to understand mixing, we only really have to visualize a simplified microstate phase space based on our list of 3 right/left booleans--in order to do so, you need to picture something that looks like a mini Rubik's cube. A regular Rubik's cube consists of 3x3x3 = 27 cubes (if you include the center cube, which doesn't actually have any colors painted on it, and you can't actually see). But they also sell "mini" Rubik's cubes that are only 2x2x2 = 8 cubes. They are much easier to solve, but not completely trivial if I recall. Each of the 8 cubes in a mini Rubik's cube corresponds to one of the 8 microstates of our system: (left,left,left), (left,left,right), (left,right,left), ... etc., ... (right,right,right). Each of the particles can be in one of 2 possible states, but there are 3 particles, so the space is 3-dimensional. But because we're only concerned with left-versus-right, the space is discrete rather than continuous.
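This is small enough to enumerate directly; a few lines of Python (just an illustration of the counting above) list all 8 microstates and tally how many map to each macrostate:

```python
from itertools import product
from collections import Counter

# All 2^3 = 8 microstates: which side of the box each of the 3 particles is on.
microstates = list(product(["left", "right"], repeat=3))

# The macrostate: how many particles are on the right side.
macrostates = Counter(s.count("right") for s in microstates)
print(macrostates)   # Counter({1: 3, 2: 3, 0: 1, 3: 1})
# Macrostates 1 and 2 each cover three microstates; 0 and 3 cover just one.
```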

Now imagine that instead of 3 particles, we had an entire mole of particles--that is, Avogadro's number of particles, 6.02x10^23 particles. Any quantity of gas that you could fit in an actual box and hold in your hand would realistically have to have at least this number of particles, and probably a lot more! So what happens to our space of states? Now, instead of being 3-dimensional it is 6.02x10^23-dimensional--quite a bit larger. But still each dimension can only have 2 possible states. Our 3-particle system had only 2^3 = 8 microstates. But this system has an unimaginably large number of microstates: 2^(6.02x10^23) states! Equilibrium for this system means that you have allowed the particles to bounce around long enough that they are nice and randomly mixed (roughly equal numbers on the left and the right). The entropy of any system is simply the natural log of the number of accessible microstates it has. If the system is in equilibrium, then nearly all states are accessible, so the entropy is log(2^(6.02x10^23)) ≈ 4x10^23. A very small number of those states correspond to most of the particles being on the left, or most being on the right, but these are such a negligible fraction out of the number above that it doesn't change the answer.

But what if we started the system in a state where all of the particles just happened to be on one side of the box? In other words, in the state (left,left,left,left,...,left,left) where there are 6.02x10^23 left's? This is a very special initial condition, similar to the special state the universe started in which I mentioned earlier. Because there is only 1 microstate where all of the particles are on the left, this state is extremely unlikely compared to the state where roughly equal numbers are on the left and on the right. The entropy of this state is just log(1) = 0. The true entropy of a gas like this is of course more than 0, but for our purposes here, we only care about the entropy associated with its mixing equally on either side of the box. The rest of the entropy is locked up in the much larger microstate phase space mentioned earlier, before we simplified it down to only the part that cares about which half of the box the particles are in.
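Plugging in the numbers (a back-of-the-envelope check of the figures above):

```python
import math

N = 6.02e23                  # Avogadro's number of particles
S_mixed = N * math.log(2)    # equilibrium entropy: log of 2^N accessible states
S_unmixed = math.log(1)      # all particles on the left: a single microstate

print(f"{S_mixed:.2e}")      # ~4.17e+23, matching the estimate above
print(S_unmixed)             # 0.0
```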

The main point of all of this, which I'm getting to is... if all of the particles start out on one side of the box, and then later they are allowed to fill the whole box, you've drastically increased the entropy, because there are a lot more possible places for the particles to be. A slightly more complicated version of what we just went through is if you had two different kinds of particles, let's call them blue and red. Imagine all of the red particles started on the left side, and all of the blue particles started on the right side, perhaps because there is initially a divider in between. Then when you lift the divider, the two of them mix with each other, and after it reaches equilibrium, roughly equal numbers of red and blue will be on both sides. This is what is meant by "mixing" in thermodynamics. There are many many more ways in which they could be mixed than the one way in which they could all be on their own sides, so there is a lot more entropy in the mixed state at the end than there was in the separated state in the beginning. Unlike the version of this where only 3 particles were involved, this version is irreversible in the sense that: it's extremely unlikely, and pretty much inconceivable, that you would ever see a mixed box of these particles naturally and spontaneously sort themselves out to all blue on one side and all red on the other, whereas you would find nothing surprising whatsoever if initially unmixed red and blue gases gradually mixed with each other and wound up in a perfectly homogeneously purple mixture at the end.

In this example, the available state space before the particles are allowed to mix involves only 1 state. Afterwards, it involves something with a size on the order of 2 raised to the power of Avogadro's number of states. So the entropy should increase by something on the order of Avogadro's number. It seems like what has happened is that the state space at the beginning was very small and low dimensional, and then at the end it is very large with high dimensionality. Naively, this appears to be exactly the opposite of what happens during a measurement in quantum mechanics. But somehow, that's not the case--what's going on really is exactly the same. I think what's happening is that we're just confusing two different spaces here. And it's a confusion that I've often made in thinking about this. Where we'll have to go from here to resolve this paradox is to discuss open vs closed systems, and to explore a little bit the many worlds interpretation of quantum mechanics, and what happens to entropy in different parts of the multiverse as different branches of the universal wave function evolve forward in time. I'll leave you with one final piece of the paradox which seems directly related to this high-low dimensionality confusion: if the total entropy remains exactly the same in all branches of the multiverse, then you would think that every time a quantum measurement is performed and different branches split off from each other, the entropy would get divided among them and hence be less and less in each branch over time. (Because surely, the dimensionality of the accessible states in one branch is less than the dimensionality of the accessible states in the combined trunk before the split?) And yet, exactly the opposite happens--while the total entropy remains exactly constant, the entropy in each single branch increases more and more every time there is a split!

To be continued...

wandering sets, part 6

First, a quick comment about something pretty basic that I completely forgot to mention in part 4 on the "Boltzmann's brain" paradox: why do they call it Boltzmann's brain? Seems obvious I should have explained this, but it's because the type of dilemma raised by it is similar to the "brain in a vat" thought experiment that philosophers of metaphysics like to argue about. The basic question is depicted explicitly in the movie The Matrix: how do you know that you aren't just a brain in a vat somewhere, with wires plugged into you, feeding this brain electrical impulses that it interprets as sensory experiences? I think for the most part, the answer is: you don't. I mentioned that the Boltzmann's brain paradox could involve the vacuum fluctuation of a spontaneously generated galaxy, or solar system, or even just a single room. But the ultimate limit of this would be just a single brain in a vat, with no actual world around it at all. Anyway, just wanted to add that to make part 4 more clear, because you may have been wondering why it was called "Boltzmann's brain". If I ever decided to make these notes into a book, I guess this would be one of the chapters. Now on to the matter at hand...

One of the most puzzling things about quantum mechanics, especially when you first learn it, is why there appear to be two completely different types of rules for how physics works. One is the microscopic set of rules which physicists usually refer to as "unitary time evolution of the wavefunction". In quantum mechanics, what's called the wavefunction is similar to a probability distribution either in regular space or momentum space (not phase space, the combination of the two) for which state a particle (or field, or string) could be in. (Actually, it's more like the square root of a probability, but no matter.) While it is similar to a probability distribution in regular space or momentum space, it can be much more simply represented as a single vector in a much larger space called a Hilbert space. The Hilbert space in quantum mechanics is usually infinite dimensional, so much much bigger than even the 6N dimensional phase space we talked about in part 5. As the system progresses further into the future, this state vector traces out a single path through the Hilbert space, and that path is always exactly reversible according to these microscopic rules. The key word here is "unitary". The vector is moved around in the Hilbert space mathematically by applying a unitary matrix to it. (Heisenberg's formulation of quantum mechanics was originally called "matrix mechanics" because of this.) Unitary matrices are matrices whose inverse is equal to their conjugate transpose--that is, the matrix flipped across its diagonal with every imaginary component negated (where by imaginary I'm talking about the mathematical notion of i, the square root of -1). If this doesn't make any sense, don't worry about it; the only important thing to understand is that this property of unitarity guarantees many nice things about time evolution in quantum mechanics. It makes things nice and smooth, so that a single state always moves to a single state, and if the total probability of the particle being anywhere is initially 100% then it will stay 100% in the future. But most importantly, it guarantees reversibility. Because the time evolution matrix in quantum mechanics is unitary, Liouville's theorem holds and you can always reverse time and go backwards exactly to where the system came from.
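Here's a small numpy sketch of that reversibility (using a toy 4-dimensional "Hilbert space" rather than an infinite one): a random unitary evolves a state, and applying its conjugate transpose undoes the evolution exactly, with total probability conserved throughout:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary matrix, built via QR decomposition (a standard construction).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)        # normalize: total probability = 1

evolved = U @ psi
print(np.linalg.norm(evolved))                 # ~1.0: probability is conserved
print(np.allclose(U.conj().T @ evolved, psi))  # True: U-dagger reverses U exactly
```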

But wait--if this is the case, then that also means that all quantum mechanical systems are conservative, i.e. non-dissipative. Does dissipation not happen in quantum mechanics at all? This brings us to the second type of rule in quantum mechanics: the process of measurement. Originally, this rule was called the "collapse of the wavefunction", because when looked at through Schrodinger's wave mechanics, it appears that when a macroscopic observer makes a measurement in the lab of a property of a microscopic system (for instance, asking "where is this particle actually located?"), what used to be a probability distribution over many different states suddenly gets reduced to, or collapses to, a single state. Before the measurement, we mathematically describe the particle as being in a "superposition" of many different positions at once, but after the measurement it is only found at one of those positions. Mathematically, this is achieved with a projection matrix. A projection matrix is very different from a unitary matrix. Its action on the Hilbert space maps many different vectors onto the same vector, instead of mapping each vector onto its own unique image. Because of this, the action is irreversible, and does not satisfy Liouville's theorem. The measurement process is therefore dissipative rather than conservative. In other words, the rule for how an observer of a microscopic system makes measurements in quantum mechanics seems completely opposite to the rule for how microscopic systems evolve in time when they are not being observed. One is nice and smooth and reversible; the other is a sudden reduction, an irreversible collapse. Dissipation only seems to happen during observation.
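And for contrast, a projection in the same toy setting (again my own illustration): many different input states get mapped onto the same output, so the operation has no inverse:

```python
import numpy as np

P0 = np.array([[1, 0],
               [0, 0]], dtype=complex)   # projector onto the |0> direction

superposition = np.array([1, 1j]) / np.sqrt(2)
already_zero = np.array([1, 0], dtype=complex)

print(P0 @ superposition)   # [0.707, 0]: the superposition collapses onto |0>
print(P0 @ already_zero)    # [1, 0]: a different input, the same ray -- many-to-1
print(np.allclose(P0 @ P0, P0), np.linalg.det(P0))
# True, 0: P^2 = P and the determinant is zero, so no inverse exists
```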

But the way I've explained this highlights the paradox, called the "measurement problem in quantum mechanics". This paradox is what has spawned many different interpretations of quantum mechanics, which philosophers still argue fiercely about today. But the real question is how to reconcile reversibility with irreversibility, and the answer lies in thermodynamics. And once you understand it, you realize that it doesn't really make a whole lot of sense to talk about measurement in this way, as a sudden "collapse" of the wavefunction. And it leads to deeper questions about whether the wave function is real or just a mathematical abstraction--and if there is anything in quantum mechanics which can be said to be real at all. To be continued...

wandering sets, part 5 : phase space

orangegray
Our universe has 3 large spatial dimensions (plus one temporal dimension, and possibly another 6 or 7 microscopic dimensions if string theory is right, but those won't be of any importance here).

Given 3 numbers (say, longitude, latitude, and altitude), you can uniquely identify where a particle is located in space. But the state of a system depends not only on what the particles' positions are, but also on what their momenta are, i.e. how fast they are moving (the momentum of a particle in classical mechanics is simply its mass times its velocity--when relativistic and quantum effects are taken into account, this relationship becomes much more complicated). This requires another 3 numbers in order to fully specify what the state of a particle is.

If you were to specify all 6 of these numbers for every particle in a given system, you would have completely described the state of that system. (I'm ignoring spin and charge here, which you'd also need to keep track of in order to fully specify the state.) In order to categorize all possible states of such a system, you therefore need a space with 6N dimensions, where N is the number of particles. It is this 6N dimensional space which is called "phase space" and it is in this space where wandering sets are defined. The state of the system is represented by a single point in phase space, and as it changes dynamically over time, this point moves around.

In statistical mechanics and in quantum mechanics, you often deal with probability distributions rather than a single state. So you might start out with some distribution of points in phase space, some fuzzy cloudy region near some neighborhood of a point for instance. And as the system evolves, this cloud can move around and change its shape. But one really central and important theorem in physics is Liouville's theorem... it says that as this cloud of probability moves around, the volume it takes up in phase space always remains constant. This theorem can be derived from the equations of motion in classical or quantum mechanics. But it really follows as a consequence of energy conservation, which in turn is a consequence of the invariance of the laws of physics under time translations. At any given moment in time, the basic laws of physics appear to be the same, they do not depend explicitly on time, so therefore energy is conserved and so is the volume of this cloud in phase space. It can morph into whatever different shape it wants as it wanders around, but its total volume in phase space must remain constant.
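A quick numerical illustration of that theorem (my own toy setup: a harmonic oscillator, with an optional damping knob `gamma` that I added purely for contrast): evolve a small cloud of phase-space points and track the area it occupies. With no damping the area stays put, which is Liouville's theorem in action; with damping it shrinks:

```python
import numpy as np

def evolve(q, p, dt=0.01, steps=2000, gamma=0.0):
    # Harmonic oscillator via semi-implicit Euler (exactly area-preserving
    # when gamma = 0); gamma > 0 adds a friction-like damping force.
    for _ in range(steps):
        p = p - dt * (q + gamma * p)
        q = q + dt * p
    return q, p

rng = np.random.default_rng(0)
q0 = rng.uniform(1.0, 1.1, 5000)   # a small square cloud of initial conditions
p0 = rng.uniform(0.0, 0.1, 5000)

for gamma in (0.0, 0.5):
    q, p = evolve(q0, p0, gamma=gamma)
    # For linear dynamics the cloud stays a parallelogram, so the square root
    # of the covariance determinant tracks its phase-space area.
    area = np.sqrt(np.linalg.det(np.cov(q, p)))
    print(gamma, area)   # gamma=0: area ~unchanged; gamma=0.5: vastly smaller
```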

Systems that obey Liouville's theorem are called conservative systems, and they don't have wandering sets. Systems that do not obey Liouville's theorem are called dissipative systems, and they *do* contain wandering sets.

But wait-- I just said that Liouville's theorem follows from some pretty basic principles of physics, like the fact that the laws of physics are the same at all times. So doesn't that mean that all physical systems in our universe are conservative--in other words, that there really is no such thing as dissipation? And does that in turn mean that entropy never really increases, it just remains constant?

This is one of the most frustrating paradoxes for me whenever I start to think about dissipation. It's very easy to convince yourself that dissipation doesn't really exist, but it's equally easy to convince yourself that all real world systems are dissipative, and that this pervasive tendency physicists have for treating all systems as conservative is no better than approximating a cow as a perfect sphere (a running joke about physicists).

I'll let this sink in for now, but end this part by hinting at where things are going next and what the answers to the above paradox involve. In order to understand the real distinction between conservative and dissipative systems, we have to talk about the difference between open and closed systems, about the measurement problem in quantum mechanics, what it means to make a "measurement", and how to separate a system from its environment, and when such distinctions are important and what they mean. We need to talk about the role of the observer in physics. One of the popular myths that you will find all over in pop physics books is that this role for an observer, and this problem of separating a system from its environment, was something that came out of quantum mechanics. But in fact, this problem has been around much longer than quantum mechanics, and originates in thermodynamics / statistical mechanics. It's something that people like Boltzmann and Maxwell spent a long time thinking about and puzzling over. (But it's certainly true that quantum mechanics has made the problem seem deeper and weirder, and raised the stakes somewhat.) Philosophically, it's loosely connected to the problem of the self vs the other, and how we reconcile subjective and objective descriptions of the world. In short, this is probably the most important and interesting question in the philosophy of physics, and it seems to involve all areas of physics equally, and it all revolves somehow around dissipation and entropy. To be continued...

wandering sets, part 4: Boltzmann's brain

During the course of graduate school in theoretical physics, you tend to have a lot of conversations about things that get deep and philosophical, and at times your brain feels like it's going to explode, or you're reaching some kind of higher state of consciousness or having a psychedelic experience.

I can't count the number of times I felt this way in graduate school, but one of the most memorable times I can remember was when our theory group sat together one lunch and casually discussed a new paper which had just come out. I can't remember what the specific point was in the paper, but it hinged on what's known as the Boltzmann brain paradox. Some of us graduate students were relatively unfamiliar with the paradox, so the professors explained it to us.

Because entropy always increases as time goes forward, the early universe had very low entropy. Nobody knows what the first moment of the big bang looked like exactly. But presumably, it had zero entropy, or at least--lower entropy than any other moment in time after that first moment. This is a very special state, and it is usually accepted as the explanation for the "arrow of time", the fact that time looks different as it progresses forward from how it would look if you were to play everything backwards on rewind. If the universe had started out in a state of maximal entropy, where everything was homogeneous and there was nothing but uniform randomness, going forwards or backwards would look the same... everything would just stay random-looking; no order would arise out of the random sea of chaos.

But instead, according to the standard cosmological picture of how the universe evolved, it started out in a very special, very unique, extremely low entropy state, sometimes called the initial singularity. That's all well and good, but there's one problem with it--out of all the possible states that the universe could go into, this one seems extremely special and extremely unlikely for the universe ever to have gotten into in the first place. So our theories don't explain how this initial state got set up. What's worse, though, is that our theories do explain how random bubble universes could spontaneously fluctuate out of a background universe, appear for a moment, and then disappear back into nothingness, and you can calculate the probability for this to happen. The disturbing thing is, if you calculate the probability that one of these phantom bubble universes suddenly appears out of nowhere in the current state that we find ourselves in today, it's unlikely, but nowhere near as unlikely as the scenario where the universe somehow managed to begin in the very special ultra low entropy state that it supposedly began in. The paradox is that if you adhere to standard Bayesian probability theory and you ask yourself "how did I most likely get here, given my current perceptions and memories?" the answer--according to at least some relatively convincing recent scientific theories--is that probably, the universe didn't exist at all before a few moments ago, and all of your memories, and even all of the world outside a tiny little bubble around you, doesn't exist. All of those memories seem to indicate that there was some past history before, but they could instead have just been conjured into existence out of nothingness only moments ago, and somehow it has convinced you to believe they actually happened. It could be that only our galaxy fluctuated into existence a moment ago, or only our solar system, only the earth--or as they explained it as we were sitting there, only the room we were sitting in! We sat there together, listening to the professors explain this, nodding our heads and asking questions. And I felt like we were in the movie The Matrix or something, and I was being told that maybe--just maybe--the entire world I have known is all an illusion.

That's the Boltzmann brain paradox. In Boltzmann's time it had a slightly different form, but it has evolved over the decades as our understanding has improved, and today it represents just as big a paradox as it ever did--if anything more so, because the most persuasive theories of our time point more directly toward it being true. Nobody knows how to solve this paradox, so we just sort of wave away the undesirable solutions to the equations that involve these phantom bubble universes and assume that, despite any evidence to the contrary, our memories are reliable and the world we live in and the past we remember are real.

A key part of the mathematics behind the Boltzmann brain paradox that the professors explained to us that day is the Poincaré recurrence theorem. It says that if you take a system evolving according to any of the usual equations that define classical or quantum mechanics, confined to a bounded region of phase space, and start all of the particles in a certain state, they will wander all around phase space for a long, long time, but eventually come back arbitrarily close to that exact same state. You can even estimate the time it takes to return, and it's extremely long--and it depends mostly on the size of the accessible phase space rather than on which particular initial state you pick. You could start the particles anywhere in phase space (I should explain what phase space is in the next part, I guess), and they will eventually come back to nearly the same state. But in between they will have wandered all around and had many adventures. This is called a Poincaré recurrence, and the time it takes for this to happen is called the Poincaré recurrence time. In order to calculate how unlikely it would be for the universe to get into the state that is usually thought of as the initial singularity, you just need to know the Poincaré recurrence time. It's basically how long it would take for all of the particles in the universe, moving around randomly, to just happen to wander into this unique state all at the same time.
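You can actually watch a Poincaré recurrence happen in a tiny discrete toy (my own sketch, and obviously a cartoon of real phase space): Arnold's cat map shuffles the points of an N x N grid in a way that is chaotic but volume-preserving, just like Hamiltonian dynamics, so on a finite grid every orbit must eventually return exactly to its starting point.

    # Poincaré recurrence in a toy discrete "phase space" (my own sketch).
    # Arnold's cat map is chaotic but volume-preserving, like Hamiltonian
    # flow; it permutes the finite grid, so every orbit eventually returns.

    N = 101            # grid size: our toy phase space has N * N states
    x0, y0 = 7, 13     # an arbitrary initial state
    x, y = x0, y0
    steps = 0
    while True:
        x, y = (2 * x + y) % N, (x + y) % N   # the cat map (determinant 1)
        steps += 1
        if (x, y) == (x0, y0):
            break
    print(f"recurred after {steps} steps")

The recurrence time here is tiny because the toy space is tiny; for a real box of gas, the analogous number is so large it's usually written as an exponential of an exponential.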

This brings us to wandering sets. According to Wikipedia, if you have a set of points in a mathematical space, and some rules that move those points around the space so that they follow trajectories... then if they keep coming back arbitrarily close to where they started, they are called "non-wandering". If, on the other hand, there is some little neighborhood around them which, after enough time, never comes back to overlap its starting position again for all eternity, they are called "wandering". They just never make it home again. In physics, this is applied to a "phase space" which represents the possible states that the universe could be in at any given moment--I'll explain more about it in the next part. And the rules which move the points around are simply the laws of physics... they tell you how to take some initial conditions and time-evolve the system into a new state where all of the particles have moved somewhere else. In the new state they could have different positions or different velocities or both. But they will still be located somewhere in phase space, the space of all possible states they could be in. This distinction between spaces that contain wandering sets and those which don't turns out to be exactly the distinction needed to rigorously define what is meant by "dissipation". To be continued...
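(Postscript: here's a minimal numerical contrast between the two behaviors, using two made-up maps of my own--an irrational rotation of the interval, which is measure-preserving, and a simple contraction, which loses volume and is the cartoon of dissipation:)

    import math

    # Non-wandering vs. wandering (my own toy): track how close each orbit
    # ever gets back to its starting point over many iterations.

    def closest_return(step, x0=0.2, iters=100_000):
        x, best = x0, 1.0
        for _ in range(iters):
            x = step(x)
            best = min(best, abs(x - x0))
        return best

    rotation = lambda x: (x + math.sqrt(2) - 1) % 1.0  # measure-preserving
    contraction = lambda x: x / 2                      # dissipative

    print("rotation:    closest return =", closest_return(rotation))
    print("contraction: closest return =", closest_return(contraction))

The rotation's closest return keeps shrinking the longer you run it; the contraction's never drops below 0.1, because after the first step the orbit has left for good. That never-coming-back behavior is what the wandering-set definition captures.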

wandering sets, part 3: Maxwell's demon

orangegray
I gave 3 examples of things that dissipate in part 2: friction, electrical resistance, and hurricanes. I feel like I understand fairly well why we call these dissipative, although I've always felt, or hoped, that there is some unifying principle that sheds more light on the subject and explains why these things and not others. But there's a fourth example that is far more interesting, and for that one I still don't feel like I really understand why exactly it's dissipative: computation.

Now, you might first think--maybe computation is dissipative because it involves the flow of electricity through circuits (whether those circuits are wires or microchips), but that's beside the point. First, as I understand it, any kind of physically irreversible computational process must necessarily dissipate heat and increase entropy. So this applies not just to electrical circuits but to anything we could conceivably use to compute an answer to something, including, for example, an abacus (of course, the amount of computation that can be performed by an abacus is presumably so tiny that you wouldn't notice). Second, it's not just the electrical resistance, because supposedly computers actually draw *more* electricity while they are in the middle of some intense computation than when they are just idling. There are many circuits which are on while the computer is doing nothing, but it's not being on that creates the entropy I'm worried about... it's the entropy created specifically by irreversible computation, by switching those circuits on and off in just such a way that the machine computes a simple answer to a more complex question fed to it. Beforehand, there are many possible answers, but afterwards, there is only one... for example, 42. This reduces the available microstates of the system from many to one, and therefore represents a reduction of entropy (which, remember, counts the number of available microstates). Because of the 2nd law, this cannot happen by itself without producing heat... it needs to produce enough heat that the entropy gained from the heat cancels out the entropy lost in the computation, for exactly the same reason that the earth must dump heat into its environment if evolution is to produce more highly organized organisms. So even a perfectly efficient computer, one which caused no net entropy gain for the universe, would still produce heat! (This is Landauer's principle: erasing a single bit of information must dissipate at least kT ln 2 of heat.)
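For a sense of scale, that Landauer minimum is easy to compute--here's a back-of-the-envelope sketch of my own, at an assumed room temperature of 300 K:

    import math

    # Landauer bound (my own back-of-the-envelope): erasing one bit must
    # dissipate at least k_B * T * ln(2) of heat.

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 300.0               # room temperature, K

    per_bit = k_B * T * math.log(2)
    print(f"minimum heat per erased bit at {T:.0f} K: {per_bit:.2e} J")
    print(f"per gigabyte (8e9 bits) erased: {per_bit * 8e9:.2e} J")

That works out to roughly 3e-21 joules per bit, many orders of magnitude below what present-day hardware dissipates per operation--so the limit is of conceptual rather than engineering interest, at least for now.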

The only exception to the above process is if, instead of taking a large set of inputs and reducing them to one output, all of the inputs and outputs correspond exactly in a 1-to-1 fashion--in other words, you use only reversible logic gates to build the computer. An example of an irreversible logic gate is an AND gate. It takes 2 inputs and has 1 output; it outputs "Yes" if both of the inputs are on, and "No" if either one of them is off. Another example is an OR gate, which outputs "Yes" if either input is on, and "No" if both are off. To build a reversible gate, you need the same number of outputs as inputs--say, 2 inputs and 2 outputs--so that if you ran the computation backwards, you could recover the question from the answer. For example, if you put 42 into the computer, it should be able to spit out what the ultimate question is, just as easily as going the other direction. This is the meaning of reversibility.
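At the truth-table level, reversibility just means the input-to-output map is one-to-one, and you can check that by brute force. Here's a quick illustration of my own (the CNOT gate in it is a standard reversible gate from the textbooks, not something from the discussion above):

    from itertools import product

    # A gate is reversible exactly when its input -> output map is a
    # bijection, so distinct inputs never collide on the same output.

    def AND(a, b):          # irreversible: 2 bits in, 1 bit out
        return (a & b,)

    def CNOT(a, b):         # reversible: 2 bits in, 2 bits out, 1-to-1
        return (a, a ^ b)   # second bit flips when the first (control) is 1

    for gate in (AND, CNOT):
        outputs = [gate(a, b) for a, b in product((0, 1), repeat=2)]
        reversible = len(set(outputs)) == len(outputs)
        print(f"{gate.__name__}: outputs = {outputs} -> reversible? {reversible}")

The AND gate sends three different inputs to the same output, so the question can't be recovered from the answer; the CNOT's four outputs are all distinct, so running it backwards is trivial.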

Maxwell's demon is a thought experiment that James Clerk Maxwell came up with which illustrates how weird this connection between entropy and information is. Imagine a little demon watching the individual molecules in a box, with a switch that can instantly slide a divider into the middle of the box. He can sit there and watch for the moment when each gas particle (normally bouncing around randomly in the box) is about to cross the boundary from one side of the box to the other. If he presses the switch at just the right time, he can deflect a gas particle back into the left side of the box without expending any energy. If he keeps doing this for hours and hours, eventually all of the gas particles will randomly wander into the left side of the box and get stuck there, because he puts in the partition just as they try to cross back over to the right. Because entropy is connected to volume (smaller volumes allow a smaller number of microstates), the final state has less entropy than the initial state, due to having half the volume. And yet no work was done and no heat was expended in the process! This seems to be a blatant violation of the 2nd law of thermodynamics. So what happened here?
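The demon is easy to cartoon in code (a made-up one-dimensional toy of mine, not a real physics simulation): particles bounce around a unit box, and the "demon" blocks any left-to-right crossing of the midpoint while letting right-to-left crossings through.

    import random

    # A 1-D cartoon of Maxwell's demon (my own toy model). Particles bounce
    # in the box [0, 1]; the demon reflects any particle about to cross the
    # midpoint from left to right, but lets right-to-left crossings through.

    random.seed(42)
    N, STEPS, DT = 100, 20_000, 0.01
    pos = [random.random() for _ in range(N)]
    vel = [random.choice((-1, 1)) * random.uniform(0.5, 1.5) for _ in range(N)]

    for _ in range(STEPS):
        for i in range(N):
            new = pos[i] + vel[i] * DT
            if new < 0.0 or new > 1.0:          # walls: elastic bounce
                vel[i] = -vel[i]
            elif pos[i] < 0.5 <= new:           # demon blocks this crossing
                vel[i] = -vel[i]
            else:
                pos[i] = new

    print(sum(p < 0.5 for p in pos), "of", N, "now in the left half")

Every particle ends up trapped in the left half with no work done on the gas--an entropy drop of N k ln 2--and the catch, as the next paragraph gets at, is in what the demon had to measure and record to pull this off.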

Well, in the real world, demons don't exist. And humans do not have the demon's supernatural ability to see individual gas particles whizzing around in a box. But what if we set up a computer to play the role of the demon? In principle, a computer could detect a gas particle much faster than a human, maybe even as fast as Maxwell's hypothetical demon. But if it does this, it has to either store the information about where each of the gas particles is, or temporarily watch each gas particle for a moment and then forget about it. If it stores this information, its memory has to keep growing without bound. If it wants to keep the storage from getting out of hand, it has to erase some of this information at some point... and erasure of information is an irreversible process. Because it is irreversible, it must dissipate heat. I mostly understand this part of Maxwell's demon. The other part I've always been a little fuzzy on, though... what happens if the computer chooses to just store more and more information in its memory? Then it will be filling up its memory with more and more information about the trajectories of the billions and billions of particles in the box. But does this in itself represent an increase in entropy? Or is it just the erasure of such information which increases entropy? It seems to me that storing anything at a memory location which could previously have taken multiple values, but is then set to a single value, represents a decrease in entropy. It would seem that storing it decreases entropy and then erasing it undoes that, increasing it again. But I must be thinking about things a bit wrong there. I admit, this is where my understanding has always grown a bit fuzzy.

In the next part, I hope to actually get to wandering sets, and by extension, the Boltzmann brain paradox, Poincaré recurrence, and Liouville's theorem. But maybe that's ambitious. To be continued...
