
I recently read Robert Laughlin's A Different Universe. This was a book suggested to me by pbrane a few years ago. I found it a good book and would recommend it to others. (Even though I mostly disagree with his conclusions.) The purpose of the book is to question the prevailing school of thought among physicists known as "reductionism". Laughlin, aside from his occasional propensity for taming giant gravitons from Taub-NUT space, is a condensed matter physicist. And in general, condensed matter physicists tend to be (slightly) less reductionist in their thinking than particle physicists. This is probably the main reason I chose/prefer particle physics over condensed matter as a career path; I've always been a strong believer in the power of reductionism, and it suits my thinking style better.

One thing I noticed about the book (much like Lee Smolin's book from what I've heard) is that what he actually says in any particular paragraph when you read the fine print is a lot weaker than the overall impression/theme he tries to convey with the book. For instance, Laughlin admits straight up in the preface "all physicists are reductionists at heart, myself included. I do not wish to impugn reductionism so much as establish its proper place in the grand scheme of things." However, nearly the entire book is dedicated to questioning the importance of the reductionist paradigm and exposing its alleged limitations... and occasionally he even uses phrases like "such and such is just another reductionist myth". I think he had a lot of interesting things to say. But there were also times when I thought he was stretching the truth pretty far, to try to get it to go along with his theme. (Something I suppose all authors do, to one extent or another.) His writing style is unique and entertaining, and he has a dark and twisted sense of humor--so naturally I enjoyed it. It also gives a historical account of many condensed matter innovations that were new and interesting to me; I'm familiar with most of the big events and revolutions in particle physics, but it occurred to me how little I know about the history of condensed matter physics while reading it--perhaps I just haven't cared as much to learn about it, or perhaps it doesn't get as much press. Sadly, I still have very little clue what a "superconductor" is, beyond the popular conception of it, although the book did help a little.

One thing that has always struck me as odd is when people use the term "emergence" in a way that suggests it is opposed to or in conflict with reductionism. Laughlin did this once in a while, and it bugged me. In my opinion, reductionism is the very basis for emergence, and you cannot have one without the other. Same goes for terms like "collective behavior". If you're using the word "collective" at all, you're presupposing there are a lot of interacting parts forming the collective--by definition, you can't have a collection of things unless you have individuals making up the collection! I will recognize "holistic" thinking as opposing reductionist thinking, but only in the sense that they are two different approaches towards understanding the same thing--each is needed at different times. In principle, reductionist thinking is the only one which has the promise for describing everything; in practice, nobody is omniscient or able to do arbitrarily fast mathematical computations, so some sort of holistic thinking is always required as well... and it often beats reductionist thinking to the punchline.

It seemed like Laughlin was drawing on a lot of intuition based on his vast knowledge of condensed matter physics. Given that he's a Nobel Prize-winning condensed matter physicist, and has had a lot of years of experience to build up a deep understanding of the subject, I felt that a lot of times I had to just "take his word for it" that he's correctly interpreting the significance of such things. It made me wonder whether, if I spent more time thinking about the emergent phenomena that condensed matter physicists study, rather than the more "fundamental" (a word he doesn't like) phenomena studied by particle physicists, I would have a different perspective on things... perhaps closer to his.

On the other hand, there were some examples he mentioned that I have worked on and I do understand, and it is clear to me he is either mistaken in these cases, or just stretching the truth in order to support his other points. Laughlin claims that, in many cases, there is no known way to derive the macroscopic properties of certain systems from the microscopic equations (and furthermore that we will never be able to do that for those cases). I can't speak for many of the other condensed matter examples he gives, but one example he mentions is phase transitions (and he even mentions spin systems in particular). I have personally published research on these, and I can attest to the fact that when you put the microscopic equations for interacting spins into a computer, it reproduces all of the collective macroscopic behavior correctly. In particular, you end up with phase transitions where you can correctly predict the critical temperature, critical exponents, correlation length, etc. all from the microscopic equations. All of the macroscopic qualitative properties "emerge" from the microscopic (which is another way of saying: all of the macroscopic qualitative properties "reduce" to the microscopic physics). That is, we do understand how the emergent phenomena reduce to the interaction of individual spins, at least in this case. My view is that emergence is purely a mathematical phenomenon... something that happens when you take the limit of a large finite collection of interacting things to infinity. As long as the system is large enough, it will appear "exact" for all practical purposes. You can model this on a computer, or sometimes even with paper and pencil, or just witness it happening in the real world. What happens is that the collective behavior of the system ends up not depending on the details of the microscopic behavior.
Different microscopic systems can have the same infinite-system-size limit, just as different high energy theories can have the same effective field theory at low energies (these are really the same statement). The converse is NOT true: a particular high energy theory will always result in the same low energy effective theory... in other words, the low-energy "collective" description reduces to, and is entirely determined by, the high-energy "individual" description, which is more exact and more "fundamental". The collective description would only be exact if the system size were literally infinite. You can do this mathematically and it makes perfect sense; however, in any physical system, the system size is still finite, so the collective description is still approximate even though Laughlin refers to it as "exact". However, for practical purposes, when the system size is 10^23 or more, it doesn't make any difference whether it's "exactly exact" or not. You can treat 10^23 as infinity, which is what Laughlin does. So in that particular case (the issue of exactness), the difference of his opinion from mine is purely a philosophical one.
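Since I brought up putting spin systems on a computer: the whole procedure fits in a couple dozen lines. Here's a minimal sketch (my own toy version, not code from any published work) of Metropolis Monte Carlo for the 2D Ising model, showing the ordered and disordered phases emerging on either side of Onsager's exact critical temperature:

```python
import math
import random

def ising_magnetization(L, T, sweeps, seed=0):
    """Metropolis Monte Carlo for the 2D Ising model (J = 1, periodic
    boundaries).  Returns the average |magnetization| per spin, measured
    over the second half of the run."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]              # start fully ordered
    m_samples = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # energy cost of flipping spin (i, j), from its four neighbors
            nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nn
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
        if sweep >= sweeps // 2:                    # discard equilibration
            m_samples.append(abs(sum(map(sum, spin))) / (L * L))
    return sum(m_samples) / len(m_samples)

# Onsager's exact critical temperature: T_c = 2 / ln(1 + sqrt(2)) ~ 2.269
m_cold = ising_magnetization(L=16, T=1.5, sweeps=1500)  # ordered phase
m_hot  = ising_magnetization(L=16, T=4.0, sweeps=1500)  # disordered phase
print(m_cold, m_hot)
```

Nothing in the update rule mentions phases or symmetry breaking; the nonzero magnetization below T_c and its disappearance above are entirely collective effects of the microscopic flip rule.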

Laughlin tries to get around the generally acknowledged fact that we do understand the connection between the individual and collective properties of many systems by saying "but you have to add some axioms from statistics in order to derive thermodynamics from statistical mechanics". His argument is that we discovered thermodynamics first, and we never would have derived the properties of large things just from looking at the physics of how a single molecule or atom works. Only by introducing extra axioms were we able to connect up the small with the large. In particular, the main axiom he refers to is the assumption that all of the microscopic states are a priori equally likely. I feel that there are several possible responses to what he's saying here, and I'm not sure which one is best, but I am sure that his claim doesn't hold up. First, I would mention that even if we had to add an axiom to connect the two, that doesn't invalidate the connection. If the axiom is a "necessary" truth, then it again illustrates that emergence is purely mathematical, and nothing truly "new" was really added. If on the other hand, you view the new axiom as reflecting an empirical (non-necessary) truth about the world (which is the view Laughlin takes, best I can tell), you should be able to describe a system (however imaginary) which does not obey the axiom. But I think there is a third possibility for what this axiom means which is closer to the actual truth. You could define a system where the microscopic states are not equally likely, however I would suggest that this just means you've labelled the states in an inconvenient way. There should always be a way to relabel the states in a more convenient basis such that they are all equally likely... then you can apply the standard equations of statistics to them, and all of the usual thermodynamic laws will emerge, having been determined and derived purely from statistics.
I don't know for sure that what I've suggested is the case, or if anyone has proven it, but it's what I would suspect. However, even if the additional axiom does represent a new empirical fact that was discovered, as may be the case, it would still not appear to me to go against reductionism... it's the type of axiom that is not going to have to be rediscovered again and again with each new system... you invoke it once as a part of your basic framework of assumptions about reality, and from then on you can derive the properties of any macroscopic systems if you are handed the microscopic equations... even if you have never encountered a particular system before. The catch is, of course, that you would need arbitrarily advanced mathematical expertise... and in practice, humans are pretty bad at math. So I would agree that there are many situations where no human is advanced enough to solve the equations... I expect posthumans (conscious machines) to be a significant improvement, but they will still have to deal with their own finite limitations of memory and computational ability, so this will still not obviate the need for holistic thinking (this is one thing I agree with Laughlin on entirely--as I suspect most non-strawman reductionists would).
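To make the "extra axiom" concrete: here's a toy numerical check (my own construction, using the standard textbook Einstein-solid setup) of how Boltzmann-like behavior emerges from nothing but the equal-a-priori-probability postulate plus counting:

```python
from math import comb

def multiplicity(N, q):
    """Microstate count for an Einstein solid: N oscillators sharing q
    indistinguishable energy quanta."""
    return comb(q + N - 1, q)

# One oscillator (the "system") in contact with a reservoir of N_res
# oscillators, with q_total quanta shared between them.  The only physics
# input is the postulate that every *joint* microstate is equally likely.
N_res, q_total = 200, 100
Z = sum(multiplicity(N_res, q_total - q) for q in range(q_total + 1))
probs = [multiplicity(N_res, q_total - q) / Z for q in range(6)]

# The probability of the system holding q quanta falls off geometrically:
# the ratios P(q+1)/P(q) come out nearly constant, i.e. a Boltzmann
# factor exp(-epsilon/kT) emerges from counting alone.
ratios = [probs[q + 1] / probs[q] for q in range(5)]
print(ratios)
```

The temperature here was never put in by hand; it shows up as the (approximately constant) decay rate of the ratios, which is the usual statistical-mechanics story in miniature.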

I found myself liking Laughlin; he's got a keen sense of humor and a practical outlook. I agree with him that physics will not be over after particle physics completes a "theory of everything". However, I think that there is a meaningful distinction between emergent and fundamental phenomena: condensed matter falls into the former and particle physics into the latter. They are both interesting, but for different reasons. Once a theory of everything is complete, the fact will still remain that humans are notoriously bad at math, and will not be able to deduce all of the logical consequences of that theory. Posthumans will have a better chance, but even they will be limited. For this reason, experiments will still continue to be necessary and we are not nearing an "End of Science" as John Horgan has suggested.

I put consciousness in the title, but haven't mentioned anything about it so far. I'd like to explain why a proper understanding of emergence makes any claims about quantum mechanics being related to consciousness foolish. What I have in mind here are two things: 1.) the idea that interpreting quantum mechanics requires a notion of conscious observers, something argued recently by an emeritus professor and a lab manager in our physics department (who were rightfully laughed at, and refuted the next week by people who knew what they were talking about), and 2.) ideas such as those proposed by Roger Penrose that the human brain is a quantum computer rather than a classical computer (in other words, the idea that you cannot fully describe/simulate a human mind with a classical computer). Incidentally, Penrose's book The Emperor's New Mind is another book I've started reading within the past year. (I have a stack of about 10 books that I've started, and Laughlin's is unfortunately the only one I'm actually close to finishing.) While I get the impression that both Penrose and Laughlin are highly intelligent, I disagree with them both (Penrose severely, and Laughlin only slightly). Of the two, I think Penrose is a lot more foolish, and his arguments a lot more filled with holes.

Emergence is what happens when you end up with a simple description of a large system that doesn't depend on the detailed exact description of the individual components of the system. It's a consequence of mathematically taking the limit of some large but finite parameter in the theory (for instance, the system size itself) to infinity. This simplifies the equations immensely, and allows a description that is approximately correct for all practical purposes, as long as you don't look too closely at what's going on in the details (aka "high energy behavior") of the system. Emergence happens at all sorts of levels... in particle physics, the theory of emergence is described by the Wilsonian renormalization group. In chemistry, it's described by the periodic table which tells us how to approximate atoms by a few classical parameters. In biology, there are all sorts of levels of organization and emergence, including DNA, chromosomes, cells, neurons, organs, etc. All of these are really the same type of thing going on. In all cases it's equally correct to say "the next level emerges from the last" or equivalently "the next level reduces to the last". In all cases, the exact description determines the approximate description (and not vice versa), and in all cases *the exact description is not necessary for understanding or predicting the qualitative properties of the collective system*. It's this last part, which I've italicized, which is relevant for understanding why Penrose is wrong about consciousness. Quantum physics is many levels of emergence down from consciousness. At every level, the details of the last level are irrelevant. Classical physics is just as good for describing biological systems as quantum physics, because classical mechanics is simply the limit of quantum mechanics as the number of individual quantum degrees of freedom is taken to infinity.
All biological systems have a large number of degrees of freedom (high entropy), and hence the (large → infinite) approximation is valid and all biological systems are describable classically. Penrose does not understand this, and hence he has been led to speculate that the mind is not a classical computer. As for Fred and Bruce, what they are saying is a lot less testable and more philosophical. However, the essence of why they are wrong is the same. While the word "observer" appears in most interpretations of quantum mechanics, it's intended to mean a macroscopic device (one with a large number of degrees of freedom) that becomes correlated with a microscopic quantum state. This has unfortunately led people to confuse "observer" with "conscious observer", where the word in this context means nothing of the sort. There is no possible way in which conscious observers could behave differently than non-conscious measuring devices (from the point of view of quantum mechanics), precisely because of emergence. Consciousness is an emergent phenomenon which operates on a level of emergence that is so far removed from quantum mechanics that its relevance to quantum mechanics is zero. Yes, it is determined at some level by the equations of quantum mechanics, but those equations are entirely unnecessary for its description--only classical emergent equations are necessary. I think Bob Laughlin would agree with me here (perhaps even more so, since he thinks the emergent equations are exactly exact, as opposed to just "exact for all practical purposes", which is my view).
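The "details of the last level are irrelevant" point has a baby version in pure statistics: the central limit theorem. Here's a quick illustrative sketch (parameters and names are my own) where two completely different microscopic variables flow to the same Gaussian macro-description once you aggregate enough of them:

```python
import random
import statistics

def standardized_sums(sampler, n_parts=500, n_samples=2000, seed=1):
    """Sum n_parts i.i.d. microscopic variables, n_samples times, and
    standardize the resulting macroscopic sums to mean 0, stdev 1."""
    rng = random.Random(seed)
    sums = [sum(sampler(rng) for _ in range(n_parts))
            for _ in range(n_samples)]
    mu, sigma = statistics.mean(sums), statistics.stdev(sums)
    return [(s - mu) / sigma for s in sums]

# Two very different "microscopic theories"...
coin = lambda rng: rng.choice([-1, 1])       # discrete two-state spins
jitter = lambda rng: rng.uniform(-1.0, 1.0)  # continuous displacements

# ...flow to the same macroscopic description: for a Gaussian, ~68.3% of
# the mass lies within one standard deviation, and both systems agree.
within_1sigma = lambda zs: sum(abs(z) < 1.0 for z in zs) / len(zs)
f_coin = within_1sigma(standardized_sums(coin))
f_jitter = within_1sigma(standardized_sums(jitter))
print(f_coin, f_jitter)
```

Only two "relevant" microscopic parameters (mean and variance) survive the aggregation; everything else about the microscopic distribution washes out, which is the same structure as a renormalization group flow to a fixed point.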

The last comment I have is about David Deutsch, father of the quantum computer and leading advocate of the Many Worlds Interpretation of quantum mechanics, and a personal hero of mine. For some reason, Deutsch identifies himself as a "non-reductionist", even though as far as I can tell I agree with everything he has to say regarding it (and I claim to be a true reductionist). I think Deutsch would probably disagree with Laughlin in most of the same places as me. And some of the things Deutsch believes in (such as the potential for building a large quantum computer, or even a conscious quantum computer) Laughlin has dismissed in his book specifically as a pipe dream "resulting from reductionist thinking". I think Laughlin would be as puzzled as I to hear Deutsch say he's not a reductionist. But do I agree with Deutsch about building a large quantum computer? It depends on "how large" you're talking about. I believe it is possible to build a 2048-bit quantum computer, which is enough to break RSA encryption as it's currently being used (ok, maybe a bit more, depending on error correction and such, but not too much more). It's also enough to provide a huge speedup for certain computational algorithms. I do not believe it's possible to build one that has enough bits for consciousness (I was on the fence when I read Deutsch's book 4 years ago, open-minded to the possibility that such a statement might be too anthropocentric, but I've since thought about it more). Conscious observers are inherently classical because their complexity requires a large number of degrees of freedom. While Deutsch has only suggested this as a hypothetical possibility (not something that would be done soon), I think he's falling prey to the same errors as Penrose, Bruce, and Fred. I would call those errors "not understanding emergence" whereas Laughlin would call them "reductionist errors"...
even though ironically none of them (Deutsch, Penrose, Rosenblum, or Kuttner) calls himself a reductionist, and all of them but Deutsch violate the spirit of reductionism with respect to the mind/body problem in philosophy. I, on the other hand, am a reductionist, yet I don't fall prey to Laughlin's "reductionist thinking" errors--go figure. :-)

P.S. In all of this, I forgot to include my definition of reductionism. A reductionist is someone who believes that the whole is equal to the sum of its parts (no more, no less). If you agree with that then you're a reductionist (by my account); if you don't, then you're not. Regardless of whether you're a reductionist, you can still take advantage of reductionist methods to analyze something by breaking it into its parts. And regardless of whether you're a reductionist, you can still take advantage of holistic thinking to grok the whole. But only a reductionist like myself will agree with the statement that there is nothing in the whole that is not contained in the parts.

Comments

azalynn
May. 6th, 2007 04:39 am (UTC)
But only a reductionist like myself will agree with the statement that there is nothing in the whole that is not contained in the parts.

Hmm. I guess that would make me a reductionist then, because it seems intuitively obvious that different parts = different whole. Therefore, the whole must be contained in the parts. Is that an oversimplification?
spoonless
May. 6th, 2007 10:40 pm (UTC)
To be fair to the non-reductionists, I would have to say it's a bit more subtle than that. I'm really oversimplifying it by boiling it down to one statement; nevertheless I think that statement captures the essential difference between how reductionists and non-reductionists tend to think about the world... but I think it takes a lot of context to understand exactly what it means.

Different parts can sometimes make the same whole. However, as a reductionist I believe that a different whole always has to have different parts... whereas many (but not all) non-reductionists believe that the whole is not entirely determined by the parts.
onhava
May. 6th, 2007 04:55 am (UTC)
I guess I should read the book, because I don't see what is supposed to be interesting or nontrivial about this, and I don't buy this simplistic "condensed matter vs. high-energy = emergent vs. reductionist" dichotomy. Quantum field theories are full of examples of really interesting emergent properties: confinement, spontaneous symmetry breaking, nonabelian Coulomb phases, free magnetic phases, etc.... What are these if not emergent? I would even say that in string theory gravity is really an emergent property of the worldsheet action. Holography is all about emergent space. I can't think of very many interesting things that high energy theorists do that don't involve emergent properties.

When you say that Laughlin says "there is no known way to derive the macroscopic properties of certain systems from the microscopic equations", I'm not sure what to think. I'm sure, for instance, that we'll never be able to derive a lot of biology, at least for some sense of "derive." For one, I don't think physics can ever really predict (or postdict) that life exists, since you have to have some particular unlikely equilibrium configuration of matter to start the process. So it's not like you can just run down the RG of your physical theory and derive an effective theory of interacting people. In that sense, of course you can't "derive" a lot of things. But on the other hand, once you know that certain things exist, you can definitely work out a lot of their properties from the underlying theory. Physics doesn't predict DNA, but it does explain its properties. So... I guess I just don't see what his point is supposed to be.

(Even a lot of sort of classic examples of things that aren't understood from the microscopic theory, like turbulence, I think probably will be eventually.... It's just a matter of understanding the right sort of critical phenomenon that marks the transition to turbulent flow.)
spoonless
May. 6th, 2007 10:56 pm (UTC)

I don't buy this simplistic "condensed matter vs. high-energy = emergent vs. reductionist" dichotomy.

Any time I see "emergent vs reductionist" as you've written above, I cringe. Emergence exists because of reductionism, and can only be studied via reductionism, so there is certainly no dichotomy there. Where I do see a difference (although not a dichotomy, just a spectrum) is between more fundamental and less fundamental levels of emergence, or levels of organization. High energy physics focuses on the more fundamental (more exact) levels whereas condensed matter people study the less fundamental, more emergent, or more "approximate" levels. At every level, there is the phenomenon of emergence. And reductionism is the way to study emergence, at any level. I would also agree that quantum field theory and string theory are full of examples of emergence... as is any part of physics.
pbrane
May. 6th, 2007 06:02 am (UTC)
Ah excellent! Now you make me feel guilty at not yet finishing my atheism post (which I will get to one of these days!)...

Too many particle theorists know too little about condensed matter theory, really. The fact that even you, who have published effectively condensed matter research, say that you don't understand superconductivity is just a simple example of what is amazingly prevalent in the particle physics community. Hell, the fact that 99% of what we do is based on single particle pair interactions is, I guess, just an extreme visualization of that. Sure, we do renormalization, which really is thinking about the interactions of an infinite number of (virtual) particles with our test probe, in a bath we deceptively call "the vacuum", but we never *think* about all those other particles...

Laughlin claims that, in many cases, there is no known way to derive the macroscopic properties of certain systems from the microscopic equations (and furthermore that we will never be able to do that for those cases). I can't speak for many of the other condensed matter examples he gives, but one example he mentions is phase transitions (and he even mentions spin systems in particular).

It's certainly true that certain systems simply cannot have their macrophysics derived from their microscopic setup - those whose microphysics have dangerous irrelevant operators. You never know for sure whether there is some higher (canonical) dimension operator lying around, which, after passing near a strongly coupled region of the renormalization group flow, picks up a large enough (negative) anomalous dimension and suddenly takes over the physics.

Certainly *some* phase transitions are well understood, but the 3d Ising model? Umm... isn't that "computationally intractable" in the strict complexity-theoretic sense? Putting the microscopic equations on a computer and "experimentally" determining the properties at the transition aren't in any way a solution - a solution would either be analytic, or maybe perturbative, or find some duality to a system which itself has an analytic or perturbative solution, at least in my mind.

I don't think that Laughlin is in any way saying that it isn't true that if you had a computer the size of the visible universe, and used it to crank out a lattice form of the Standard Model, you'd be able to predict the entire macrophysics of water crystallizing in a droplet, or any other small enough macrosystem. Right? He's just saying that there are some systems which will have strongly coupled physics at some scale, and will be *forever* theoretically intractable - while what you say about the RG flow is strictly true (it's deterministic in its flow from high energy to low), it can also be exponentially sensitive to initial conditions in certain strongly coupled cases.
spoonless
May. 6th, 2007 11:40 pm (UTC)

It's certainly true that certain systems simply cannot have their macrophysics derived from their microscopic setup - those whose microphysics have dangerous irrelevant operators.

I think this is an example where you would not get emergence... in other words, there is no unique low energy effective theory that's insensitive to the details of the UV physics. As you mention later (which is an interesting point I hadn't thought of) this might be a way in which Penrose (hypothetically) could be right (although it still seems incredibly unlikely for other reasons). However, I don't see why you say "the macrophysics cannot be derived from the microphysics". To the extent that there is a macrophysics, it can still be derived from the microphysics, which tells you everything... it's just that you don't get the usual phenomenon of emergence where you end up not needing to know the microphysics at all. Anyway, I think there are two different issues here and in this comment you mixed them up... unless I'm misinterpreting what you're saying.

Certainly *some* phase transitions are well understood, but the 3d Ising model? Umm... isn't that "computationally intractable" in the strict complexity-theoretic sense?

Maybe in the strict complexity-theoretic sense, but that didn't stop my computer from crunching the 5d Ising model in a matter of weeks. It all depends on how large a system size you're talking about. You don't need to plug in a ginormous system size to get a good idea/understanding of how things behave. You still get the same basic emergent phenomena showing up and can study it.

a solution would either be analytic, or maybe perturbative, or find some duality to a system which itself has an analytic or perturbative solution, at least in my mind.

Hmmm. So to you, only if we understand something perturbatively is it true understanding? I tend to think more and more of our understanding, both with regard to proofs in mathematics and physical insight, will rely on computers in the future. I mean... I get what you're saying... just because you simulate something doesn't mean you know why it behaves the way it does. True. But even if it isn't true understanding, it's still concrete evidence that there is nothing more going on besides the microscopic physics. That is, the whole is nothing more than the sum of its parts. This is really all I mean when I say I'm a reductionist.

ack... this is a rather incomplete response, but I just got a phone call and have to run. I will respond to the rest later. cheers,
pbrane
May. 6th, 2007 06:02 am (UTC)
cont...

Saying "emergence is described by RG flows" really sidesteps the point: something like N=1 Seiberg duality, while you can look at the result as an RG flow, is an amazingly more magical kind of emergence than the usual thing we do in particle theory normally. Looking at it from either low or high energy starting points, it's startling: who in their right mind would look at a large distance QCD(-like) system which is IR free, and imagine that it's actually UV free as well, described by a totally different QFT, whose fundamental d.o.f. are twisted up in totally nonlocal ways to make up your weakly coupled IR particles? Alternatively, in what way does knowing that you have this nice and pleasant AF QCD system, on which you can sit down and solve scattering problems on the lattice to arbitrary precision at the small scales - but then when you zoom out, it gets strongly coupled and a mess, then morphs into a completely different theory at long distance - one which is totally solvable, perturbatively, in terms of different variables; in what way does this help you, *if you don't know the "trick" that allows you to uncover the duality*? Very (VERY!) few systems will have dualities like this (that you can find!) explicitly, but most systems which have strongly coupled fixed points (which is really every system, somewhere, right?) will flow to "something" - we just won't be able to figure out what it is.

The fun for me was always just playing around with those systems where you *could* play these kinds of tricks (and looking for new systems like this!). But it's dangerous to fall into the trap of imagining this is the normal state of affairs.
spoonless
May. 7th, 2007 03:52 am (UTC)
Re: cont...
I don't think I would use the term "emergence" to refer to Seiberg duality. It's a duality between two IR theories, neither of which is any more fundamental than the other, right? I would reserve the term "emergence" for when you have an exact theory that has some limit that gives you an approximate theory, valid in a certain regime (usually, low energy).

I would say that if the physics is the same for two "theories", they are really the same theory, just written down in two different ways. That's what any duality is, right? (I've never studied Seiberg duality in-depth, although I'd like to... it sounds like great fun!)
pbrane
May. 6th, 2007 06:03 am (UTC)

You can treat 10^23 as infinity, which is what Laughlin does. So in that particular case (the issue of exactness), the difference of his opinion from mine is purely a philosophical one.

As long as you'll admit that he's right in saying that this is exactly what particle physicists may be doing whenever they do *any* QFT as well, right? The description of the world in terms of a system where particle number is not conserved (i.e. QFT) is really just the same as what you get when you take the many-body quantum mechanics problem to the infinite-size limit. This is a reflection of the fact that you can derive effective QFTs from a *non-relativistic* many-body quantum mechanics problem in that limit, and in fact it's not that uncommon (the linearization of every nonpathological nonrelativistic dispersion relation about a filled Fermi surface looks like \omega = a*k).
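As a sketch of the linearization mentioned in the parenthesis, take the simplest quadratic dispersion \epsilon(k) = k^2/2m and Taylor expand about the Fermi momentum k_F:

```latex
\epsilon(k) \;=\; \frac{k^2}{2m}
\;\approx\; \epsilon(k_F) \;+\; v_F\,(k - k_F) \;+\; O\!\left((k - k_F)^2\right),
\qquad v_F \equiv \frac{k_F}{m}.
```

Measuring energies from \epsilon(k_F) and momenta from k_F, the excitations near the Fermi surface obey the linear, "relativistic-looking" form \omega = v_F \tilde{k}, i.e. \omega = a*k with a = v_F.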

I pretty much agree with you about the statistics thing - he went off on a tangent there - nobody reasonably disputes taking equal likelihood of identical states as an axiom.

Once a theory of everything is complete, the fact will still remain that humans are notoriously bad at math, and will not be able to deduce all of the logical consequences of that theory.

Tsk tsk tsk... :) "Once a theory of everything is complete..." - let's say we figure out that String Theory is "right" (if you scatter two particles at energies approaching the Planck energy, they scatter like weakly interacting superstrings in 10 dimensions). And let's say we also deduce that *there is* a landscape (I know your advisor doesn't believe in it...) of quasistable vacua, and it turns out to be infinite, so you can't even do statistics on it properly... and let's say we figure out that the path through the landscape that any given region of the universe traverses as it cools is chaotic. What exactly have we gained by this "theory of everything"? If we don't know which of the infinitude of vacua infinitesimally close in parameter value to our own we're really in, and we don't know how we got here (and *cannot* without infinitely precise determinations of the initial conditions to a *cosmology* question[!!!]), then what exactly about our 'macrophysics' (i.e. anything at lower energies than, say, the GUT scale) do we now have *any* sort of handle on?

I'm not saying this is inevitable... I'm just saying that because it is *possible* (and indeed, even Shamit Kachru will admit it's not just possible but, if not likely, at least "envisionable"), reductionism may be far more limited than you are admitting. Sure, we may get lucky and there will be a reductionist description of some use at the fundamental scale, but it may instead be that our vacuum has no weakly coupled description at the Planck scale.

Consciousness and quantum mechanics. Fooey! I may have controversial (naturalistic dualism!) views on consciousness, but quantum mechanics sure doesn't play any role in them. It's certainly conceivable that QM matters there (there *could* be some dangerous irrelevant operators in the QM system in our brains that somehow make the correspondence principle break down, so that quantum effects make their presence known on much larger scales) - but I doubt it [and I doubt that, even if it does, it helps].
spoonless
May. 7th, 2007 04:16 am (UTC)

As long as you'll admit that he's right in saying that this is exactly what particle physicists may be doing whenever they do *any* QFT as well, right?

Of course. The holographic principle requires that there are only a finite number of degrees of freedom in the underlying theory. So local field theory, which involves a continuously infinite number of degrees of freedom at each point, is almost certainly an approximation. I don't think this is controversial, at least among string theorists. If, on the other hand, you're asking whether I think spacetime is literally a lattice... well, no. I doubt it, although it's conceivable.

What exactly have we gained by this "theory of everything"?

I think the main gain of a theory of everything would be understanding how the world works. (And in particular, shutting certain philosophers up.) I don't think anyone has seriously suggested that a theory of everything will have practical applications! Have they? :)

Consciousness and quantum mechanics. Fooey!

Yeah... in case you were wondering, that portion was not directed at you (I would not have expected you to disagree with me there). It's just something that I wove in there because I've also been thinking about it lately. We had some pretty fun(ny) colloquiums lately surrounding it.
easwaran
May. 7th, 2007 06:39 pm (UTC)
I've never understood what emergence is supposed to mean as contrasted to reductionism. Sometimes it just feels like a buzzword that got passed around with the '80s fad for fractals and "chaos theory" (though I also recall some large passages in Gödel, Escher, Bach that discussed reductionism and holism).

I've started to think some sort of anti-reductionism is a bit more plausible, once I started thinking of it at the level of explanation rather than just prediction. I assume that (in philosophical terminology) everything physical supervenes on the microphysical states of the world, but it seems eminently correct to say that for some phenomena the best explanation lies at a different level, and there may not even be a good explanation in terms of the microphysics. (Hilary Putnam has an example where he considers a microphysical explanation of why a square peg won't fit through a round hole, and the obvious macro-scale geometric one, and says that the latter is good and the former really isn't even an explanation - I don't think I buy that, but it's at least plausible.)

As for the quantum/consciousness stuff, I generally agree with you that the point of emergence is that the large scale is best explained at the large scale, so we can generally ignore the small scale. But the key word here is "generally" - occasionally some minor thing from the small scale percolates upwards (I think this is often more true in some phases than others, like turbulent flow as opposed to laminar flow or whatever they call it). It's at least conceivable that neurons and brains might be finely tuned machines that manage to magnify microscopic quantum interactions to the point of macro-scale significance.

The real problem with Penrose is that his Gödel-based arguments have absolutely no force at all. Even granting that for every recursively-presented axiomatic system, we could generate its Gödel sentence, we would only be committed to believing the Gödel sentence if we were antecedently committed to believing the axiomatic system. The position he tries to argue against says that there is a specific axiomatic system that can properly characterize our commitments (perhaps that's putting it too strongly, but that's the way he attempts to argue against it). But there's absolutely no reason to think that we can recognize what system this is, unless we have perfect introspective abilities about our own mind. We can accept the conditional T->G(T) for all axiomatic theories T and their Gödel sentences. We can also accept some theory T. But if we don't recognize T as the same under these two different presentations, then we don't get a contradiction.

Not to mention that there's no reason to suppose that quantum mechanics provides a non-Turing method of computation. In fact Scott Aaronson argues that quantum mechanics (or any physically possible computation method) doesn't collapse NP down to P.

I'll put the statistics stuff in a separate comment.
easwaran
May. 7th, 2007 06:51 pm (UTC)
If on the other hand, you view the new axiom as reflecting an empirical (non-necessary) truth about the world (which is the view Laughlin takes, best I can tell), you should be able to describe a system (however imaginary) which does not obey the axiom. But I think there is a third possibility for what this axiom means which is closer to the actual truth. You could define a system where the microscopic states are not equally likely, however I would suggest that this just means you've labelled the states in an inconvenient way. There should always be a way to relabel the states in a more convenient basis such that they are all equally likely... then you can apply the standard equations of statistics to them, and all of the usual thermodynamical laws will emerge, having been determined and derived purely from statistics.

I don't think there's a distinction between these two responses. I think what you've listed as the third option is most likely true, but irrelevant. Given any continuous probability distribution, it seems likely that there's some parametrization of the space on which the distribution is uniform. (Some of the phil. of prob. people in Australia I've talked with think there's something special about this, that probabilities are always just areas in some sort of epistemic space, but I don't see what would make area in this space any more fundamental than subjective probability.) But the interesting fact seems to be that the particular description we actually use gives rise to a uniform distribution. That's an empirical claim. Sure there's always some re-description on which it will (if the theorem I suggest above is true) but who cares about those other descriptions if we're not using them yet?
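The one-dimensional version of this relabeling claim is a standard fact (my own sketch, not from the thread): if X has a continuous CDF F, then U = F(X) is uniform on [0, 1], so pushing samples through their own CDF is exactly a reparametrization on which the states become "equally likely".

```python
import math
import random

# Sketch (my illustration, not from the thread): the probability integral
# transform as a "relabeling" that makes a distribution uniform. If X has
# CDF F, then U = F(X) is uniform on [0, 1].

rng = random.Random(1)

# Exponential(1) samples: decidedly non-uniform in the original labels.
samples = [rng.expovariate(1.0) for _ in range(100_000)]

# Relabel each state by its own CDF value, F(x) = 1 - exp(-x).
relabeled = [1.0 - math.exp(-x) for x in samples]

# The relabeled values fill [0, 1] evenly: each tenth-width bin gets ~10%.
bins = [0] * 10
for u in relabeled:
    bins[min(int(u * 10), 9)] += 1
fractions = [b / len(relabeled) for b in bins]
print(fractions)
```

Whether that mathematically available uniform description is the one nature "uses" is, of course, exactly the empirical question at issue here.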

I don't understand why people get so hung up on uniform distributions. After Bertrand's Paradox, shouldn't we know better? I guess I should read Jaynes' stuff on "maximum entropy" and the like to see if it makes any better sense.
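Bertrand's Paradox is easy to exhibit numerically (my sketch, not from the thread): two equally "uniform" ways of drawing a random chord of the unit circle disagree about the probability that the chord is longer than sqrt(3), the side of the inscribed equilateral triangle.

```python
import math
import random

# Monte Carlo sketch of Bertrand's Paradox (my example, not from the
# thread). Two natural "uniform" chord constructions on the unit circle
# give different answers for P(chord length > sqrt(3)).

rng = random.Random(2)
N = 200_000
LONG = math.sqrt(3)  # side of the inscribed equilateral triangle

# Method 1: pick two endpoints uniformly on the circle. The chord length
# is 2*sin(delta/2) for angular separation delta. Classical answer: 1/3.
hits1 = sum(
    2 * math.sin(abs(rng.uniform(0, 2 * math.pi)
                     - rng.uniform(0, 2 * math.pi)) / 2) > LONG
    for _ in range(N)
)

# Method 2: put the chord's midpoint at a uniform distance d from the
# center; the chord length is 2*sqrt(1 - d^2). Classical answer: 1/2.
hits2 = sum(2 * math.sqrt(1 - rng.uniform(0, 1) ** 2) > LONG
            for _ in range(N))

print(hits1 / N, hits2 / N)  # roughly 1/3 vs 1/2
```

Both samplings are "uniform" in some variable; the paradox is that nothing singles out which variable deserves the uniform measure.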
malathion
May. 11th, 2007 07:47 am (UTC)
Back to Kant, back to Kant... I should post flyers. :)
ikioi
May. 18th, 2007 03:56 am (UTC)
Classical physics is just as good for describing biological systems as quantum physics, because classical mechanics is simply the limit of quantum mechanics as the number of individual quantum degrees of freedom is taken to infinity.

Isn't part of the point of the Schrodinger's cat thought experiment that in some systems which are large enough to matter to human life, the randomness of quantum physics shows up as randomness at the large level? Classical mechanics holds that nothing is random, and that with a sufficient amount of information you could predict when the cat will die before it does. Doesn't quantum physics say that you cannot have "enough information" to make this prediction? If so, doesn't that leave the possibility that a major part of human decision making is "random all the way down"? Don't get me wrong; I don't think we need randomness in physics to rescue free will from determinism. I don't think a deterministic universe would be void of free will, but since the universe seems to be non-deterministic, couldn't that non-determinism manifest at the large scale, and specifically at the level of human thought?
spoonless
May. 18th, 2007 06:49 am (UTC)
A couple of points here...

First, yes it is possible to set up an experiment which amplifies quantum "randomness" to have effects on the macroscopic world. (If it could have no effect whatsoever, there would not have been any way for us to have discovered quantum mechanics.)

However, "large" systems (where large has to be defined carefully in terms of entropy) left to their own devices (that is, with no careful intentional intervention on our part) behave classically for all practical purposes. There are states of matter (at certain temperatures, pressures, or densities) where quantum effects become important, but that's not relevant for describing an animal or a human being.
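The "for all practical purposes" can be quantified; as a toy sketch (mine, not from the thread), the relative fluctuations of a macroscopic average over N independent constituents shrink like 1/sqrt(N), which is why averages over ~10^23 constituents look deterministic:

```python
import random

# Toy sketch (not from the thread): the standard deviation of the mean of
# N independent fair +/-1 "spins" scales like 1/sqrt(N). For N ~ 10^23
# the fluctuation is ~1e-12, i.e. deterministic for all practical purposes.

def std_of_mean(n_sites, n_trials=1000, seed=0):
    """Standard deviation of the sample mean of n_sites fair +/-1 spins."""
    rng = random.Random(seed)
    means = [sum(rng.choice((-1, 1)) for _ in range(n_sites)) / n_sites
             for _ in range(n_trials)]
    mu = sum(means) / n_trials
    return (sum((m - mu) ** 2 for m in means) / n_trials) ** 0.5

small = std_of_mean(100)    # about 1/sqrt(100)  = 0.1
large = std_of_mean(2500)   # about 1/sqrt(2500) = 0.02
print(small, large)
```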

The "for all practical purposes" is what makes your question about whether human beings are really deterministic somewhat ambiguous (and not terribly interesting). Human brains involve randomness... whether it's pseudo-randomness due to chaos theory and sensitivity to initial conditions, or true randomness due to quantum mechanics, doesn't make any significant difference to how the person behaves. Using either set of equations should work equally well, and should generate a person that everyone agrees is the person in question (including that person, upon introspection). In other words, you could build minds just as easily using classical laws (or classical computers) as you can using quantum laws (or quantum computers). To such an extent that nobody is going to be able to tell the difference. (And if you're a mind/body functionalist like me, the mind is defined by its function, so which set of microscopic laws is generating it is irrelevant.)
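On the "pseudo-randomness due to chaos" point, here is a minimal illustration (my sketch, not from the thread): in the logistic map at r = 4, two initial conditions differing by 10^-12 become macroscopically different within a few dozen iterations, so the dynamics is fully deterministic yet unpredictable in practice.

```python
# Minimal sketch (my illustration): sensitive dependence on initial
# conditions in the logistic map x -> 4x(1-x), a standard toy model of
# classical chaos. A 1e-12 difference in the initial condition is
# amplified (roughly doubling per step, Lyapunov exponent ln 2) until
# the two trajectories are macroscopically unrelated.

def logistic_orbit(x0, steps):
    """Iterate x -> 4x(1-x), returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-12, 60)

early = abs(a[5] - b[5])                              # still tiny
late = max(abs(a[i] - b[i]) for i in range(40, 61))   # order one
print(early, late)
```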
ikioi
May. 18th, 2007 04:36 am (UTC)
Emergence is what happens when you end up with a simple description of a large system that doesn't depend on the detailed exact description of the individual components of the system. It's a consequence of mathematically taking the limit of some large but finite parameter in the theory (for instance, the system size itself) to infinity.

Surely we would all consider law to be an example of emergence. Yet, I think none of us could show the theory in which one could take a large but finite parameter to infinity and end up with law. In fact, I bet none of us could even convincingly argue that such a theory must exist and be comprehensible to a conscious mind. Maybe there will one day be such a theory, or maybe such a theory is impossible. (I'm agnostic on that subject.)

We can carry this out to see why the idea of emergence does not depend on reductionism. It would be easy to conduct tests to see that there are things, many smaller parts, which seem to be necessary for law to emerge. For instance, we don't ever see law happening without people. We can go on and narrow down more and more conditions that give rise to law and without which law does not happen. This is what's known as finding a collection of things upon which law supervenes. Once we do this, we have a whole: law, and we have some parts: people, communication, etc. Still we do not have a theory, a means of reducing the whole to its parts. We know only that those are, in some sense, "parts" of the whole. We also do not know that such a theory will ever exist, or ever be understood by anyone conscious (human or transhuman). At this point it becomes easy to say that the idea of emergence does not depend on the idea of reductionism. Even if you disagree with my hypothesis, you must agree that *I* believe what I am saying. That means that I believe in an emergent property, law, without believing there is a theory that one can use to derive it from its parts - that is, without believing that it's reducible. For a reductionist, as you claim to be, all emergent properties are reducible: there is a practical and accessible theory describing the emergent property in terms of smaller bits. Having the belief that emergence depends on reductionism would seem to be the defining attribute of a reductionist. :-)

As an argument that reductionism is dangerous, I'd like to say that reduction is much more powerful and useful in some subjects than others. Math must surely be the most reducible subject, with parts of physics running a close second. Subjects like law, history, economics, and art are extremely irreducible and deal with lots of scenarios where it's utterly absurd to try to find a structure that allows the deduction of solutions. Misunderstanding that, and trying to force these more irreducible bodies of knowledge, which deal with more emergent phenomena, into global reductive systems is what gives rise to dangerous and intolerant belief systems like religion, racism, and Nazism. BTW, I'm not calling you a Nazi or anything. I know you most definitely aren't. At the risk of being cocky, I'd claim you aren't really a reductionist either, though. ;-)
ikioi
May. 18th, 2007 05:02 am (UTC)
I need to make a small correction and an apology. I'd like to retract this line: "At the risk of being cocky, I'd claim you aren't really a reductionist either, though." Honestly, I don't know what I meant by that, and I think it's offensive for me to tell you what you do or don't believe. So, sorry about that.