So there's this classic thought experiment called the "quantum suicide" experiment, originated by AI researcher Hans Moravec, and developed further by cosmologist Max Tegmark. It's a derivative of the Schrodinger's Cat and Wigner's Friend thought experiments, but not quite as ancient or well known.

The Quantum Suicide experiment is the closest thing we have to an experimental test that distinguishes between Copenhagen and Many Worlds. However, it is very one-sided: if Many Worlds is correct, you can become certain of it (though only the person performing the experiment can appreciate that certainty; nobody else in the world can). But if Copenhagen is correct, you will die, so you will never know which is correct.

The experiment goes something like this: Construct a device which connects the outcome of a truly random quantum observable, similar to the one used in the Schrodinger's Cat experiment, to a lethal device that kills the experimenter. Have the experimenter (you, let's say) repeat this many times in a row. Assuming Many Worlds is true, then what should you expect to happen? Well, in most of the worlds you will die, but there is guaranteed to be at least one world where you survive. You won't experience anything in the worlds where you die, so the rational thing to do would be to expect your next experience will be seeing yourself miraculously survive the experiment many times in a row. If you perform the experiment and witness this, as expected, then you can be pretty sure Many Worlds is true and Copenhagen is false. Unfortunately, if anyone else witnesses this, without being in the machine themselves, you will be unable to convince them that it wasn't just crazy luck or some kind of divine intervention, since neither of the two theories predicts that they should see you survive that many times in a row. (Many Worlds only predicts that you should see yourself survive, from your own perspective.)
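To see how lopsided the outside view is, here's a quick toy simulation of my own (purely classical random numbers standing in for the quantum observable, and 20 rounds instead of "many"):

```python
import random

# Toy model (my own sketch): run the n-round experiment from the outside.
# Almost every run ends in the experimenter's death, so an external observer
# would call 20 survivals in a row "crazy luck" -- which is exactly the point.
random.seed(42)
n_rounds, n_runs = 20, 100_000
survivors = sum(
    all(random.random() < 0.5 for _ in range(n_rounds))  # survive every round?
    for _ in range(n_runs)
)
print(survivors)  # expected about 100_000 / 2**20, i.e. usually 0
```

From the inside, of course, the count of survivor-perspectives is always at least one, no matter how many rounds you run.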

There's a variant of this which leads to something called Quantum Immortality, but that's considered much more controversial, and depends on other assumptions, so we won't get into that. (My personal feelings are that quantum suicide makes sense and would work assuming Many Worlds is correct, but that Quantum Immortality is an exaggeration/distortion of the thought experiment which leads to nonsensical conclusions.)

At work, we've talked about a lot of different versions of the quantum suicide experiment, because it's pretty relevant to themes we want to explore in the movie we're making (or rather, discussing making). And the most interesting version that's come up is one I've decided to call the "coin operated quantum suicide booth". It was proposed by my coworker as a way of attacking some of the assumptions I hold about observer selection (mostly stemming from Nick Bostrom's Self Sampling Assumption). We argued about it for a while, and at some point, he had succeeded in convincing me that I was probably wrong. But then after a lot more thinking about it, I changed my mind and decided I had been right all along, and do not have to give up the Self Sampling Assumption or any of my views on the anthropic principle.

In this version, there is a booth which contains the above described quantum suicide device. But there is also a coin that gets automatically flipped once you step into the booth. If it comes up heads, then after 10 minutes of waiting in the booth, the quantum suicide experiment is performed on you 1 million times in a row in rapid succession. If it comes up tails, then nothing else happens for 10 minutes, and then either way the door opens and you're allowed to walk out (if you're still alive).

Now imagine that your friend bets you that the coin will land on heads. What odds should you be willing to take on that bet? This hinges on what you think the chances are you'll win the bet, assuming you survive and walk out of the booth. Another related way of asking it is: what do you expect to see happening after you walk out of the booth? There are strong arguments to support several things being true here. 1.) you should be extremely certain that if you end up surviving, you will remember the coin coming up tails and you will win the bet. 2.) you should expect that when you walk into the booth, if you look at the coin, you will see it come up heads with 50% chance and tails with 50% chance, and 3.) whether you look at the coin should not affect in any way the chances that it will come up tails, or that you will remember it having come up tails, and 4.) if you do look at the coin while you're still in the booth, and you see heads, you should expect with 100% probability that after you walk out of the booth, you'll remember seeing heads and lose the bet.

The interesting thing about this thought experiment is that the four things above do not seem consistent with each other at first glance. And yet, I do think they are consistent, and if Many Worlds is true and there is any consistent notion of "what to expect to see next", then it implies they are all true. Before you walk into the booth, you should expect you will win the bet. This should be true regardless of whether you intend to look at the coin. If you don't look at the coin, there's nothing more to be said. In nearly all of the worlds where you survive, you do win the bet. If you do choose to look at the coin while you're in the booth, there's a 50% chance you'll see heads and a 50% chance you'll see tails. If you see tails, then you know you have won the bet. If you see heads, then you're in a really weird situation. You should expect now that you will *lose* the bet (even though before you looked at the coin, you thought you would win it). However, a more nuanced thing to say is that you should expect yourself to be killed in nearly all of the branches, and in the only one where you survive, you'll lose the bet. So therefore, it's useful to be prepared for that one case where you do survive... so be prepared to pay up! It seems as though, if you were really in this situation, you might be a bit afraid to look at the coin. Like it would ruin your chances of winning the bet. Because then there's a 50% chance you'll be in this weird situation thinking you're either going to die or lose the bet. But if you don't look at the coin, you'll never be in that situation. But the illusion here is that looking at the coin has somehow influenced what's going to happen. The only thing it really affects though is whether you know you're about to die and/or lose the bet. If you don't know, then you don't have to deal with as many weird feelings.
Although you still know that a lot of your future selves (roughly half of them) will die, and that in some very obscure branch of the multiverse, you will survive but lose the bet. (For simplicity, we can assume that the outcome of the coin itself is chosen based on some other quantum random variable, which could have been determined far ahead of time. If it's a purely classical coin, then I don't think it changes anything about this analysis, but it makes the whole thing a bit more difficult to think about since the outcome is pseudorandom rather than purely random.)
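Claim 1 above can be checked with straightforward conditional probability. Here's a little sketch of my own (using 20 firings instead of a million so the numbers stay printable; the function name is just my label):

```python
from fractions import Fraction

# Toy model of the booth (my own illustration): a fair coin is flipped; on
# heads, the suicide device fires n times, and each firing is survived in
# only half the branches.  Conditioned on walking out alive, how likely are
# you to remember tails (and win the bet)?
def p_win_given_survival(n):
    p_tails = Fraction(1, 2)                          # tails: survive for sure
    p_heads_survive = Fraction(1, 2) * Fraction(1, 2) ** n
    return p_tails / (p_tails + p_heads_survive)

print(float(p_win_given_survival(20)))  # ~0.999999 even for just 20 firings
```

With the full million firings the conditional probability of winning is astronomically close to 1, which is exactly claim 1, while the unconditional coin is still 50/50, which is claim 2.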

I'd like to write a bit more about the mind/body problem and the anthropic principle. Clearly things like cloning and the quantum suicide experiments described here tie in heavily to that, and call into question how to tie consciousness (in the sense of subjective personal experience or identity) to physical copies of our bodies. If nobody has written up a paper on this coin operated quantum suicide booth, I'm thinking it might be worthwhile.

The first time I used Feynman diagrams in a physics class, believe it or not, was not in Quantum Field Theory, where they are used most frequently, but in graduate Statistical Mechanics, which I took the year before. We weren't doing anything quantum, just regular classical statistical mechanics. But we used Feynman diagrams for it! How is this possible? Because the path integral formulation of quantum mechanics looks nearly identical mathematically to the way in which classical statistical mechanics is done. In both cases, you have to integrate an exponential function over a set of possible states to obtain an expression called the "partition function". Then you take derivatives of that to find correlation functions and expectation values of random variables (known as "operators" in quantum mechanics), and to compute the probability of transitions between initial and final states. This might even be the same reason why the Schrodinger Equation is sometimes used by Wall Street quants to predict the stock market, although I'm not sure about that.

One difference between the two approaches is what function gets integrated. In classical statistical mechanics, it's the Boltzmann factor for each energy state, e^(-E/kT). You sum this over all accessible states to get the partition function. In Feynman's path integral formalism for quantum mechanics, you usually integrate e^(iS), where S is the action (the Lagrangian for a specific path, integrated over time), over all possible paths connecting an initial and final state. Another difference is what you get out. Instead of the partition function, in quantum mechanics, you get out a probability amplitude, whose magnitude then has to be squared to be interpreted as a transition probability.
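To put the two objects side by side (a schematic summary of the standard formulas, with hbar and Boltzmann's constant written explicitly):

\[
Z \;=\; \sum_{s} e^{-E_s/kT}
\qquad\text{vs.}\qquad
K(x_f;x_i) \;=\; \int \mathcal{D}x(t)\, e^{iS[x]/\hbar}
\]

In both cases you are summing an exponential weight over a set of configurations (energy states on the left, paths on the right), and correlation functions come from taking derivatives of, or making insertions into, this sum.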

I was going to write about how these are very close to the same thing, but as I read more in anticipation of writing this, I got more confused about how they fit together. In the path integral for quantum mechanics, you can split it up into a series of tiny time intervals, integrating over each one separately, and then take the limit as the size of these time intervals approaches zero. When you look at one link in the chain, you find that you can split the factor e^{iS} into a product of 2 factors. One is e^{ip·Δx}, which performs a Fourier transform, and the other is e^{-iHt}, which tells you how to time-evolve an energy eigenstate in quantum mechanics into the future. The latter factor can be viewed as the equivalent of the Schrodinger Equation, and this is how Schrodinger's Equation is derived from Feynman's path integral. (There's a slight part of this I don't quite understand, which is why energy eigenstates and momentum eigenstates seem to be conflated here. The Fourier transform converts the initial and final states from position into momentum eigenstates, but in order to use the e^{-iHt} factor it would seem you need an energy eigenstate. These are the same for a "free" particle, but not if there is some potential energy source affecting the particle! But let's not worry about that now.) So after this conversion is done, it looks even more like statistical mechanics. Because instead of summing over the exponential of the Lagrangian, we're summing over the exponential of the Hamiltonian, whose eigenvalues are the energies being summed over in the stat mech approach. However there are still 2 key differences. First, there's the factor of "i". e^{-iEt} has an imaginary exponent, while e^{-E/(kT)} has a negative exponent. This makes a pretty big difference, although sometimes that difference is made to disappear by using the "imaginary time" formalism, where you replace t with it (this is also known as "analytic continuation to Euclidean time").
There's a whole mystery about where the i in quantum mechanics comes from, and this seems to be the initial source--it's right there in the path integral, where it's missing in regular classical statistical mechanics. This causes interference between paths which you otherwise wouldn't get. The second remaining difference here is that you have a t instead of 1/kT (time instead of inverse-temperature). I've never studied the subject known as Quantum Field Theory at Finite Temperature in depth, but I've been passed along some words of wisdom from it, including the insight that if you want to analyze a system of quantum fields at finite temperature, you can do so with almost the same techniques you use for zero temperature, so long as you pretend that time is a periodic variable that loops around every 1/kT seconds, instead of continuing infinitely into the past and the future. This is very weird, and I'm not sure it has any physical interpretation; it may just be a mathematical trick. But nevertheless, it's something I want to think about more and understand better.
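Written out, the imaginary-time substitution and the periodic-time trick described above are standard textbook facts. Substituting t = -iτ turns the oscillating weight into a damped one,

\[
e^{iS[x]/\hbar} \;\longrightarrow\; e^{-S_E[x]/\hbar},
\]

where S_E is the Euclidean action; and if τ is then made periodic with period ħβ = ħ/kT (the "1/kT" above, in units where ħ = 1), the sum over all periodic paths reproduces the thermal partition function:

\[
Z \;=\; \operatorname{Tr} e^{-\beta H} \;=\; \int_{x(0)=x(\hbar\beta)} \mathcal{D}x(\tau)\, e^{-S_E[x]/\hbar}.
\]

So inverse temperature plays the role of a compactified imaginary time, which is the dictionary behind finite-temperature field theory.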

Another thing I'd like to think about more, in order to understand the connection here, is what happens when you completely discretize the path integral? That is, what if we pretend there's no such thing as continuous space, and we just want to consider a quantum universe consisting solely of a finite number of qubits. Is there a path integral formulation of this universe? There's no relativity here or any notion of space or spacetime. But as with any version of quantum mechanics, there is still a notion of time. So it should be possible. And the path integral usually used (due to Dirac and Feynman) should be the continuum limit of this. I feel like I would understand quantum mechanics a lot more if I knew what the discrete version looked like.
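Here's a toy version of what I mean, a sketch of my own rather than anything standard: for a single qubit, chopping the evolution e^{-iHT} into n short steps and summing over every sequence of intermediate basis states is already a discrete "sum over paths", and it's exactly the same thing as multiplying the short-time matrices together.

```python
import numpy as np
from itertools import product

# Toy discrete "path integral" for a single qubit (my own construction):
# write the transition amplitude <1| e^{-iHT} |0> as a sum over every
# sequence of basis states {|0>, |1>} at the intermediate time slices.
H = np.array([[0.0, 1.0], [1.0, 0.0]])   # Hamiltonian: Pauli X (hbar = 1)
T, n = 1.0, 10
U_step = np.eye(2) - 1j * H * (T / n)    # first-order short-time evolution

amp = 0.0 + 0.0j
for path in product([0, 1], repeat=n - 1):
    states = (0,) + path + (1,)          # fixed endpoints: |0> -> |1>
    term = 1.0 + 0.0j
    for a, b in zip(states, states[1:]):
        term *= U_step[b, a]             # amplitude for one short hop
    amp += term                          # paths interfere: terms are complex

# Summing over all 2^(n-1) paths is exactly matrix multiplication in time slices:
assert np.isclose(amp, np.linalg.matrix_power(U_step, n)[1, 0])
```

As n grows (and with better short-time steps), this converges to the exact amplitude; the point is just that "sum over discrete paths" and "compose the time-slice matrices" are literally the same operation.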

Oh, one more thing before we move on to the quantum suicide booth. While reading through some Wikipedia pages related to the path integral recently, I found something pretty interesting and shocking. Apparently, there is some kind of notion of non-commutativity, even in the classical version of the path integral used to compute Brownian motion. In this version of the path integral, you use stochastic calculus (also known as Ito calculus, I think?) to find the probabilistic behavior of a random walk. (And here again, we find a connection with Wall Street--this is how the Black-Scholes formula for options pricing is derived!) I had stated in a previous part of this series that non-commutativity was the one thing that makes quantum mechanics special, and that there is no classical analog of it. But apparently, I'm wrong, because some kind of non-commutativity of differential operators does show up in stochastic calculus. But I've tried to read how it works, and I must confess I don't understand it much. They say that you get a commutation relationship like [x, k] = 1 in the classical version of the path integral. And then in the quantum version, where there's an imaginary i in the exponent instead of a negative sign, this becomes [x, k] = i or equivalently, [x, p] = iħ. So apparently both non-commutativity and the uncertainty principle are directly derivable from stochastic calculus, whether it's the quantum or the classical version. So this would indicate that really the *only* difference between classical and quantum is the factor of i. But I'm not sure that's true if looked at from the Koopman-von Neumann formalism. Clearly I have a lot more reading and thinking to do on this!
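I can't claim to derive [x, k] = 1 here, but the ingredient that makes stochastic calculus behave so differently from ordinary calculus is easy to check numerically: a Brownian path has non-vanishing quadratic variation, the Ito rule (dW)^2 = dt. A quick sketch of my own:

```python
import numpy as np

# The Ito rule (dW)^2 = dt underlies the "classical non-commutativity"
# mentioned above (this is just a numerical check of the rule, not a
# derivation of the commutator itself).
rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)  # Brownian increments over [0, T]
quad_var = float(dW @ dW)                     # sum of (dW)^2
# Any smooth path would give ~0 here as n grows; Brownian motion gives T.
print(quad_var)  # close to T = 1.0
```

That stubborn leftover term of order dt is what forces the Ito correction in the chain rule, and it's the same structural feature that shows up as operator ordering mattering.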

Meanwhile I've discovered a few more goodies, such as this video of Sidney Coleman's "Quantum Mechanics in Your Face" lecture:

Quantum Mechanics In Your Face

I had always heard about this famous lecture on quantum mechanics given by Sidney Coleman, but never watched it myself. I knew most everything in it, but was both surprised and pleased with seeing the way he presents it. I was especially interested to hear him say at the end of the lecture that the view of quantum mechanics he considers most correct follows the spirit of Hugh Everett. By invoking Everett's name rather than Bohr's, he seems to be aligning pretty strongly with the Many Worlds Interpretation, although in all fairness he does say that "many people have taken Everett's ideas and run in different directions with them", which possibly implies that he thinks people like Bryce DeWitt (who coined the term "many worlds") or David Deutsch distorted Everett's original ideas.

The list of high profile physicists I've seen now willing to come out in favor of Everett is pretty impressive. So let's see, we've got at least... Leonard Susskind, Raphael Bousso, Sean Carroll, Sidney Coleman, Max Tegmark, David Deutsch. They've all made comments that to me put them more in Everett's camp than Bohr's. And yet interestingly, Lubos Motl and Tom Banks both consistently identify with the Copenhagen camp and invoke Bohr's name over Everett when asked to explain quantum mechanics. But today I happened to run across Lubos linking to this very Sidney Coleman lecture saying it was a great explanation of quantum mechanics. So if Coleman invokes Everett and Motl invokes Bohr, but they both think they're on the same page... there must be a lot less difference between them than people realize.

Last week I read a really fascinating paper by Sean Carroll, where he presents a new derivation of the Born rule (something that's considered necessary for Many Worlds to make sense, but not for Copenhagen):

Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics

This is without question my favorite derivation so far. I had always had the strange combination of feelings that the derivation ought to be obvious, and that surely it wasn't as difficult or obscure as Deutsch, Zurek, and others made it sound. Carroll's derivation agrees much more with my intuition that the Born rule is epistemic (not something that requires decision theory or a notion of value to derive), and is implied by some pretty basic, obvious assumptions (specifically, the fact that changing the environment shouldn't change where you think you're located in a system, a principle he refers to as ESP = the epistemic separation principle).

This fit in very nicely to the ongoing conversation I've been having with a coworker about the Everett interpretation based movie we've been talking about making for the past year or so (no idea if that will ever happen, but he says each week that any week we'll probably start filming by next week). It spawned a whole side discussion on observer selection effects, the anthropic principle, quantum suicide experiments, and all kinds of related stuff.

The main thing to come out of it was a new thought experiment my coworker devised, which I must say, is totally brilliant. He was convinced that it undermined my view of quantum suicide experiments and meant that you can't reason about them the way that most people (like myself) who believe in the anthropic principle think you should reason about them. It did manage to confuse me a lot, and took a couple days of thinking for me to eventually realize how to make sense of it within my usual framework of thinking. But I did eventually feel like I resolved the paradox. I don't think he *quite* accepted my resolution though, and even I would admit that despite feeling confident about how it's supposed to work there is still something that seems pretty surprising, spooky, or counter-intuitive about it.

I mentioned that possibly, if nobody has proposed this particular thought experiment before, we ought to write an academic paper on it. Tentatively, I'm calling it the "coin operated quantum suicide booth". I can't think of a shorter name, but that basically describes exactly what goes on in it.

I think I'll get to describing the actual thought experiment in part 6, but for now I just want to note the relevance to this whole instrumentalism/realism series of posts I'm writing. What it boils down to I think is that Copenhagen and Many Worlds have both evolved over time (especially Copenhagen) and today they stand incredibly close to each other, such that it's sometimes hard to tell them apart. As I've mentioned many times, the only real difference is that Many Worlders consider the wave function "real" while Copenhagenists do not. But I think there is a specific reason Copenhagenists take the point of view they do rather than fully accepting Many Worlds. It's because they are cautious about making metaphysical commitments, yes, but they are cautious in one particular way. It turns out that if you really take Many Worlds seriously, then you have to use a lot of reasoning that's very deeply connected to what's known as the "anthropic principle". Nick Bostrom, one of my personal heroes, whom I was delighted to meet and talk with briefly at the Stanford Singularity Summit years ago, is a philosopher at Oxford who wrote the influential book Anthropic Bias: Observer Selection Effects in Science and Philosophy. (Yes, he owns the domain anthropic-principle.com, and yes, he's also known for the Simulation Argument, his reviews and analysis of the Doomsday Argument, his other popular book Superintelligence, and for the organization he co-founded as the World Transhumanist Association, now known as Humanity+). I read most of Anthropic Bias a long time ago (around 2002 I think? Maybe earlier?), and have since always thought about any kind of observer selection effects in the way he suggests (through what he calls the Self Sampling Assumption).

Anyway, in physics there tends to be a big divide between people who take anthropic reasoning seriously and those who see it as specious. I've always been one to take it very seriously. And what I've found is that the people who don't like the anthropic principle tend to be the Copenhagenists, while those who take it seriously tend to be the Many Worlders. Why? Because if you don't view the other branches of the wave function as merely mathematical (as the Copenhagenists do), then you have to think about the splitting of one observer into many. And reasoning about such splitting necessarily involves a lot of observer selection effects, otherwise known as "anthropic bias".

I'll explain the actual thought experiment we came up with in the next part.


As I mentioned in part 3, I had never heard of the Koopman-von Neumann formulation of classical mechanics until reading Tom Banks' 2011 post about probability in quantum mechanics on Cosmic Variance. But finding out about it makes so many things about quantum mechanics clear to me that were murky in the past. The main thing that's now crystal clear is this: quantum mechanics is a generalization of statistical mechanics. They aren't really two different theories; rather, quantum mechanics

*is* statistical mechanics... it's just that a central assumption of statistical mechanics had to be dropped in light of the evidence. I had made it most of the way to understanding this when I wrote my series on Wandering Sets in 2013. In some ways, I think it's probably the best thing I've ever written on this blog, even though I think it ended up being too long, meandering, and esoteric for my friends to follow all the way through. I want to write a popular physics book at some point where I explain these ideas more clearly, with pictures and more analogies and examples. What I've learned via KvN solidifies my hunch that QM and SM are really the same theory.

I think one of the first things any student is struck with when they take their first course on quantum mechanics is how different the math is from classical mechanics or stat mech. In classical mechanics, you have lots of differential equations that come from a single important master entity called a Lagrangian, and if you want you can write this in an alternate way as something similar called a Hamiltonian. But all of the variables in the theory just stand for regular real numbers (like 2, pi, 53.8, etc.) that describe the world. In quantum mechanics, you start from the assumption that there is a complex Hilbert space, acted on by operators. And you can write down a Hamiltonian, which you're told is an analog of the Hamiltonian used in classical mechanics. The Hamiltonian seemed like a weird way of writing the Lagrangian in classical mechanics, but in quantum mechanics it takes on a more important role. But the "variables" used in the quantum Hamiltonian are not ordinary real numbers, they're operators. These operators correspond to observables (things you can observe about the world), but instead of being a single number they are more like a technique used for making measurements and getting a set of possible results out with associated probabilities. And instead of these operators acting on states in a more familiar space (like the ordinary 3-dimensional space we live in, or the phase space used in statistical mechanics), they act on states in a complex Hilbert space. Complex numbers like 5+i play an important role in this space, and yet as a student there's really no way of understanding why or what the purpose is. You're just asked to accept that if you start with these assumptions, somehow they end up predicting the results of experiments correctly where the corresponding classical predictions fail.

There were many reasons why I ended up leaning towards many worlds rather than other interpretations. I've always preferred representational realism to instrumentalism, so that was one reason. Another was locality (reading David Deutsch's 1999 paper on how quantum mechanics is entirely local as long as you assume that wave functions never collapse was the most influential piece of evidence that convinced me.) But there was a third reason.

The third reason was that whenever I had asked myself "what's the essential difference between classical mechanics and quantum mechanics?" it came down to the idea that instead of regular numbers representing a single outcome, you have operators which represent a set of possible outcomes. In other words, instead of reality being single threaded (one possibility happens at a time), it's multi-threaded. Things operate in parallel instead of in series. This especially resonated with my computing background, and my hope that one day quantum computers would be developed. I knew that it was a little more complicated than just "replace single-threaded process with multi-threaded process", but I thought it was the biggest difference between how the two theories work and what they say.

Learning about the KvN formalism hasn't completely destroyed my preference for Many Worlds, but it has obliterated my view that this is the most important difference between the theories. I now understand that this is just not true.

While I was writing my wandering set series in 2013, I discovered the phase space formalism of quantum mechanics (and discussed it a bit in that series, I believe). This was very interesting to me, and I wondered why it wasn't taught more. It demonstrates that you can write quantum mechanics in a different way, using a phase space like you use in statistical mechanics, instead of using the usual Hilbert space used in quantum mechanics. That was surprising and shocking to me. It hinted that maybe the two theories are more similar than I'd realized. But even more surprising and shocking was my discovery this year of KvN, which shows that you can write statistical mechanics... ordinary classical statistical mechanics... in an alternate formalism using a Hilbert space! What this means is that I was just totally wrong about the number/operator distinction between quantum and classical. This is not a difference in the theories, this is just a difference in how they are written down. Why was I mistaken about this for so long? Because the standard procedure for taking any classical theory and making it a quantum theory is called "canonical quantization", and the procedure says that you just take whatever variables you had in the classical theory and "promote" them to operators. It's true that this is how you can convert one theory to the other, but it's extremely misleading because it obscures the fact that what you're doing is not making it quantum but just rewriting the math in a different way. What makes it quantum is solely the set of commutation relations used!
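To make that last point concrete, here is a small sketch of my own (not from any particular text): in a truncated harmonic-oscillator basis, "promoting" x and p to operators is just a choice of representation, and the quantum content sits entirely in the commutation relation they satisfy.

```python
import numpy as np

# Canonical-quantization sketch in a truncated N-dimensional Fock space
# (hbar = 1 throughout; the truncation is only so the matrices are finite).
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)           # "promoted" position operator
p = 1j * (a.conj().T - a) / np.sqrt(2)      # "promoted" momentum operator

comm = x @ p - p @ x
# Apart from the corner entry spoiled by the truncation, [x, p] = i -- and
# imposing that relation is what actually makes the theory quantum.
assert np.allclose(comm[:N-1, :N-1], 1j * np.eye(N - 1))
print(np.round(comm.diagonal().imag, 3))
```

Set the commutator to zero instead and the exact same Hilbert-space machinery describes a classical (KvN-style) system; the formalism is shared, the commutation relations are not.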

to be continued in part 5...

Several new things have happened since then. I started thinking along these lines to come up with material for the microtalk I gave at Freezing Woman in March 2015. I ended up deciding to avoid the dicey subject of realism vs instrumentalism, and for the most part avoided the entire topic of quantum mechanics, focusing instead on the question "what is the universe made of?" and keeping to things I feel I understand well, such as relativity and some aspects of quantum field theory. By the time I finished putting it together, I realized I had a pretty good case that something more like neutral monism, rather than materialism, is the right way to look at metaphysics. The idea that metaphysics is even meaningful sort of presumes realism over instrumentalism. And yet, because I defined "neutral monism" in my talk as "none of the above" (metaphysical theories), I felt I left it a little bit open that perhaps instrumentalism is true after all and we just need to give up metaphysics entirely.

After returning from Freezing Woman, I spent a month and a half expanding my 5 minute microtalk into a 15-minute video presentation, which I released on Vimeo and linked to on Facebook and Google+. A handful of my friends viewed it and gave me positive feedback, some of them resharing it, but overall it didn't get a lot of attention. Then later, I found out that someone on Youtube had downloaded the Vimeo video and uploaded it to their Youtube channel, where it did get a lot of attention. (13,467 views, 164 upvotes, and only 5 downvotes... with lots of positive comments from people, many asking if there will be a sequel!):

Materialism and Beyond: What is Our Universe Made Of?

This weekend I uploaded it to my own Youtube channel, which I had been meaning to do (apparently, hardly anyone is on Vimeo; I originally chose it primarily because I don't like the idea of ads being inserted in the middle of my video). So far not much action there either, but we'll see, I guess.

I can't remember when it was, but at some point this year (maybe around May?) I ran across a *really* interesting post that my advisor in graduate school, Tom Banks, made defending the Copenhagen Interpretation of quantum mechanics:

http://www.preposterousuniverse.com/blo

(There's another version of it hosted by Discover magazine, but the mathematical equations don't show up right there.) I was shocked that this had been online since 2011 and I somehow managed not to find it until 2015. Not only because it was written by someone I knew personally and hold in great regard, but because it basically explains almost everything I've ever wanted to understand about quantum mechanics in one shot. I often wanted to ask him about this subject, but I was always too shy to do it. I guess I felt like, to him, it might be considered a waste of time. But if he could have summed it up this well in one sitting, I would surely have asked and gotten a lot of benefit out of it. Sadly, I've finally found it long after quitting physics.

So, it took me a while to understand everything he says there. He does make a lot of simple mistakes in his explanation, which confuses things. (For example, he uses the term "independent" several times to mean "mutually exclusive"--two very different things, as anyone who knows anything about probability, including him, is well aware.) Nevertheless, there is a core of what he's saying that turns out to be very important. At first when I read it I sensed that, but it hadn't fully sunk in. Since then, I have read a lot more, gotten into discussions and debates with people coming from different perspectives on this (one venue being a mailing list I was invited to as a consequence of people liking my video), and mulled it over in my head. And gradually, it sank in, and I feel like I have now absorbed the message. And it's a really important message that I had sort of suspected before but hadn't really understood.

This week I was thinking through this stuff again and went back to the Wikipedia page on the Koopman-von Neumann (KvN) formulation of classical mechanics (for about the third time since reading my advisor's post about KvN, which I had never heard of until then), and, in connection with the mailing list I'm on, read some more about Quantum Darwinism and Zurek's existential interpretation of QM. And suddenly, halfway through the week, I felt like everything clicked. After all of these years, I **finally** understand Copenhagen. And it's a lot more coherent than I had imagined.

This doesn't mean I have converted to Copenhagenism. I'm still not sure whether I prefer Copenhagen, Many Worlds, or something in between. (And almost certainly, the right answer is somewhere in between, at least compared with Bohr's original ideas and Everett's original ideas.) And while I call my advisor a Copenhagenist, I'm not even sure he uses that term. I think his view is a modern version of Copenhagen, one that includes the insights that have been gleaned since the time of Heisenberg and Bohr (although I think he denies that those new insights have significantly changed anything about the interpretation).

I've also read a bit more about consistent histories lately and decided that there are slight differences between it and Copenhagen: it's not just a clarification of Copenhagen, because in some ways it does away with the idea of quantum measurement (or at least makes it less central to the theory). I still think QBism is a form of Copenhagen, although some of its advocates seem to think it has features which distinguish it from Copenhagen.

At any rate, using the broad definition of Copenhagen which I have always used (to include modern versions of it rather than a more narrow one focusing strictly on Bohr and Heisenberg's writings), I'd like to try to sum up the new insights I've absorbed. This was my intention in writing this post, but since I've only introduced that intention and not gotten there yet, I'll start my summary in part 4.

To be continued...

My first exposure to quantum mechanics was in high school, in 1994 when I read David Z Albert's book Quantum Mechanics and Experience. He's a philosopher of quantum mechanics whom I still have great respect for. At the time of reading his book, where he outlines most of the major competing interpretations, it seemed like David Bohm's interpretation made the most sense to me so my personal suspicion was that something like Bohm's interpretation was probably right. Then in the late 90's, after taking my first couple actual quantum mechanics courses, and then especially after taking David Finkelstein's Quantum Relativity course, I realized that the Copenhagen Interpretation was a lot more sophisticated and made more sense than I had originally thought. So I leaned more towards Copenhagen by 1998. But something about it still didn't sit right with me so I figured either I had to try harder to understand it, or there was something missing that Copenhagen leaves out.

I started graduate school in 2003, and during the same year, I think, read David Deutsch's The Fabric of Reality, which convinced me pretty thoroughly that the Many Worlds Interpretation was right and Copenhagen was just silly nonsense. After taking graduate level quantum mechanics, and especially after taking quantum field theory in 2004-2005, I learned more about why Bohm's theory is not taken seriously by most physicists. But my conviction that Many Worlds is the only proper way to understand quantum physics weakened somewhat. After working with my advisor, a Copenhagenist, and hearing some of the things he had to say about this, my convictions had weakened further by the end of graduate school in 2009, although I still considered/consider myself someone who leans instinctively in the Many Worlds direction.

Another thing I learned, only late in graduate school, is that at least one physicist whom I respect a lot, Gerard 't Hooft, believes that something in the spirit of Bohm's interpretation may ultimately be right (although he would reject Bohm's actual interpretation, he thinks there may be some underlying deterministic hidden variable theory behind quantum mechanics). This completely shocked me the first time I found out, since I had assumed any reasonably intelligent people who had thought about the subject had moved beyond hidden variable theories. But it was another thing that further weakened my convictions that anyone (including the Many Worlds advocates) has really figured it all out.

So now, in 2015, this is an issue I've been debating with myself for over 20 years. As you can imagine, I've thought about it quite a bit. And I've learned quite a bit along the way, but I've also realized that it's a very tough question. There is a lot of consensus on some issues, but still almost none on others.

But the main remaining split in the physics community, I feel, is between some kind of broadly Many Worlds (Everettian) interpretation and a broadly Copenhagen (Bohr and Heisenberg) interpretation. In the Copenhagen camp, I would include consistent histories (which I don't think of as an interpretation in itself, but rather a clarification of how to apply Copenhagen in a cosmological context) and Quantum Bayesianism (a more modern and sophisticated version of Copenhagen, where probability is treated more carefully). I would certainly *not* include objective collapse interpretations like GRW or Penrose, as they are a very different beast--even though many people tend to confuse these with Copenhagen.

The main point of contention between Many Worlds and Copenhagen is exactly the debate between scientific realism and instrumentalism. Copenhagen takes a very metaphysically conservative approach where you're very careful only to talk about what can be directly measured by an observer. These are the only statements which logical positivists like Bohr believed were meaningful. But if you understand quantum mechanics, and you also happen to believe that there is at least some part of the external world which might actually be "real", then you must accept the Many Worlds Interpretation instead of Copenhagen. Copenhagen gives only a 1st person perspective of the world, whereas the Many Worlds Interpretation gives a 3rd person perspective of the world, and assumes that there is some world which exists independent of classical observers like us who happen to reside in it.

What do I mean by an "external world"? A world beyond the senses. I believe that there is something which generates the perceptions I have--ie, there is more to this world than just what's in my mind. Even though the history of physics (especially during the 20th century) has been one long progression of scientists realizing they have to give up one or another assumption about what reality is exactly, I have always held onto the belief that there does exist some external physical world behind it all. Why have I held onto this assumption, and what do I mean by it? Well, first and foremost I mean that there is a difference between a waking state and a dreaming state. Our world is not like a dream in that anything goes; there are certain patterns and rules which guide the perceptions we have. It's even possible for us to have illusions or hallucinations, where we think we perceive one thing but it's actually something else behind those perceptions--it's just that our perceptions got distorted or mixed up somehow along the way. So far, I suspect even most hardline instrumentalists would agree with me. Where we would disagree is in how we talk about these patterns. I think that if nothing else, the patterns themselves constitute something "real" which would exist independently from the human mind. If all humans and animals suddenly disappeared from the universe, I would expect there still to be the same structures remaining. In fact, I think that a lot of these structures were here before we got here.

By structures and patterns I don't require anything more than something like pure information or mathematics, and indeed, modern physics seems to back up the idea that this is what the world is made of, not matter or energy or some kind of "substance" like the original materialists thought. Substances, whether they be physical or mental, were just a bad idea. Substance isn't something that exists.

But interestingly, this is where the realism vs instrumentalism debate crosses over into the materialism debate. Why have I not named an alternative to materialism for it to be pitted against? Because I think the general consensus is that there isn't really a good alternative theory to materialism. There appear to be various problems with materialism, but the problems with the original alternatives--dualism and idealism--are much worse, so these are hardly discussed any more. Usually, I think the critics of materialism these days are just called non-materialists, since they don't buy into materialism but may not have a fully worked out theory to replace it. However, if there is any viable alternative to materialism still alive today, I think it is phenomenology. So perhaps I should call this the materialism vs phenomenology debate.

Phenomenology is another philosophical school that, like pragmatism, seems somewhat intertwined with instrumentalism and positivism historically. And as I mentioned, instrumentalism and positivism are so close that I'm not actually sure what the difference is--I think they are about as interchangeable as materialism and physicalism, although positivism has acquired a negative connotation these days (perhaps due to the excesses of logical positivism) whereas instrumentalism still carries weight with many philosophers and especially physicists.

I have a book on phenomenology at home (Introduction to Phenomenology by Sokolowski), in fact my icon on Google+ is a picture of me reading it. I'll admit that I never quite finished it, although I skimmed enough of it to get a sense. It's mostly written in the vein of Husserl's thought. The other really famous founder of phenomenology, Heidegger, tended to be a lot more kooky and spiritual, closer to the idealism of Hegel, whereas Husserl's analysis was more rigorous and scientific.

The main reason I got interested in trying to understand phenomenology is because I've always wanted to understand the Copenhagen Interpretation and yet I don't feel like it's possible to understand it from within the philosophical framework within which I generally think (materialism/realism). Some people would say that phenomenology is neutral about the materialism/non-materialism debate, but I'd definitely say it counts as an alternative. (I'm less sure about whether it's a viable alternative.) It's not exactly the same as dualism or idealism, it's something different. In some ways, I'd say it's an intermediate position between materialism and idealism, but which avoids dualism. It does this by being more instrumentalist about the whole thing and avoiding metaphysics as much as possible.

I think phenomenologists would agree with me that substances are a silly thing of the past. But there's a key point where I see the phenomenologists and Copenhagenists of the world disagreeing with me (or at least, with my default set of beliefs). Whereas I can imagine a world without experiences--namely, one in which there is structure but no sentient beings--I think the position of the phenomenologist/instrumentalist/Copenhagenist is that such a world is meaningless: that there is no sense in talking about structure with nobody there to experience it.

And that's an intriguing position to me, which I wouldn't just dismiss offhand. In fact, in order to really be sure about this, I should give it some more thought. But basically that belief seems to be required of instrumentalists because in order to make a meaningful statement about the world you have to refer to something which can be measured or experienced. If there is nobody there to experience it, then you're just making meaningless statements. The problem with this, and the reason why I have never been fully willing to accept it, is that it seems to imply that the early universe didn't really exist, or that somehow it is meaningless to talk about the early universe (before life evolved).

With our telescopes we can see the light from ancient galaxies which formed shortly after the big bang; that light is just now reaching us. So from my perspective, it seems very reasonable to say that these galaxies exist/existed as part of the universe. To me, they are not simply theoretical constructs that we invented to help explain the strange patterns of light that dazzle our eyes. They are not simply a summary of the data streaming into our eyes, they are the explanation for it!

I say that, however, with a bit of hesitation. As I mentioned, the history of physics has been one long "screw you" to realists. Many questions which appeared to be meaningful ended up being not meaningful, and as it turns out you have to be extremely careful about which questions you ask, otherwise you end up asking something which isn't meaningful. So we should rightfully be skeptical when someone tells us to take any particular physical model seriously on a metaphysical level. But it does seem to me that the only way Copenhagen makes sense is if you go whole hog with this and reject any and all "theoretical" constructs that scientists use to explain the data--including the existence of those distant galaxies that appear to have existed before any conscious observers came on the scene.

If a tree falls in the woods but nobody hears it, does it make a sound? A pure instrumentalist would have to answer no to this, whereas a realist such as myself answers yes. To a phenomenologist, sound is a perception, it's a part of a conscious being's internal phenomenology. Yes, vibrations in the air trigger it (or at least, that is the theoretical construct which scientists have come up with to try to predict when the sensation of sound will be experienced). But the sound itself is not the vibrations of air molecules, it's the experience, or as they say in philosophy of mind, the "qualia". Sound is what it's like to hear something. And that can't exist unless someone is around to hear it.

Now, I think that many Copenhagenists would argue that I'm holding up a straw man of Copenhagen. They would probably say that it's possible to subscribe to the Copenhagen Interpretation and not go all the way to phenomenology or pure instrumentalism. But I tend to think that's because they just haven't analyzed the basic philosophical assumptions of the interpretation deeply enough. (On the other hand, I'm also very open to the possibility that it's myself who hasn't analyzed them deeply enough--but I've been trying for 20 years to make sense of Copenhagen, and I still haven't managed to do it.)

Why do I think Copenhagen implies the extreme instrumentalism I describe above? Simply stated, because large systems are built out of tiny systems. All Copenhagenists would agree that it's meaningless to ask about the state of a single atom before it is measured by some large classical measuring device. There is a quantum wave function which can be used to predict--probabilistically--what the outcome of a measurement of that atom's properties (such as its location or momentum) will be. But that wave function is not viewed by Copenhagenists as a description of the objective state of the atom. It's viewed simply as a tool for calculating probabilities. People like myself, who do tend to think of the wavefunction as describing the objective state of that atom, call ourselves "Everettians". (Or some, Bohmians, etc., but Everett/Many Worlds is the realist interpretation taken most seriously today.)

What many Copenhagenists would say is that a single atom has no objective state, but a large collection of atoms does. They would say that all you can talk about for the single atom is what might happen if someone were to try to measure certain properties of it. To say that the atom is at a particular location is wrong, and even to say that it is in a superposition of being at several locations is wrong. According to a Copenhagenist, any question about where the atom is before you measure it is "meaningless", because an atom is just a theoretical construct that we use to summarize the sensory data we take in.
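Both camps agree on the calculational core described here: the wavefunction assigns probabilities to measurement outcomes via the Born rule. A minimal sketch (the state and measurement bases are my own illustrative choices, not anything specific from the post):

```python
import numpy as np

# A qubit wavefunction as a "probability calculator": amplitudes become
# outcome probabilities via the Born rule, P(i) = |<i|psi>|^2.
psi = np.array([1.0, np.sqrt(2)]) / np.sqrt(3)   # a state in superposition

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]      # spin-z outcomes
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2),
           np.array([1.0, -1.0]) / np.sqrt(2)]              # incompatible spin-x outcomes

def born_probs(state, basis):
    """P(outcome i) = |<i|psi>|^2 for each basis vector |i>."""
    return [abs(np.vdot(b, state)) ** 2 for b in basis]

probs_z = born_probs(psi, z_basis)
probs_x = born_probs(psi, x_basis)
print([round(p, 3) for p in probs_z])   # [0.333, 0.667]
print([round(p, 3) for p in probs_x])   # [0.971, 0.029]
```

The interpretive dispute is only over what `psi` *is*--the atom's objective state, or merely a bookkeeping device for predictions; the arithmetic above is common ground.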

The problem is that any large classical system that could count as a measuring device is built out of atoms. So if there is no objective state of any single atom at some point in time, it's hard for me to imagine how there could be an objective state for the entire system. I mean, I understand how the properties of large collections of things can differ greatly from the properties of small things. Many collective properties "emerge" after a certain point as a system gets large enough. But this is something much more radical. It's the statement that a large system has properties whereas a small thing like an atom does not. It has something else, something you might call "proto-properties". (Interestingly, there is something in this similar to the idea of proto-panpsychism, where things like atoms don't have mental properties, just proto-mental properties.) These proto-properties don't have the characteristics of what we would ordinarily call a property of something. You can't talk about them with the usual classical logic we use. You instead have to use quantum logic, where things like the distributive law don't actually work, and where asking "A and then B" can differ from asking "B and then A".

It's an extremely radical proposal, one that requires throwing out the window the entire system of language we've used for millennia to describe the world. Whether this has been successful I can't tell, but when I compare it to the Many Worlds Interpretation, Many Worlds seems so straightforward and easy to understand, without any of this extremely bizarre and counterintuitive extra baggage: it just makes sense. Yes, there may be a few unsolved issues with the foundations of Many Worlds (although this is debatable), but even so they don't seem anything like the daunting challenges that making sense of Copenhagen seems to involve.

So mainly for this reason, even though many of the brightest physicists I know think Copenhagen is the best way to interpret quantum mechanics, I still don't really buy it. Although I will continue to read more and try to understand more. And I will continue to look for problems that may be lurking in Many Worlds which may not be obvious.
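The failure of the distributive law in quantum logic can actually be checked numerically. In the lattice of subspaces, "or" is the closed span and "and" is the intersection; the rays below (spin-x up versus the two spin-z states) are the textbook counterexample, and the helper functions are my own minimal sketch, working purely with subspace dimensions:

```python
import numpy as np

def dim(M):
    """Dimension of the span of M's columns."""
    return int(np.linalg.matrix_rank(M))

def join(U, V):
    """Lattice join (quantum 'or'): the span of both subspaces together."""
    return np.hstack([U, V])

def meet_dim(U, V):
    """dim(U ∧ V), via Grassmann's identity dim(U) + dim(V) - dim(U ∨ V)."""
    return dim(U) + dim(V) - dim(join(U, V))

# Three rays (1-d subspaces) of the qubit state space:
A = np.array([[1.0], [1.0]]) / np.sqrt(2)   # spin-x "up"
B = np.array([[1.0], [0.0]])                # spin-z "up"
C = np.array([[0.0], [1.0]])                # spin-z "down"

# A ∧ (B ∨ C): B ∨ C is the whole space, so the meet is A itself (dim 1).
lhs = meet_dim(A, join(B, C))

# (A ∧ B) ∨ (A ∧ C): both meets are the zero subspace, so their join
# is the zero subspace too (dim 0 + 0 = 0).
rhs = meet_dim(A, B) + meet_dim(A, C)

print(lhs, rhs)   # 1 0 -> A ∧ (B ∨ C) ≠ (A ∧ B) ∨ (A ∧ C)
```

Classically (for subsets of a phase space), the two sides would always agree; for Hilbert-space subspaces they don't, which is exactly the sense in which the "proto-properties" of an atom refuse to obey ordinary logic.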

To be continued...

So for the past couple weeks I've been brainstorming about different ideas and topics I might want to cover. I realized that there is a huge connected set of issues that I'm maximally interested in, but they span so many different related topics that there's no way I could even fit them all into an hour talk. So I'll have to significantly narrow it down. All of them are in some way related to either philosophy of science, philosophy of quantum mechanics, philosophy of mind, philosophy of mathematics, or epistemology. It's kind of the intersection of all of them that interests me, but I've been having trouble isolating one without feeling like the others are too crucial for understanding any of them.

Since physics is the area where most of my expertise lies, my initial thought was to give an overview of the main philosophical lessons one learns in the course of learning more and more about the physical structure of the world. A lot of naive common sense notions get replaced by other notions, and your whole picture of reality starts to become a bit different. Over time you even start to forget how most people think about the world. My first thought for a good title was "materialism": I could sum up what the ancient view of materialism was, how that view has changed over the course of the history of physics, and finish with the current status of materialism.

That status is in some ways very ironic. On the one hand, the evidential support for some very minimal kind of physicalism/naturalism has grown extremely strong. On the other hand, the physical structure of the world as uncovered by scientific investigation has turned out to be so much weirder than what the original "materialists" envisioned that "materialism" seems like the wrong word for it, and now conveys something pretty different from what was originally meant by it, or from what an ordinary person unfamiliar with any of this stuff might imagine. In some ways, it's clear that the original doctrine of materialism was completely wrong, but compared to any of the early theories that predated it or were seen as valid alternatives to it (idealism, dualism), it still seems the closest to the truth. But who knows? My views on this have evolved within the past few years, and I might be willing to admit now that some aspects of idealism or dualism might need to be reincorporated into it. Or maybe neutral monism, or something like "mathematicism", is a better term than materialism.

But when I started exploring the main issues, and reading up on this again, I realized that the debate over physicalism/materialism is not really at the heart of what I wanted to cover. What I'm really more interested in is the debate between instrumentalism and scientific realism. The modern debate over materialism (which has by now mostly been renamed physicalism since it's been obvious from physics for a long time that most of the world is not "material" in any real sense--matter is just one form of energy/information and a lot of the world isn't in the form of matter at all) is within the province of philosophy of mind. The realism vs instrumentalism debate is instead within the province of philosophy of science and its most up to date form is within philosophy of quantum mechanics, where it takes shape in the debate between the Copenhagen Interpretation and the Many Worlds Interpretation.

The modern physicalism debate centers on whether consciousness can be reduced to--or eliminated in favor of--physical brain states. This is something I have my own opinions on, but which I'm far from an expert on. Whereas the debate between the scientific realists and instrumentalists is closer to my expertise since it's a debate about whether the theoretical constructs which physicists come up with (particles, fields, strings, branes, the quantum wavefunction, etc.) can be said to be "real" in any sense or if they are simply convenient fictions used to aid us in the practical business of predicting the outcomes of future experiments. Phrased in another way, the debate is over whether science has anything to say about what the world "is", like what it is ultimately made of, or if instead it only gives us a useful oracle for predicting future experiences. An instrumentalist would say that science cannot tell us about what the world beyond our senses "is", and a hardline instrumentalist might even argue that there is no fact of the matter at all about what the world is, that any such questions are just meaningless "metaphysical" nonsense.

I've always been a bit more on the side of scientific realism in this debate; however, I acknowledge that there are many valid points on the instrumentalist side. In fact, I think the real answer is somewhere between pure realism and pure instrumentalism; the most accurate view of what science is and what it can say, I believe, must incorporate some aspects of both.

The most extreme form of instrumentalism was logical positivism. Actually, I'm still unclear on this, but it seems to me that instrumentalism and positivism mean almost the same thing and their histories are very intertwined. Instrumentalism seems to have begun in the late 19th century with Pierre Duhem, a French physicist and philosopher of science. In the early 20th century it became more extreme and more or less morphed into logical positivism, which Niels Bohr and other founders of quantum physics used to interpret quantum mechanics, arriving at the Copenhagen Interpretation. Bohr was both an instrumentalist and a logical positivist; so was Werner Heisenberg, although he read and wrote less about philosophy than Bohr did. Later, Quine revived instrumentalism, and pragmatism also became intertwined with it. It's kind of ironic that the modern revival of instrumentalism was mostly Quine's doing: I don't think of myself as an instrumentalist, but Quine is probably my biggest hero in philosophy--mostly because of his explanation of why there is no meaningful distinction between analytic and synthetic truths.

Today in physics, the Copenhagen Interpretation of quantum mechanics, and instrumentalism in general, remains the dominant ideology. But in philosophy, as I understand it, logical positivism has been completely discredited and realism has made a comeback, while instrumentalism in a milder form remains acceptable to many philosophers but is probably overall less popular than realism (although I'm unsure of this last part and would love to find out what exactly the breakdown is).

My own belief? Either some very mild form of instrumentalism is right, or some mild form of realism, or some combination. Surely the extreme forms of instrumentalism are wrong. Actually, I tend to suspect the truth lies somewhat more toward instrumentalism than philosophers realize, but somewhat more toward realism than most physicists realize. It is quite interesting, though: you would think physicists, if anything, would be biased toward thinking that what they do says something meaningful about the world--but it tends to be the opposite. Maybe by putting it that way I'm making a straw man out of instrumentalism, though?

To be continued...

I was asked to evaluate how this could be made realistic from a physics perspective, and we had many conversations about it. What my coworker pointed out, which sounds right, is that it seems like communicating between different branches of the multiverse would have to involve some kind of non-linear modifications of quantum mechanics. This led us to read up a little on how realistic such modifications would be. Most physicists assume that quantum mechanics is an exact description of the world, but many are open to the possibility that there are slight modifications to it at the scale of quantum gravity which haven't been detected yet. Unfortunately, every time someone has explored this possibility theoretically, they tend to be led to the conclusion that any kind of non-linear modification, no matter how slight, tends to lead to problems that make the whole theory inconsistent and incompatible with other more sacred laws of physics such as thermodynamics.

My advisor's advisor was one who went down this path and eventually concluded it was probably a blind alley.

It appears that there are a lot of connected things that happen when you monkey with the laws of physics as we know them. For example, if you add non-linear modifications to quantum mechanics, you tend to violate the 2nd law of thermodynamics. In creating a lot of negative energy, you also tend to violate the 2nd law of thermodynamics. And in creating a wormhole, you tend to create the possibility for closed timelike curves. David Deutsch and others have analyzed what might happen theoretically if closed timelike curves (CTCs) were possible, and the conclusion is that you'd be able to solve NP-complete problems in polynomial time (indeed, Aaronson and Watrous later showed that in Deutsch's model, both classical and quantum computers with CTCs could solve anything in PSPACE). Closed timelike curves also violate the 2nd law of thermodynamics, because entropy cannot always increase within a closed loop of time. Either entropy would have to remain constant throughout the whole loop, or increase for a while and then suddenly decrease. It's like one of those staircases from an Escher painting: the staircase that always goes up cannot connect to itself in a circle. Either it doesn't go up, or it goes up and then comes back down. The same with entropy.
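Deutsch's analysis of CTCs rests on a consistency condition: the state a qubit carries around the loop must be a fixed point of one trip through the circuit. A minimal numerical sketch, assuming Deutsch's model and using a "grandfather paradox" circuit of my own choosing (the CTC qubit is copied onto a fresh system qubit, then negated):

```python
import numpy as np

# Deutsch's consistency condition: rho = Tr_sys[ U (rho_sys ⊗ rho) U† ].
# For the grandfather-paradox circuit below, Deutsch's resolution is the
# maximally mixed state I/2 -- the unique fixed point of the induced channel.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
# CNOT with the CTC qubit (second factor) as control, system (first) as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]], dtype=complex)
U = np.kron(I2, X) @ CNOT                             # copy, then negate the CTC qubit
rho_sys = np.array([[1, 0], [0, 0]], dtype=complex)   # fresh system qubit in |0><0|

def channel(rho_ctc):
    """Send rho_ctc around the loop once and trace out the system qubit."""
    joint = U @ np.kron(rho_sys, rho_ctc) @ U.conj().T
    # Partial trace over the first (system) qubit
    return np.einsum('abac->bc', joint.reshape(2, 2, 2, 2))

# Find the fixed point: write the channel as a 4x4 matrix acting on
# vectorized density matrices and take its eigenvalue-1 eigenvector.
E = [np.zeros((2, 2), dtype=complex) for _ in range(4)]
for k in range(4):
    E[k][divmod(k, 2)] = 1
M = np.column_stack([channel(Ek).reshape(-1) for Ek in E])
vals, vecs = np.linalg.eig(M)
rho = vecs[:, np.argmin(np.abs(vals - 1))].reshape(2, 2)
rho = rho / np.trace(rho)
print(np.round(rho.real, 3))   # the maximally mixed state I/2
```

The paradox ("the bit that emerges is the negation of the bit that went in") has no consistent deterministic solution, but the mixed state I/2 satisfies the loop exactly--which is the move that also gives Deutsch-model CTCs their strange computational power.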

So many things are connected here. Non-linear modifications of quantum mechanics, negative energy, perpetual motion, antigravity, time travel, traversable wormholes, and P=NP. The more I read about these (and especially when I read Scott Aaronson's stuff) the more it seems like either you have to accept all of them or none of them.

There are actually multiple connections I've come across between computational complexity and wormholes, not just via the connection between CTC's and P=NP. For example, there is the ER=EPR conjecture, a very exciting proposal by two of the world's greatest living theoretical physicists, Leonard Susskind and Juan Maldacena. They have found a possible way in which wormholes are the same thing as quantum entanglement. Again, I don't have time to delve into the details, but this has to do with the black hole firewall paradox. Many physicists have been worried that if there are no modifications to quantum mechanics (i.e., information is never lost in black holes), then this would imply the existence of "black hole firewalls": an infalling observer would hit a wall of flame at the event horizon. This violates a central principle of general relativity known as "the equivalence principle", which implies that a freely falling observer should notice nothing special, locally, when crossing the horizon.

But what Susskind says is that maybe firewalls don't actually form at the horizon, and aren't needed to resolve the information paradox. Instead, the explanation for how things like the no-cloning theorem survive when you apply quantum mechanics to a black hole is that the interior of the black hole is protected by an "armor of computational complexity" (as Scott Aaronson puts it). You could try to send messages from the outside to the interior non-locally via quantum entanglement (or equivalently, through a traversable wormhole), but doing so would require solving a computational problem at least as hard as the hardest problems in the complexity class known as QSZK (Quantum Statistical Zero Knowledge). If I understand correctly, the only reason you cannot send such a message is that even quantum computers are believed to be unable to solve such problems efficiently.

I mentioned in my previous post that while it seems crystal clear that there's no way an advanced civilization could ever build a macroscopic wormhole that something the size of a human could pass through, it's a lot less clear why they couldn't build a microscopic traversable wormhole and use it to send information faster than light. If they could do that, then they could also create closed timelike curves and hence solve NP-complete problems in polynomial time. So maybe the reason why they couldn't do it is also related to computational complexity. Maybe it's the same general reason Susskind suggests in the context of black hole physics: somehow, computational complexity prohibits the transmission of meaningful information through such a wormhole, even though it would otherwise appear to be possible to build one.

Shortly after writing my last entry on this, I discovered an interesting recent paper from May 2014 on traversable wormholes by a physicist at Cambridge. He wrote the paper in an attempt to construct a stable wormhole geometry using the throat of the wormhole itself to generate the negative energy required to stabilize it. He ended up finding that it was not possible to completely stabilize it, at least for the particular parameters he was using. But he argues that even though it isn't stable, it would collapse slowly enough that a beam of light would still have a chance to pass through before it completely collapsed. So if you could somehow construct that geometry (a daunting task), maybe it could be considered "traversable" in the sense that light could temporarily pass through it. He also speculates that other, less symmetric geometries might be fully stabilizable, but this seems like wishful thinking to me. I find the possibility of some kind of temporary closed timelike loop while an unstable microscopic wormhole is collapsing very intriguing, and unlike the large wormholes of Interstellar it's not something I would completely rule out as a possibility. However, again... if you accept that, then it seems like you'd have to accept all of the above weird violations of physics, including P=NP. And to me, it seems more likely that somehow, this armor of computational complexity Susskind and Aaronson are talking about comes into play and stops the beam of light from making it all the way through, or at least stops there from being any meaningful information encoded in the light. It occurs to me that this may be exactly what Hawking had in mind with his Chronology Protection Conjecture.

The only somewhat serious possibility I've left out here is if somehow, Kip Thorne's original conclusion that traversable wormholes necessarily imply the ability to build time machines is flawed. If that's the case, then I will admit there is a real possibility for faster than light communication in the future; and then the armor of computational complexity, or the chronology protection conjecture, or whatever you want to call it, would only come into play when you tried to make the wormhole into a time machine. This seems far fetched to me, but less far fetched than the idea that P=NP, perpetual motion machines are possible, and the 2nd law of thermodynamics is wrong, all of which I think would need to be true in order to have an actual microscopic wormhole that could be made into a closed timelike curve.

Recently, some friends at work and I have been discussing the possibility of making a low budget sci-fi film related to the Many Worlds Interpretation of quantum mechanics. This seemed like a pretty independent topic, but in working through the physics issues behind what we envision the plot of our film to be, there have been some crossovers, and that discussion has given me some new and interesting thoughts about why wormholes should be impossible. I've also learned some new things in the course of reading more papers on wormholes while trying to get the details right for this part V post.

First, there's a pretty good popular-level reference online which summarizes most of what I've already discussed plus a few important other things about wormholes. It comes from a Scientific American article written by Ford and Roman in 2000:

http://www.bibliotecapleyades.net/cienc

(A good bit of what I wrote in my previous posts was based loosely on what I found in Thomas Roman's 2004 review of this subject: http://arxiv.org/abs/gr-qc/0409090 )

I said that I wanted to try to give some examples of the kinds of restrictions that the QEI's, and what we know about the Casimir Effect, place on the construction of wormholes. Rather than doing the work myself, I'll just quote from the SciAm article above, since they provide some numbers:

*When applied to wormholes and warp drives, the quantum inequalities typically imply that such structures must either be limited to submicroscopic sizes, or if they are macroscopic the negative energy must be confined to incredibly thin bands. In 1996 we showed that a submicroscopic wormhole would have a throat radius of no more than about 10^{-32} meter. This is only slightly larger than the Planck length, 10^{-35} meter, the smallest distance that has definite meaning. We found that it is possible to have models of wormholes of macroscopic size but only at the price of confining the negative energy to an extremely thin band around the throat. For example, in one model a throat radius of 1 meter requires the negative energy to be a band no thicker than 10^{-21} meter, a millionth the size of a proton.*

*Visser has estimated that the negative energy required for this size of wormhole has a magnitude equivalent to the total energy generated by 10 billion stars in one year. The situation does not improve much for larger wormholes. For the same model, the maximum allowed thickness of the negative energy band is proportional to the cube root of the throat radius. Even if the throat radius is increased to a size of one light-year, the negative energy must still be confined to a region smaller than a proton radius, and the total amount required increases linearly with the throat size.*

*It seems that wormhole engineers face daunting problems.*
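To make the scalings in the quote concrete, here's a quick sanity check in Python. The 1-meter baseline numbers and the cube-root/linear scalings come from the quote; the assumption that the "10 billion stars" are Sun-like is mine:

```python
# Sanity check of the Ford-Roman scalings quoted above.
# From the quote: a 1 m throat radius needs its negative energy confined
# to a band no thicker than 1e-21 m, with total magnitude comparable to
# the annual output of 10 billion stars.  Band thickness scales like the
# cube root of the throat radius; total energy scales linearly with it.

LIGHT_YEAR = 9.461e15        # meters
PROTON_RADIUS = 8.4e-16      # meters (approximate charge radius)
SOLAR_LUMINOSITY = 3.828e26  # watts (assumes Sun-like stars -- my assumption)
YEAR = 3.156e7               # seconds

r0, band0 = 1.0, 1e-21                 # the quoted 1 m baseline
E0 = 1e10 * SOLAR_LUMINOSITY * YEAR    # ~1e44 J for the 1 m throat

r = LIGHT_YEAR
band = band0 * (r / r0) ** (1 / 3)     # cube-root scaling from the quote
E = E0 * (r / r0)                      # linear scaling from the quote

print(f"band thickness at 1 light-year: {band:.1e} m")   # ~2e-16 m
print(f"thinner than a proton? {band < PROTON_RADIUS}")  # True
print(f"total negative energy needed: {E:.1e} J")        # ~1e60 J
```

A one-light-year throat indeed forces the band below a proton radius, matching the quote, and the total negative energy comes out around 10^60 J (for comparison, the Sun's entire mass-energy is about 2x10^47 J).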

So hopefully this gives you a sense of what an advanced civilization would need to do in order to make a wormhole like the one depicted in Interstellar, assuming it were even possible. They would need to harness amounts of negative energy on par with the total (positive) energy output of billions of stars, like the energy of an entire galaxy. On top of that, they would need to concentrate all of that energy into an extremely tiny space, much smaller than a single proton. This is obviously something that, if it worked at all, would require a very "post singularity" civilization. But there's a catch-22 here: it's hard to imagine a civilization that could harness all of the energy in an entire galaxy (even if we were talking about positive energy, rather than a kind of energy not known to exist in such quantities) without imagining that it had first colonized a galaxy. But if they haven't been able to build a wormhole yet (the most plausible way anyone has come up with for traveling faster than light), then how would they have colonized a galaxy? Even if they had lifespans long enough to survive the hundreds of thousands of years a roundtrip journey like that would take, they wouldn't be able to communicate with their home planet, or with other pioneers exploring other regions of the galaxy, while traveling. But all of this is pure science fiction anyway, since we're not talking about positive energy; we're talking about negative energy, which, as I've explained in previous posts, can only exist momentarily, in very tiny quantities, microscopically.

This leads into my next important point: why couldn't an advanced civilization figure out a way to somehow mine negative energy, picking up tiny quantities of it here and there from different microscopic effects, and store or concentrate it somehow, building up a vast resource of negative energy they could use to build wormholes?

There are a couple of reasons they can't do that. One, of course, is that doing so would violate the quantum energy inequalities. But given that the full extent of these inequalities is still being worked out, and we don't know exactly where and when they apply (they have been proven for flat space and for various curved spaces, but with extra dimensions or other exotic modifications of gravity there are still cases where they remain unproven), is there any more solid reason to think this couldn't be done? The answer is yes. There's a big reason which I left out of previous posts but which is highlighted in this SciAm article, and which I've seen referenced in a few other places: violating the QEI's would also allow you to violate the 2nd law of thermodynamics, one of the most sacrosanct laws of physics ever discovered, even more sacred (I dare say) than the absoluteness of the speed of light.

One of the unique things about gravity, as opposed to other forces, is that as far as we know it only acts attractively, never repulsively. This is because the charge associated with gravity (mass/energy) is believed to be always positive. (With other forces, such as electromagnetism, the charge--electrical charge--can be positive or negative, so you can get either attraction or repulsion.) A repulsive force, or "antigravity", is what's needed to stabilize a wormhole; that's why you need negative energy. And because of quantum uncertainty, the vacuum does contain a combination of positive and negative energy fluctuations, which average out to zero (or very slightly above zero) over the long run or over large regions of space. But "long run" and "large regions of space" here mean compared to the Planck time and the Planck length, which are both very, very tiny.

Because the average has to be zero, if you took away the negative energy and beamed it off into deep space, you would be left with a bunch of positive energy. In other words, you would have extracted positive energy out of the vacuum that you could then use to do useful work. This would be a free and infinite energy source, the holy grail of the many crackpots who have made it their life's work to try to build perpetual motion machines (and who often falsely claim to have succeeded). Negative energy, antigravity, perpetual motion, and breaking the 2nd law of thermodynamics (which implies that you must expend energy to do useful work; you can't do it for free) are all directly connected to each other. The physics community's firm disbelief in all of this is why the people who claim to have harnessed "zero point energy" are ignored.

The 2nd law of thermodynamics should almost not be regarded as a law of physics at all, but as a law of mathematics/statistics. It's pretty much a direct consequence of statistics, with only very minimal mathematical assumptions going in (such as ergodicity). If it were broken, it wouldn't just mean the laws of physics don't work; it would mean statistics and basic mathematics don't work.

In the next part, I'd like to connect up some of the issues here with the issues we've been discussing in the development of our low budget sci-fi film. As a teaser, I will say that a big thing I realized after writing all of this is that I've been thinking of "traversable wormhole" the whole time as mostly meaning "something big enough that a human being could pass through it". This is the type of wormhole portrayed in Interstellar. And as I hope you'll agree after reading this far, it seems like one can say with a very high degree of certainty that it would be impossible, even for an infinitely advanced post singularity civilization. However, there is another class of traversable wormholes I wasn't thinking about much when I started writing this series. And that's microscopic wormholes that could allow a single particle or some other small piece of matter or information to pass through, from one point in space to a very distant point in space. If this were possible, then you would have faster than light communication but not faster than light travel (unless you could scan every atom of the body, convert it to pure information, beam it through, and reconstruct the body--similar to Star Trek teleportation). It would still give rise to all of the same paradoxes of time travel, but it seems much more difficult to rule out just based on the physical restrictions on negative energy densities. I think it's pretty likely that this type of traversable wormhole is also impossible to build, although in focusing on the big kind of wormhole featured in Interstellar, I was missing what is surely the more interesting question (of how or why we can't build a microscopic traversable wormhole that could be used for communication). This will get us into issues of computational complexity, revisiting Hawking's chronology protection conjecture, and seeing the 2nd law of thermodynamics come up again in a different way.

The local energy conditions originally proposed in GR apply to every point in spacetime. Since general relativity is a theory of the large-scale structure of the universe, the definition of a "point" in spacetime can be rather loose. For the purposes of cosmology, treating a point as a ball of 1 km radius is plenty accurate; you won't find any significant curvature of spacetime on scales smaller than that, so whether it's exactly 0 in size or 1 km in size doesn't matter. But for quantum mechanics it matters a lot, because quantum mechanics is a theory of the very small scale structure of the universe. There, the difference between 0 and 1 km is huge, so huge that even something the size of a millimeter is already considered macroscopic.

So if you're going to ask whether quantum field theory respects the energy conditions proposed in general relativity, you have to make the definitions of these energy conditions more precise. The question isn't "can the energy be negative at a single point in spacetime?" but "can the average energy be negative in some macroscopic region of space, over some period of time long enough for anyone to notice?" The actual definition of the AWEC (averaged weak energy condition) is: the energy density averaged along any timelike trajectory through spacetime is always zero or positive. A timelike trajectory is the path that a real observer, traveling at less than the speed of light, could follow. In the reference frame of this observer, the AWEC just means the energy at a single point in space, averaged over all time. The ANEC (averaged null energy condition) is similar but for "null" trajectories through spacetime. Null trajectories are the paths that photons and other massless particles follow--all particles that move at the speed of light. A real observer could not follow such a trajectory, but you can still ask what the energy density averaged along it would be.
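In symbols, as I understand the standard definitions (a sketch; here \(\langle T_{\mu\nu} \rangle\) is the expectation value of the stress-energy tensor):

```latex
% AWEC: average along any timelike worldline with tangent u^mu,
% parameterized by proper time tau:
\int_{-\infty}^{\infty} \langle T_{\mu\nu} \rangle \, u^{\mu} u^{\nu} \, d\tau \;\geq\; 0

% ANEC: average along any null geodesic with tangent k^mu,
% parameterized by an affine parameter lambda:
\int_{-\infty}^{\infty} \langle T_{\mu\nu} \rangle \, k^{\mu} k^{\nu} \, d\lambda \;\geq\; 0
```

The integrand in each case is just the energy density that the corresponding (real or light-speed) observer would measure along their path.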

From what I understand, the quantum energy inequalities are actually a bit stronger than these averaged energy conditions. The AWEC basically says that if there is a negative energy spike somewhere, then eventually there has to be a positive energy spike that cancels it out. The QEI's say that not only does this have to be true, but the positive spike has to come very soon after the negative spike--the larger the spikes are, the sooner.
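The prototypical example, as I understand it, is Ford and Roman's inequality for a free massless scalar field in flat 4D spacetime: the energy density sampled with a Lorentzian window of width tau can't dip below -3/(32 pi^2 tau^4) in natural units (hbar = c = 1). A tiny sketch of how the bound depends on the sampling time:

```python
import math

# Ford-Roman quantum energy inequality for a free massless scalar field
# in 4D flat spacetime (natural units, hbar = c = 1): the energy density
# averaged against a Lorentzian sampling function of width tau obeys
#     rho_sampled >= -3 / (32 * pi**2 * tau**4)
# So deeper negative-energy dips are allowed only over shorter times.

def qei_bound(tau):
    """Most negative sampled energy density allowed over timescale tau."""
    return -3.0 / (32.0 * math.pi ** 2 * tau ** 4)

for tau in (0.5, 1.0, 2.0):
    print(f"tau = {tau}: bound = {qei_bound(tau):.4e}")

# Doubling the sampling time shrinks the allowed dip by a factor of 16:
ratio = qei_bound(1.0) / qei_bound(2.0)
print(ratio)  # 16.0 (up to rounding)
```

This is exactly the "quantum interest" behavior described above: a large negative spike is only permitted if it is repaid, quickly, on a timescale that shrinks like the fourth root of the spike's depth.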

However, you may notice that the QEI's (and the averaged energy conditions) just refer to averaging over time. What about space? Personally, I don't fully understand why Kip Thorne and others focused on whether the average over time is violated but didn't seem to care about the average over space. Because the average over space seems important for constructing wormholes too--if you can't generate negative energy more than a few Planck lengths in width, then how would you ever expect to get enough macroscopic negative energy to support and stabilize a wormhole that someone could actually travel through?

I haven't mentioned the Casimir Effect yet, which is a big omission as it's one of the first things people will cite as soon as you ask them how they think someone could possibly build a traversable wormhole. Do the quantum inequalities apply to the Casimir Effect? Yes and no.

As I understand them, the quantum inequalities don't actually limit the absolute energy density; they limit the difference between the energy density and the vacuum energy density. Ordinarily, the vacuum energy density is zero or very close to it. (It's actually very slightly positive because of dark energy, also known as the cosmological constant, but this is so small it doesn't really matter for our purposes.) The vacuum energy is pretty much the same everywhere in the universe on macroscopic scales. So ordinarily, if a quantum energy inequality tells you that you can't have an energy density less than minus (some extremely small number), this also places a limit on the absolute energy density. But this is not true in the case of the Casimir Effect, because the Casimir Effect lowers the vacuum energy in a very thin region of space below its normal value. This lowered (slightly negative) value of the energy can persist for as long as you want in time. But energy fluctuations below that slightly lowered value are still limited by the QEI's.

This seems like really good news for anyone hoping to build a traversable wormhole--it's a way of getting around the quantum energy inequalities, as they are usually formulated. However, if you look at how the Casimir Effect actually works you see a very similar limitation on the negative energy density--it's just that it is limited in space instead of limited in time.

The Casimir Effect is something that happens when you place 2 parallel plates extremely close to each other: it produces a very thin region of negative vacuum energy density in the space between the plates. To get any decent amount of negative energy, the plates have to be enormous but extremely close together. It's worth mentioning that this effect has also been explained without any reference to quantum field theory (as the relativistic version of the van der Waals force). As far as I understand, both explanations are valid; they are just two different ways of looking at the same effect. The fact that there is a valid description making no reference to quantum field theory lends weight to the conclusion that, despite the effect being a little weird, there is no way to use it to do very weird things you couldn't do classically, like build wormholes. However, I admit that I'm not sure what happens to the energy density in the relativistic van der Waals description--I'm not even sure there is a notion of vacuum energy in that way of looking at it, since vacuum energy is a concept that exists only in quantum field theory (it's the energy of the ground state of the quantum fields).
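To put numbers on "enormous but extremely close together", here's a rough estimate using the standard ideal-plate Casimir formula, which gives the energy density between perfectly conducting plates separated by a distance a as -pi^2 hbar c / (720 a^4). The plate area and separation here are my own illustrative choices:

```python
import math

# Standard Casimir result for two ideal parallel conducting plates
# separated by a distance a:
#     energy density between the plates = -pi^2 * hbar * c / (720 * a^4)
# Illustrative setup (my choice): 1 m^2 plates, 1 micron apart.

HBAR_C = 3.1615e-26   # hbar * c in J*m

def casimir_energy_density(a):
    """Negative vacuum energy density (J/m^3) between ideal plates."""
    return -math.pi ** 2 * HBAR_C / (720.0 * a ** 4)

a = 1e-6                        # 1 micron plate separation
area = 1.0                      # 1 m^2 plates
rho = casimir_energy_density(a)
E_total = rho * area * a        # total energy in the gap volume

print(f"energy density: {rho:.2e} J/m^3")         # ~ -4e-4 J/m^3
print(f"total negative energy: {E_total:.2e} J")  # ~ -4e-10 J
```

Less than a nanojoule of negative energy from square-meter plates a micron apart: compare that with the ~10^44 J that the quoted wormhole models call for.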

Most of what I've read on quantum inequalities has come from Ford and Roman. They seem very opposed to the idea that traversable wormholes would be possible. I've also read a bit by Matt Visser, who seems more open to the possibility. The three of them, as well as Thorne, Morris, and Hawking seem to be the most important people who have written papers on this subject. Most other people writing on it write just a few papers here or there, citing one of them. Visser, Ford, and Roman seem to have all dedicated most of their careers to understanding what the limits on negative energy densities are and what their implications are for potentially building wormholes, time machines, or other strange things (like naked singularities--"black holes" that don't have an event horizon).

There are a few more things I'd like to wrap up in the next (and I think--final) part. One is to give some examples of the known limitations on how small and how short lived these negative energy densities can be, and what size of wormhole that would allow you to build. Another is to mention Alcubierre drives (a concept very similar to a wormhole that has very similar limitations). Another is to try to enumerate which averaged energy conditions are known for sure to hold in quantum field theory and in which situations, comparing this with which conditions would need to be violated to make various kinds of wormholes. And finally, to try to come up with any remotely realistic scenario for how this might be possible and give a sense for the extremely ridiculous nature of things that an infinitely advanced civilization would need to be able to do in order for that to happen practically, from a technological perspective.