Eternal Inflation Predicts That Time Will End
http://arxiv.org/abs/1009.4698
"If you accept that the end of time is a real event that could happen to you, the change in odds is not surprising: although the coin is fair, some people who are put to sleep for a long time never wake up because
they run into the end of time first. So upon waking up and discovering that the world has not ended, it is more likely that you have slept for a short time. You have obtained additional information upon waking—the information that time has not stopped—and that changes the probabilities. However, if you refuse to believe that time can end, there is a contradiction. The odds cannot change unless you obtain additional information. But if all sleepers wake, then the fact that you woke up does not supply you with new information."
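The Bayesian update in that quote is easy to check numerically. Here is a toy sketch; the nap lengths and the end-of-time rate are my own made-up illustration numbers, not anything from the paper:

```python
import math
import random

# Toy version of the quoted sleeping experiment.  The nap lengths and the
# end-of-time rate below are made-up illustration numbers, not the paper's.
SHORT_NAP, LONG_NAP = 1.0, 10.0   # nap durations, arbitrary units
TAU = 20.0                        # assumed mean time until the end of time

rng = random.Random(0)
woke_total = woke_short = 0
for _ in range(1_000_000):
    nap = SHORT_NAP if rng.random() < 0.5 else LONG_NAP   # fair coin
    survives = rng.random() < math.exp(-nap / TAU)        # time didn't end mid-nap
    if survives:                   # condition on the observation "I woke up"
        woke_total += 1
        woke_short += (nap == SHORT_NAP)

# Unconditionally the coin is fair, but conditioned on waking up the short
# nap is more likely (about 0.61 with these particular numbers).
print("P(short nap | woke up) =", woke_short / woke_total)
```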
Lending some weight to this theory is the fact that both Peter Woit and Lubos Motl think the paper is complete nonsense (Motl's rant on it is particularly entertaining and vacuous), since both of them are idiots (although usually in polar opposite ways)!
I've always thought of Raphael Bousso as a better physicist than ex-physicist Lubos Motl was, and certainly better than mathematics lecturer Peter Woit. I suppose that doesn't guarantee the paper is right, though.
Normally I wouldn't pay much attention to a headline like this, but Bousso is actually someone I have a lot of respect for. And to add to that, I have found Bostrom's Doomsday Argument fairly persuasive in the past (at least more convincing than not), and it has a similar flavor... although Bostrom's argument was far less technical in nature. This may give a more solid, physical basis to the idea that being a good Bayesian entails believing we are all doomed.
Comments
I'll have to give the thing a thorough read and see if I'm missing something objection-worthy.
But perhaps the main thing that makes me at least not dismiss it is that all of the criticisms I've seen--including what you and
I've seen the "wait, this paper can't be that stupid, it's written by [famous smart person]" objection fail too many times to find it reliable.
And in fact, if you do the limiting procedure in the relevant way, then you won't get their result. Sure, if you take the limit by imposing a cutoff that has some geometric shape to it, and then expanding that shape, then you get their result. But if you take the limit by starting with a geometric cutoff, and then extending to the end of the life of any observers that are even partially included, then you get the result that no observer reaches the end of time.
Also, this main point doesn't really have anything to do with Bostrom's Doomsday Argument.
The passage you quote though does seem relevant. Just because all observers eventually wake doesn't mean that the fact of waking can't affect your probabilities. Think about "sleeping beauty" cases for an example where all observers wake, and yet it seems reasonable to say that waking actually gives beauty evidence that changes her beliefs about heads and tails.
Let probability on natural numbers be defined in the way they say (that is, for any statement about numbers, choose a cutoff, see what the probability is that the statement holds for a number, or tuple of numbers, chosen uniformly below that cutoff, and take the limit as the cutoff tends to infinity).
Then, it's a standard result that the probability that two randomly chosen natural numbers have no factor in common is 6/π^2. (It's a nice exercise to show this, using the fact that 1+1/4+1/9+1/16+...=π^2/6, and rewriting that sum as the product of (1+1/p^2+1/p^4+...) as p ranges over all primes.)
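As a quick sanity check on that cutoff definition (my own throwaway script, nothing from the thread), the frequency of coprime pairs below a cutoff does settle down near 6/π^2:

```python
import math
from math import gcd

# Cutoff-style check of the coprimality density: the fraction of pairs
# (a, b) with 1 <= a, b <= N and gcd(a, b) == 1 approaches 6/pi^2.
for N in (100, 500, 2000):
    coprime = sum(1 for a in range(1, N + 1)
                    for b in range(1, N + 1) if gcd(a, b) == 1)
    print(N, coprime / N**2)

print("6/pi^2 =", 6 / math.pi**2)   # about 0.6079
```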
However, in an argument parallel to theirs, it looks like we can say that the probability that a natural number is greater than every power of 2 is 1/2, since as n goes to infinity, the probability that a random number below n is greater than every power of 2 below n is close to 1/2, at least when n itself is a power of 2. (I probably have to be a bit more careful in setting that up.)
Of course, in that case, it's obvious how I'm messing things up by taking two nested quantifiers and making them depend on the same n, rather than making the second depend on the first. And their argument is making the same mistake, I think.
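To make that parallel concrete, here is a toy version of the mistake (my own code, not anything from the paper): tying both quantifiers to the same cutoff n gives a frequency near 1/2 whenever n is a power of 2 and something smaller in between, even though no fixed natural number is greater than every power of 2, so the honest probability is 0.

```python
# Count numbers below n that exceed every power of 2 below n, with both
# "below"s tied to the same cutoff n.  No fixed natural number exceeds
# *every* power of 2, so the honest probability is 0, yet the cutoff
# frequency refuses to settle down.
def frac_above_all_powers(n):
    top = 1
    while top * 2 < n:            # largest power of 2 strictly below n
        top *= 2
    return sum(1 for m in range(1, n) if m > top) / (n - 1)

for n in (2**10, 2**10 + 300, 3 * 2**10, 2**11, 2**12):
    print(n, round(frac_above_all_powers(n), 3))
# ~0.5 whenever n is a power of 2, smaller in between: the answer depends
# entirely on how the cutoff is taken, which is the alleged parallel with
# the paper's geometric cutoffs.
```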
The way I see it, the problem with their claim that the location of the cutoff doesn't actually matter is that for any given location of the cutoff, the "novel catastrophe" they claim exists is just hitting the cutoff. The fact that they can remove the cutoff while calculating a consistent probability doesn't really explain how they think a universe will actually end. Without an arbitrary choice of cutoff, there is no catastrophe; the universe won't know it's hit the cutoff, and will keep happily evolving forward in time.
So their semantics for probability must be messed up.
The fact that they can remove the cutoff while calculating a consistent probability doesn't really explain how they think a universe will actually end.
I'm not following you here. Would you agree that there is no way to define probability consistently in an infinite space without using a cutoff? By "removing the cutoff" are you talking about the few measures where they took a limit of it going to infinity and obtained a finite non-zero probability? I don't think that was central to what they were saying, as the main example most people care about is the causal patch cutoff, which makes the most physical sense.
They did provide a physical interpretation of how they actually think it will end. Namely, that observers get thermalized when they run into the event horizon. I'm a little unclear on what they're saying there though, since the usual interpretation is that the infalling observer only gets thermalized from the perspective of another observer far away... not from their own perspective, which I think is what they are saying will happen.
Not at all. Just take the Borel sigma-algebra of regions, and put any probability measure at all that you like on it. This gives you a consistent definition of probability.
In fact, talking about probabilities by using the limits of frequencies in finite cutoffs explicitly gives you something that doesn't satisfy the probability axioms. In particular, there are two events that have a well-defined probability, such that their conjunction doesn't have a well-defined probability. I'll give an example with the natural numbers instead of in physical space, but you can generalize it fairly straightforwardly, I would think.
If you define probability in the limit way, then the probability that a number is even is 1/2, since as n goes to infinity, the fraction of numbers up to n that are even goes to 1/2. (At every odd n there is a slight deviation, but the size of the deviation goes to 0.)
Now if you let S be the set of natural numbers that have an even number of bits in their binary expansion and are even, or an odd number of bits in their binary expansion and are odd, then the probability that a number is in S is also 1/2. (There are slight deviations, but again the size goes to 0 as n goes to infinity.)
But the probability that a number is even and in S is undefined. This is because when n is exactly 2^k with k even, the frequency is close to 1/3, and when n is exactly 2^k with k odd, the frequency is close to 1/6, and it fluctuates back and forth as n increases, so there is no well-defined probability.
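A quick numerical check of that claim (my own toy script), taking the cutoff at n = 2^k:

```python
# Under the cutoff definition, "even" and "in S" each have frequency ~1/2,
# but "even and in S" alternates between ~1/3 and ~1/6 depending on whether
# the cutoff 2^k has k even or odd.
def in_S(m):
    # even bit count and even value, or odd bit count and odd value
    return (m.bit_length() % 2 == 0) == (m % 2 == 0)

for k in range(10, 16):
    n = 2**k
    even = sum(1 for m in range(1, n + 1) if m % 2 == 0) / n
    s = sum(1 for m in range(1, n + 1) if in_S(m)) / n
    both = sum(1 for m in range(1, n + 1) if m % 2 == 0 and in_S(m)) / n
    print(f"n=2^{k}  P(even)={even:.3f}  P(S)={s:.3f}  P(even and S)={both:.3f}")
```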
Not at all. Just take the Borel sigma-algebra of regions, and put any probability measure at all that you like on it. This gives you a consistent definition of probability.
Ok, I guess I should have used the word "unique" instead of "consistently". It's the arbitrariness here that's the issue. I should have said that there is no unique way to define probability on an infinite space.
Yes, you can consistently do whatever you want. The problem is that no matter how you do it, it's going to be arbitrary, right? So you might as well do it in a way that makes some physical sense, like looking at how statistics works in a physically local region of spacetime, and then generalizing from there. If you can think of a way that makes more sense, or is somehow less arbitrary, then I'm sure lots of people would be interested.
Only for the causal patch measure, whereas their argument claims to be more general. And I really don't see any physical reason in the causal patch measure to think that these observers are thermalized; as you say, infalling observers shouldn't notice if they cross a horizon. So I find this less than convincing.
Aside from that very brief discussion of the causal patch measure where they at least try to connect with physics, the rest of the paper really seems to be saying something much more radical. They say very explicitly:
This is really, really radical. They're saying that spacetime could be extendable -- there could be no physical obstruction to just continuing to propagate further into the future -- but time could end anyway. It's hard to overstate what an extreme departure that is from everything we know about physics. And truly radical ideas in physics are, to zeroth order, always completely wrong.
It really surprises me that they didn't see this paper as an argument against geometric cutoffs, but instead chose to stick with believing in geometric cutoffs.
Incidentally, I don't think they take a definite stance on this one way or another. There is a whole research program involving lots of people (mostly on the West coast, I think) that work on these geometric cutoffs. Nobody has thought of a better way to do it. So if nothing else, I think this paper may be important in that it could mean that the whole research program of trying to figure out anything from eternal inflation needs to end. Of course, they do take a more conservative tone in not just assuming that it's the death knell of the whole idea. All they claim is that if you believe in geometric cutoffs (which a lot of people do) then you have to take these cutoff observers seriously--they're not just artifacts, but actual physical observers who will experience the end of time.
This seems like a straightforward sort of case that philosophers would normally use as an argument against the frequency interpretation of probability. Look at page 14. The two assumptions highlighted there are ones that philosophers of probability generally already think are false.
Hmmm. It's interesting that you read this paper as being "frequentist" because I am pretty sure Bousso would consider himself a Bayesian if asked about it. (I can't speak for the other authors--actually, I can't speak for him either, but I'm at least familiar enough with his work to know roughly how he thinks about probability, and I think it's pretty similar, if not identical to how *I* think about it, and I'd consider myself a Bayesian.) Yes, he does use the term "relative frequency" a couple times, although I think he is using it in a pretty Bayesian sense. At least that's my interpretation. Perhaps my concept of what the real difference is between these two is a bit off though.
This whole business of regulating eternal inflation has become somewhat of a cottage industry over the past 5 years or so. My impression is that the people who know what they're doing (like Bousso) tend to condition everything on observations and pay careful attention to the reference class of observers making the observations. Whereas there are a minority of people doing it the wrong way, where you come up with some arbitrary way of defining some kind of inherent probability for something happening in the universe. I guess I would have labeled the former as "Bayesian" and the latter as frequentist, although maybe that is not the right use of the terms.
In fact, I think a perfect example of the difference would be the Sleeping Beauty puzzle you linked to. The right answer is 1/3 because you have to condition on her waking up and observing the coin, not just what the coin is doing. This is exactly the approach that Bousso (and most other physicists at Berkeley and Stanford) takes.
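For what it's worth, that thirder count is easy to reproduce with a toy tally (my own sketch; heads gives one awakening, tails gives two):

```python
import random

# Toy Sleeping Beauty tally: heads -> one awakening, tails -> two awakenings.
# Counting over awakenings (i.e., conditioning on "I am awake right now"),
# the fraction of awakenings at which the coin shows heads comes out ~1/3.
rng = random.Random(1)
heads_awakenings = total_awakenings = 0
for _ in range(200_000):
    heads = rng.random() < 0.5
    n_wakes = 1 if heads else 2
    total_awakenings += n_wakes
    if heads:
        heads_awakenings += 1

print("P(heads | awake) =", heads_awakenings / total_awakenings)   # ~0.333
```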
Just because all observers eventually wake doesn't mean that the fact of waking can't affect your probabilities.
In the case they are talking about in the paper, though, it doesn't affect the probabilities. They are not waking up more than once; they're just waking up once, with the fair coin deciding how long they slept. This particular example is different from the Sleeping Beauty case, but they consistently use the same (correct) reasoning you'd use to get to 1/3 in the puzzle.
But if you take the limit by starting with a geometric cutoff, and then extending to the end of the life of any observers that are even partially included, then you get the result that no observer reaches the end of time.
I don't know what you mean by this. If you just extended it to the end of the life of any partially included observers, you'd end up with some ragged shape that's still finite. How does that tell you anything about the probability in the infinite case? They do mention that perhaps you could come up with some kind of cutoff that steers around observers, but then you'd have to pay particular attention to what the matter distribution is in the universe, which seems pretty weird.
If you just extended it to the end of the life of any partially included observers, you'd end up with some ragged shape that's still finite. How does that tell you anything about the probability in the infinite case?
I don't see how any such process involving finite shapes tells you anything about the probability in the infinite case. And they're drawing conclusions about the infinite case using exactly this sort of process involving finite shapes. And then they end up with the conclusion that there is a positive probability that the universe ceases to exist, even though there is no boundary to the universe.
In particular, the claim is that they're given by the limit of relative frequencies in certain finite regions as the size of those regions goes to infinity
Actually, I don't think this is a part of their central claim. They do take the limit of a couple of measures as the size goes to infinity, but that's sort of a side thing. For the main ones, like the causal patch (which I think is the one most people consider the "right" one to use, at least more so than any other), they don't ever take this limit, because you can't. They are just providing a way of dividing the space up into local "bins" and doing statistics on these bins, and assuming the statistics will be the same for the overall global space, if it makes sense to talk about such a space at all. You don't ever have to take a limit of bin size. I think this is a bit less arbitrary than doing the same thing for pure numbers, because spacetime automatically has this nice property of locality, which tells you what the right local bins are to use.
I don't see how any such process involving finite shapes tells you anything about the probability in the infinite case.
Just out of curiosity, do you know of any alternative? Is your view that probability is just meaningless for infinite sets, or is there a better way of doing this?