This has recently been the source of much frustration for some of my friends, as I’ve attempted to casually plow through a probabilistic argument that most people would instinctively recoil at. So, I thought, it might work better when written down. Of course, plenty of others have also written it down, including Brandon Carter – its originator – and Stephen Baxter – a science fiction author (who referred to it as the “Carter Catastrophe” in his novel *Time*).

The main premise of the argument is the Copernican principle. Copernicus, of course, heretically suggested that the Earth was not the centre of the universe. Thus, the Copernican principle is the idea that the circumstances of our existence are not special in any way (except insofar as they need to be special for us to exist in the first place).

We are now quite comfortable with the Copernican principle applied to space, but the doomsday argument applies it to time. Just as we do not live in any particularly special location, so we do not live at any particularly special moment. This is distorted by the fact that the human population has exploded in the last century, to the point where about 10% of all the humans ever to have lived (over the course of *Homo sapiens*’ ~200,000-year history) are still alive today. We can deal with this distortion by (conceptually) assigning a number to each human, in chronological order of birth, from 1 to *N* (where *N* is the total number of humans that have lived, are currently alive, or will ever be born in the future). We can then say, instead, that we are equally likely to have been assigned any number in that range.

In probability theory, this is equivalent to saying that you have been randomly selected from a uniform distribution. Yes, it must be *you* (the observer) and not someone else, because from your point of view you’re the only person who has a number selected from the *entire* range – past, present and future. *You* could have been assigned a number at any point along the human timeline (by the Copernican principle), but you still cannot observe the future, and so by selecting any other specific individual you’d automatically be restricting the range to the past and present. The number you’ve actually been assigned is something on the order of 60 billion (if we estimate that to be the total number of humans to have ever lived so far).

So where does that leave us? Well, in a uniform distribution, any randomly selected value is 95% likely to be in the final 95% of the range. If your randomly selected number is 60 billion, then it’s 95% likely that the total number of humans *to ever live* will be less than 60 billion × 20 = 1.2 trillion. Similarly, it’s 80% likely that the total will be less than 60 billion × 5 = 300 billion, and 50% likely that it will be less than 60 billion × 2 = 120 billion. Now, 50%, 20% and 5% probabilities do crop up, but we must draw the line at some point, because you cannot demand absolute certainty (or else science would be impossible).
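The bounds above follow from a one-line rearrangement of the uniform-distribution statement. A minimal sketch, using the ~60 billion birth-rank estimate from earlier:

```python
# A sketch of the doomsday-argument arithmetic. The only assumption is the
# argument's own: your birth rank n is a uniform draw from 1..N, with
# n ≈ 60 billion (rough estimate of humans born so far).
n = 60e9  # your birth rank

# With probability p, your rank falls in the final fraction p of the range,
# i.e. n > (1 - p) * N, which rearranges to N < n / (1 - p).
for p in (0.50, 0.80, 0.95):
    bound = n / (1 - p)
    print(f"{p:.0%} likely that N < {bound / 1e9:,.0f} billion")
```

Running this reproduces the 120 billion, 300 billion and 1.2 trillion figures.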

This should make us think. The doomsday argument doesn’t give an exact number, nor does it directly give us a time, but this can be estimated from trends in population growth. However, the prospect of a scenario in which humanity spreads out beyond the solar system and colonises the galaxy, to produce a population of countless trillions over tens of thousands or even millions of years, would seem vanishingly unlikely under this logic. Even the prospect that humanity will survive at roughly its current population on Earth for more than a few thousand years seems remote.
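To give a rough sense of the timescale, here is a back-of-envelope sketch (the numbers are my own illustrative assumptions, not part of the argument): take the 80% bound of ~300 billion total humans and assume births continue at roughly today’s rate of about 130 million per year.

```python
# Back-of-envelope timescale, assuming a constant birth rate.
# Both input figures are rough assumptions for illustration only.
total_bound = 300e9      # 80%-confidence upper bound on total humans
born_so_far = 60e9       # humans born to date (rough estimate)
births_per_year = 130e6  # approximate current global birth rate

years_left = (total_bound - born_so_far) / births_per_year
print(f"roughly {years_left:,.0f} years of births remaining")
```

Under those assumptions the 80% bound is exhausted in under two millennia, which is what motivates the "few thousand years" remark above.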

It’s also worth pointing out, as others have, that the doomsday argument is entirely independent of the mechanism by which humanity’s downfall might occur. That is, if you accept the argument, then there is nothing we can do to stop it.

Needless to say, the objections to this reasoning come thick and fast, especially if you bumble like I have through a hasty verbal explanation (hopefully I’ve been more accurate and articulate in this blog post). One should bear in mind that this isn’t simply some apocalyptic pronouncement from a random, unstable individual (it wasn’t my idea). This work was published some time ago, independently, by three physicists (Brandon Carter, J. Richard Gott and Holger Bech Nielsen) in peer-reviewed journals. That’s not to say it’s without fault, but given the level of scrutiny already applied, one might at least pause before dismissing it out of hand.

The objections I’ve heard (so far) to the doomsday argument usually fall along the following lines:

- Often they discard the notion that the observer is randomly selected, thus reaching a different (and trivial) conclusion. One can point out that there always has to be a human #1, and a human #2, and so on, and that this says nothing about the numbers that come after. However, in pointing this out, one is not randomly selecting those numbers, and random selection is the premise of the argument.
- They object that a sample size of one is useless. Indeed, in the normal course of scientific endeavour, a sample size of one *is* useless, but that’s just because in a practical setting we’re trying to achieve precision. If we’re just trying to make use of what we know, one sample is infinitely more useful than no samples at all. The doomsday argument does not at any point assume that its single randomly-selected number represents anything more than a single randomly-selected number. If we had *more* than one random sample, we’d be able to make a stronger case, but that does not imply there’s currently no case at all.
- Sometimes they object on the grounds of causality – that we simply can’t know the future. I think this is just a manifestation of personal incredulity. There is no physical law that says we cannot know the future, and here we’re not talking about some divine revelation or prophecy. We’re only talking about broad probabilistic statements about the future, and we make these all the time (meteorology, climatology, city planning, resource management, risk analysis, software development, etc. ad infinitum).

However, I’m sure that won’t be the end of it.

Having now actually read an explanation of this theory, I have a better understanding of the argument (and I have nothing better to do while trapped in the mag lab for seven hours anyhow). However, as you say – that isn’t the end of it.

The argument is actually a lot more generous than I originally believed it to be, but I remain unconvinced. The argument’s initial assumption is vacuous (i.e. devoid of significance). To say that there is a 95% probability that we are in the last 95% of any uniform interval is mathematically trivial, but without knowing anything about the total population we have no way of gauging what fraction of the way through it we are. For example, at any given time you could claim that there is a 95% probability that you are within the last 95% of the current hour, but that does not tell you what the time is. Similarly, if you are in a running race (whose length you don’t know), it’s perfectly valid to say that there is a 95% chance that you are in the last 95% of the race – but this still doesn’t tell you how far you have to go.

Probability theory is at its most useful when making predictions about a well-defined system. The result you get is highly dependent on the quality of information you input. A good way to make many physicists roll their eyes is to mention the Drake Equation. By the time you have entered all the variables (which includes numbers we have no reliable information on), the output has so much uncertainty that it is essentially meaningless.

Although it allows for an interesting philosophical debate (providing my brain with a few hours’ worth of exercise), I believe that the system we are studying (human population past, present and future) is simply too complex to make any kind of reliable prediction based on such simple input.

I see what you’re saying, and it would probably be a valid counter to some formulations of the argument.

The initial assumption is the Copernican principle, and I wouldn’t call it vacuous. It’s the Copernican principle that suggests the probability distribution – indeed, suggests the existence of a probability distribution. The statement that “any randomly selected value is 95% likely to be in the final 95% of the range” is trivial, but only once you know the distribution.

Also, let me make a subtle clarification: there is a 95% chance that you were *born* in the final 95% of all people. Your birth is a definable event, as opposed to “now”, which as you say is slippery and probably not amenable to statistical analysis. As in your analogies, there is no particular probability attached to simply existing among the final 95% of all people. It’s not obvious what that would actually mean.

However, given that your birth was a random selection from said probability distribution, we *can* estimate the distribution’s parameter. It’s a ghastly, piss-poor estimate, granted. It could easily be an order of magnitude off, but it is an estimate nonetheless. We have one degree of freedom – the bare minimum for estimating one parameter. Thus, I wouldn’t characterise the doomsday argument as a prediction, but more as a kind of fuzzy constraint. You’re right that no self-respecting scientist would accept such pitiful data for any practical purpose, but the context here is rather special. This is all the data we’re going to get, and the problem is interesting enough that we may as well use it.
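As a sanity check on the logic (rather than the data), the coverage claim can be simulated: pick a hypothetical total *N*, draw a birth rank uniformly, and count how often the 95% bound *N* < 20 × rank actually holds. A sketch, under the argument’s own uniform-selection assumption:

```python
import random

# Monte Carlo check of the 95% doomsday bound, assuming birth rank is
# drawn uniformly from 1..N (the argument's premise).
random.seed(0)
N = 1_000_000       # hypothetical true total number of humans
trials = 100_000
covered = 0
for _ in range(trials):
    rank = random.randint(1, N)   # "your" birth number
    if N < 20 * rank:             # does the 95% upper bound hold?
        covered += 1

print(covered / trials)  # close to 0.95
```

This only confirms that the bound is correct *given* the uniform-selection premise – which is, of course, exactly the premise your objection targets.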