Wednesday, November 16, 2005

Odds Holding Steady At One

Apparently this whole "Your Frame Of Reference Is Flawed Because You're You" thing is nothing new to anybody. The linked paper is about the flaws in the Doomsday Argument, a modification of which I presented in my first Odds Are One-themed post. The abstract contains a nice summary of many of the things I've tried to say on the subject:
For example, we have a tendency to infer non-randomness from apparent patterns in random events (witness the incorrigible optimists who spot trends in the spins of a roulette wheel or the ups and downs of the FT Share Index); at the same time, the history of statistics suggests that, when random samples are required, we often mistake the merely haphazard - or whatever happens to be near at hand - for the truly random.

In my continuing attempts to gauge my audience, I would guess that your eyes would glaze over were you to attempt to read the paper, or this, which is a pretty good explication of the Doomsday Argument, and which presents it in relation to the weak anthropic principle. It has been proved, after all, that you are Humanities-studying iPod owners (I mean, okay, I imagine Sam probably spent some hours of class time discussing the Doomsday Argument, but the rest of you, probably not so much).

What's interesting about The Doomsday Argument (and if you haven't clicked one of the links, the explanation I gave previously, while not quite the same, works fine: it seems like you'd be statistically more likely to be born in the latter 2/3rds of all humans who will ever be born than in the former 1/3, but if that were the case then humans would have to become extinct in a few hundred years) is that at first it's hard to understand why this is actually any sort of philosophical problem--it just seems like The Damned Lies of Statistics (which, as I will one day blog about, are not Damned Lies of Statistics; they are Damned Lies of Language. Many people are just calling them "lies" these days).

Then, after you've grokked the argument, it's equally hard to understand why your initial arguments against it don't quite work. The argument is based on the idea that the fact that You are Here, Now has some particular intrinsic meaning--or rather, that it doesn't, that you are a random sample from the grab bag labeled "all humans in history." This makes counter-arguments difficult, because You are, in fact, Here Now. Counter-arguments like, "Yeah, but the Doomsday Argument has been true for everyone who has ever lived or is currently alive," seem like they're putting you on an equal observational footing with every other person in the "experiment." They're not. If there are only ever going to be, say, 100 billion humans in the history of time, including humans 1 through 99,999,999,999 in your experiment doesn't give you any more information than you already had. You'd need the 124 billionth human in your sample to get any premise-shattering data, and you can't have him or her (or it). As soon as you state the terms of the argument, you put yourself at the argumentative "end of the line," as it were, and...okay, already your eyes are glazing over. You have no idea what I'm saying right now. I've completely lost you...uh...never mind.
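If you want to see the probabilistic machinery without the philosophy, the "random sample from the grab bag" idea can be turned into a toy Bayesian calculation. All the numbers here are made up purely for illustration:

```python
# Toy Bayesian version of the Doomsday Argument.
# Two hypotheses about the total number of humans who will ever live,
# with equal prior probability (all numbers are illustrative).
N_SHORT = 100e9    # "doom soon": 100 billion humans total
N_LONG = 10e12     # "doom late": 10 trillion humans total
rank = 60e9        # your birth rank: you are the 60 billionth human

# Under each hypothesis, a uniformly random rank r in [1, N] has
# likelihood 1/N (provided r <= N; otherwise the hypothesis is refuted).
like_short = (1 / N_SHORT) if rank <= N_SHORT else 0.0
like_long = (1 / N_LONG) if rank <= N_LONG else 0.0

# Equal priors, so the posterior is proportional to the likelihood.
post_short = like_short / (like_short + like_long)
print(f"P(doom soon | rank) = {post_short:.3f}")  # ~0.990
```

Observing a low birth rank shifts belief heavily toward the smaller total population, and that shift is the entire engine of the argument.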

What's also interesting about the Doomsday Argument is that I think problems like this are windows into new models. Whatever our current models for understanding probability and observation and, you know, being itself are, they can't quite handle this problem, which means some tear-down and rebuilding is indicated.

By the way, I stumbled upon all of this via this post, which I found through about Five Degrees of Blog Separation, a phenomenon with which, since my post on audience the other day, I have become obsessed. Interestingly, this guy thinks this paper solves the issue, which I don't think it does at all.


Porten said...

I stumbled across this blog while clicking the little randomizer up top (something I do when I want to remind myself how many languages I don’t speak). I was instantly intrigued by the idea of the Doomsday problem. When I read the bit about the Damned Lies of Language (which I prefer to call the Damned Lies of Perception), I was reminded of seeing the Monty Hall problem for the first time. I still can’t shake the feeling that the problem is a trick…the probability of your first choice being correct also goes up. That the math supports switching your choice seems like a problem of our conception of the situation. As soon as you are asked if you would like to change your guess, you are actually faced with a new, 50/50 choice: door number 1 or door number 2? Whether you stick with the original choice (by which I mean CHOOSE to stick with your original choice) or switch, your chances of winning are fifty percent. That keeping your original guess is not a new choice is merely an illusion. I’m probably wrong; I don’t really know much about this stuff.
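The 50/50 hunch is at least easy to put to the test. A quick simulation of the standard Monty Hall rules (an illustrative sketch, with the player always starting on door 0):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round: prize behind a random door, player picks door 0,
    host opens a non-prize door the player didn't pick."""
    prize = random.randrange(3)
    pick = 0
    # Host opens a door that is neither the player's pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

random.seed(0)
trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay wins: {stay:.3f}, switch wins: {swap:.3f}")
```

Over many trials, staying wins about a third of the time and switching about two-thirds, so the math does seem to hold up.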

Anyway, the Doomsday problem seems to make a similar presumption that messes with our conceptions (besides the presumption that the human population WILL end, or that it continues its path of exponential growth, etc.). The objection seems inherent in the “Yeah, but the Doomsday Argument has been true for everyone who has ever lived or is currently alive…” Again, presuming the constant geometric growth of humanity, the Doomsday model never predicts any change in the imminence of Doomsday, so it isn’t useful. Our observations neither tend to prove nor disprove the argument, right?

Transient Gadfly said...

Yeah, I think that's right. If humanity ended tomorrow due to nuclear holocaust or we were all Oryx-and-Crake'd, I think we'd agree that the Doomsday Argument didn't predict it (well, we wouldn't agree on much, because we'd all be gone). Nor, if humanity goes on to exist for another billion years after this argument has been made, would it actually prove that it was wrong. Which goes a long way towards illustrating that this argument doesn't prove at all what it seems to on the surface.

Anonymous said...

It's true that any one human in history could have made the argument, as could any human who will ever live. Of course, only 5% of all humans who will ever live would be wrong about their 95% confidence interval. All we can do is hope that we are 'special' in some way that makes us, coincidentally, fall in the first 5% of all humans. For example, one could argue that being in the reference classes 'humans before space exploration' or 'humans before counter-measures for doomsday were implemented' confers 'early adopter' characteristics on us. On the other hand, the reference classes 'WMD-era humans' and 'global-warming-era humans' suggest we have 'late adopter' characteristics. In the end we can probably say that there are unknown quantities which add uncertainty to the Doomsday Argument. Incidentally, from the Doomsday Argument and the anthropic principle we can also infer that there might not be additional technological civilizations on this planet after ours.
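The 5% bookkeeping is easy to confirm directly; a toy check with small, illustrative totals:

```python
# Whatever the total number of humans N turns out to be, the bet
# "my birth rank is NOT in the first 5% of all humans who will ever
# live" loses for exactly the first 5% of birth ranks.
for N in (100, 1_000, 10_000):
    wrong = sum(1 for rank in range(1, N + 1) if rank <= 0.05 * N)
    print(N, wrong / N)  # always 0.05
```

So the 95% confidence claim is self-consistent by construction: it is guaranteed to be right for 95% of whoever ends up existing.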

Anonymous said...

I guess every time the argument is wrong, it should count as evidence against itself :)

Ricardo Aler said...

OK, here goes my stab at the Doomsday Argument.

There are only two possible hypotheses:

ED: Early doom: doom happens after 100 million humans have been born
LD: Late doom: doom happens after 800 million humans have been born

Finding myself as human number 60 million, according to the Doomsday Argument I should believe ED is true. Actually, what the argument says is that I should adopt the following rule:

R-ED: IF my birth rank <= 100 million THEN I believe ED

Let's see how this rule performs. If ED is true, 100 million people will be able to use this rule (because their birth rank is <= 100 million), and all of them will be correct (because the rule predicts ED, which is true). That is, assuming ED is true, the rule has 100% accuracy (every time it is applied, it is correct).

If ED is false (LD is true), the first 100 million people will also apply the rule (because the left-hand side is true), but all of them will be wrong, because now LD is true and the rule predicts ED. That is, assuming LD is true, the rule has 0% accuracy. Everyone who can apply this rule will be wrong (under LD).

Summarizing, the performance of rule R-ED is:

100%, if ED is true
0% , if LD is true

Let's now see what happens with the opposite rule:

R-LD: IF my birth rank <= 100 million THEN I believe LD

Using R-LD, if ED is true, the first 100 million people will be wrong (0% accuracy). But if LD is true, the first 100 million will be right (100% accuracy).

Thus, the performance of rule R-LD will be:

0%, if ED is true
100%, if LD is true

These results are the mirror image of R-ED's, which shows that, if my birth rank is <= 100 million, it does not really matter whether I believe ED or LD, contrary to the Doomsday Argument. It all depends on whether ED is more likely a priori than LD. Which is as it should be.
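The accuracy bookkeeping above can be verified mechanically; here is a sketch with the comment's numbers scaled down from millions to plain hundreds:

```python
# Accuracy of the rules "IF my birth rank <= 100 THEN believe <belief>"
# among the people who can actually apply them, for each possible truth.
# Totals are the comment's 100 and 800 million, scaled down.
ED_TOTAL, LD_TOTAL = 100, 800

def rule_accuracy(belief: str, truth: str) -> float:
    """Fraction of appliers (ranks <= ED_TOTAL) whose belief matches truth."""
    total = ED_TOTAL if truth == "ED" else LD_TOTAL
    appliers = [r for r in range(1, total + 1) if r <= ED_TOTAL]
    correct = sum(1 for _ in appliers if belief == truth)
    return correct / len(appliers)

for belief in ("ED", "LD"):
    for truth in ("ED", "LD"):
        print(f"R-{belief} when {truth} is true: "
              f"{rule_accuracy(belief, truth):.0%}")
```

R-ED scores 100% under ED and 0% under LD; R-LD scores the exact opposite, so neither rule outperforms the other before the prior odds of ED versus LD are brought in.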

The odds of me being me are one :)


Ricardo Aler
16 April 2006