1132: Frequentists vs. Bayesians
Title text: 'Detector! What would the Bayesian statistician say if I asked him whether the--' [roll] 'I AM A NEUTRINO DETECTOR, NOT A LABYRINTH GUARD. SERIOUSLY, DID YOUR BRAIN FALL OUT?' [roll] '... yes.'
This comic is a joke about jumping to conclusions based on a simplistic understanding of probability. The "base rate fallacy" is a mistake in which an unlikely explanation is dismissed, even though the alternative is even less likely. In the comic, a device tests for the (highly unlikely) event that the sun has exploded. A degree of random error is introduced by rolling two dice and lying if the result is double sixes. Double sixes are unlikely (a 1-in-36 chance, or about 2.8%), so the statistician on the left dismisses it. The statistician on the right has (we assume) correctly reasoned that the sun exploding is far more unlikely, and so is willing to stake money on his interpretation.
I seem to have stepped on a hornet’s nest, though, by adding “Frequentist” and “Bayesian” titles to the panels. This came as a surprise to me, in part because I actually added them as an afterthought, along with the final punchline. … The truth is, I genuinely didn’t realize Frequentists and Bayesians were actual camps of people—all of whom are now emailing me. I thought they were loosely-applied labels—perhaps just labels appropriated by the books I had happened to read recently—for the standard textbook approach we learned in science class versus an approach which more carefully incorporates the ideas of prior probabilities.
The "frequentist" statistician is (mis)applying the common standard of "p&lt;0.05". In a scientific study, a result is presumed to provide strong evidence if, given the null hypothesis (the default position that the observations are unrelated; in this case, that the sun has not gone nova), there is less than a 5% chance that the result arose merely by random chance. (The null hypothesis was also referenced in 892: Null Hypothesis.)
Since the likelihood of rolling double sixes is below this 5% threshold, the "frequentist" decides (by this rule of thumb) to accept the detector's output as correct. The "Bayesian" statistician has, instead, applied at least a small measure of probabilistic reasoning (Bayesian inference) to determine that the unlikeliness of the detector lying is greatly outweighed by the unlikeliness of the sun exploding. Therefore, he concludes that the sun has not exploded and the detector is lying.
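Put numerically, the Bayesian's reasoning amounts to comparing the two explanations with Bayes' theorem. A minimal sketch in Python, where the prior is an illustrative made-up stand-in for "astronomically unlikely" rather than any measured value:

```python
# Posterior odds of "nova" vs. "no nova" after the detector says YES.
# The prior below is an illustrative stand-in, not a real estimate.
prior_nova = 1e-13          # P(sun went nova) - assumed, tiny
p_yes_if_nova = 35 / 36     # detector told the truth about a real nova
p_yes_if_fine = 1 / 36      # detector rolled double sixes and lied

posterior_odds = (prior_nova * p_yes_if_nova) / ((1 - prior_nova) * p_yes_if_fine)
print(posterior_odds)  # ~3.5e-12: overwhelming odds the detector is lying
```

Even if the prior were far less extreme, any realistic prior for a nova is so much smaller than 1/36 that the posterior odds still heavily favor "the detector lied".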
The line, "Bet you $50 it hasn't", is a reference to the approach of a leading Bayesian scholar, Bruno de Finetti, who made extensive use of bets in his examples and thought experiments. See Coherence (philosophical gambling strategy) for more information on his work. In this case, however, the bet is also a joke because we would all be dead if the sun exploded. If the Bayesian wins the bet, he gets money, and if he loses, they'll both be dead before money can be paid. This underlines the absurdity of the premise and emphasizes the need to consider context when examining probability.
The title text refers to a classic series of logic puzzles known as Knights and Knaves, where there are two guards in front of two exit doors, one of which is real and the other leads to death. One guard is a liar and the other tells the truth. The visitor doesn't know which is which, and is allowed to ask one question to one guard. The solution is to ask either guard what the other one would say is the real exit, then choose the opposite. Two such guards were featured in the 1986 Jim Henson movie Labyrinth, hence the mention of "A LABYRINTH GUARD" here.
Mathematical and scientific details
As mentioned, this is an instance of the base rate fallacy. If we treat the "truth or lie" setup as simply modelling an inaccurate test, then it is also specifically an illustration of the false positive paradox: A test that is rarely wrong, but which tests for an event that is even rarer, will be more often wrong than right when it says that the event has occurred.
The test in this case is a neutrino detector. It relies on the fact that neutrinos can pass through the earth, so a neutrino detector would detect neutrinos from the sun at all times, day and night. The detector is stated to give false results ("lie") 1/36th of the time.
There is no record of any star ever spontaneously exploding; stars always show signs of deterioration long before their explosion, so the probability is near zero. For the sake of a number, though, consider that the sun's estimated lifespan is 10 billion years. Let's say the test is run every hour, twelve hours a day (at night time). This gives a probability of the sun exploding during any given test of one in 4.38×10^13. Assuming this detector is otherwise reliable, when the detector reports a solar explosion, there are two possibilities:
- The sun has exploded (one in 4.38×10^13) and the detector is telling the truth (35 in 36). This event has a total probability of about 1/(4.38×10^13) × 35/36, or about one in 4.50×10^13.
- The sun hasn't exploded (4.38×10^13 − 1 in 4.38×10^13) and the detector is not telling the truth (1 in 36). This event has a total probability of about ((4.38×10^13 − 1)/4.38×10^13) × 1/36, or about one in 36.
Clearly the sun exploding is not the most likely option.
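The arithmetic above can be checked with a short script. The prior of one in 4.38×10^13 is the rough figure derived above (hourly tests over the sun's estimated lifespan), not a physical constant:

```python
from fractions import Fraction

# Rough prior: one detector reading per hour, twelve hours a night,
# over the sun's ~10-billion-year lifespan (figure derived above).
p_nova = Fraction(1, int(4.38e13))
p_lie = Fraction(1, 36)  # detector rolls double sixes and lies

# P(detector says YES and the sun exploded): a truthful positive
p_true_positive = p_nova * (1 - p_lie)
# P(detector says YES and the sun is fine): the detector lied
p_false_positive = (1 - p_nova) * p_lie

# Posterior probability that the sun actually exploded, given a YES
posterior = p_true_positive / (p_true_positive + p_false_positive)
print(float(p_true_positive))   # ~2.2e-14, i.e. about one in 4.5e13
print(float(p_false_positive))  # ~0.0278, i.e. about one in 36
print(float(posterior))         # ~8e-13: almost certainly a lie
```

Using `Fraction` keeps the intermediate products exact; the floats are only for display.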
Presidential election predictions
This comic may be about the accuracy of presidential election predictions that used statistical models, such as Nate Silver's 538 and Professor Sam Wang's PEC. The bet may refer to a well-publicized bet that Nate Silver tried to make with Joe Scarborough regarding the outcome of the election.
- The Sun will never explode as a supernova, because it does not have enough mass.
- In the same blog comment as cited above, Randall explains that he chose the "sun exploding" scenario as a more clearly absurd example than those usually used:
…I realized that in the common examples used to illustrate this sort of error, like the cancer screening/drug test false positive ones, the correct result is surprising or unintuitive. So I came up with the sun-explosion example, to illustrate a case where naïve application of that significance test can give a result that’s obviously nonsense.
- "Bayesian" statistics is named for Thomas Bayes, who studied conditional probability — the likelihood that one event is true when given information about some other related event. From Wikipedia: "Bayesian interpretation expresses how a subjective degree of belief should rationally change to account for evidence".
- The "frequentist" says that 1/36 = 0.027. It's actually 0.02777…, which should round to 0.028.
- Using neutrino detectors to get an advance warning of a supernova is possible, and the Supernova Early Warning System does just this. The neutrinos arrive ahead of the photons, because they can escape from the core of the star before the supernova explosion reaches the mantle.
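The rounding note above is easy to verify directly:

```python
x = 1 / 36
print(f"{x:.5f}")  # 0.02778
print(f"{x:.3f}")  # 0.028, not the 0.027 quoted in the comic
```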
- Did the sun just explode? (It's night, so we're not sure)
- [Two statisticians stand alongside an adorable little computer, suspiciously similar to K-9, that speaks in the Westminster typeface.]
- Frequentist Statistician: This neutrino detector measures whether the sun has gone nova.
- Bayesian Statistician: Then, it rolls two dice. If they both come up as six, it lies to us. Otherwise, it tells the truth.
- Frequentist Statistician: Let's try. [to the detector] Detector! Has the sun gone nova?
- Detector: [roll] YES.
- Frequentist Statistician: The probability of this result happening by chance is 1/36=0.027. Since p<0.05, I conclude that the sun has exploded.
- Bayesian Statistician: Bet you $50 it hasn't.
- Comment by Randall Munroe to "I don’t like this cartoon", blog post by Andrew Gelman in Statistical Modeling, Causal Inference, and Social Science. Archived Jan 17 2013 by the Wayback Machine.