Difference between revisions of "Main Page"

Explain xkcd: It's 'cause you're dumb.
Revision as of 02:01, 20 June 2013

Welcome to the explain xkcd wiki!

We have an explanation for all 6 xkcd comics, and only 0 (0%) are incomplete. Help us finish them!

Latest comic

Go to this comic explanation

Modified Bayes' Theorem
Title text: Don't forget to add another term for "probability that the Modified Bayes' Theorem is correct."

Explanation

Ambox notice.png This explanation may be incomplete or incorrect: When using math syntax, please also take care with the layout. Please edit the explanation below and only mention here why it isn't complete. Do NOT delete this tag too soon.

Bayes' Theorem is an equation in statistics that gives the probability of a given hypothesis accounting not only for a single experiment or observation but also for your existing knowledge about the hypothesis, i.e. its prior probability. Randall's modified form of the equation also purports to account for the probability that you are indeed applying Bayes' Theorem itself correctly by including that as a term in the equation.

Bayes' theorem is:

P(H \mid X) = \frac{P(X \mid H) \, P(H)}{P(X)}, where

  • P(H \mid X) is the probability that H, the hypothesis, is true given observation X. This is called the posterior probability.
  • P(X \mid H) is the probability that observation X will appear given the truth of hypothesis H. This term is often called the likelihood.
  • P(H) is the probability that hypothesis H is true before any observations. This is called the prior, or belief.
  • P(X) is the probability of the observation X regardless of which hypothesis might have produced it. This term is called the marginal likelihood.
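
To make these four terms concrete, here is a minimal sketch with a hypothetical disease-test example (the numbers are made up for illustration and do not come from the comic):

```python
# Hypothetical example: H = "patient has the disease", X = "test is positive".
p_h = 0.01              # P(H): prior probability (disease prevalence)
p_x_given_h = 0.95      # P(X|H): likelihood of a positive test if diseased
p_x_given_not_h = 0.05  # false-positive rate, needed to compute P(X)

# P(X): marginal likelihood, summing over both hypotheses
p_x = p_x_given_h * p_h + p_x_given_not_h * (1 - p_h)

# Bayes' theorem: posterior P(H|X)
posterior = p_x_given_h * p_h / p_x
print(round(posterior, 3))  # ~0.161
```

Note how a highly accurate test still yields a modest posterior when the prior is small; this is exactly the kind of prior-sensitivity that Bayes' theorem captures.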

The purpose of Bayesian inference is to discover something we want to know (how likely is it that our explanation is correct given the evidence we've seen) by mathematically expressing it in terms of things we can find out: how likely are our observations, how likely is our hypothesis a priori, and how likely are we to see the observations we've seen assuming our hypothesis is true. A Bayesian learning system will iterate over available observations, each time using the likelihood of new observations to update its priors (beliefs) with the hope that, after seeing enough data points, the prior and posterior will converge to a single model.
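
The iterative updating described above can be sketched as follows (a toy coin-bias example with made-up numbers; the hypothesis and data are illustrative assumptions, not from the comic):

```python
# Iterative Bayesian updating: each posterior becomes the next prior.
def update(prior, likelihood_h, likelihood_not_h):
    """One Bayesian update step: returns the posterior P(H|X)."""
    marginal = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / marginal

# H = "coin lands heads 80% of the time" vs. the alternative "coin is fair".
prior = 0.5
for flip in ["H", "H", "T", "H", "H"]:
    l_h = 0.8 if flip == "H" else 0.2   # P(flip | biased coin)
    l_not_h = 0.5                       # P(flip | fair coin)
    prior = update(prior, l_h, l_not_h) # posterior becomes the new prior
print(round(prior, 3))
```

Each observation nudges the belief toward whichever hypothesis explains the data better; with enough flips the posterior converges on one model.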

If P(C)=1 the modified theorem reverts to the original Bayes' theorem (which makes sense, as a probability one would mean certainty that you are using Bayes' theorem correctly).

If P(C)=0 the modified theorem becomes P(H \mid X) = P(H), which says that the belief in your hypothesis is not affected by the result of the observation (which makes sense because you're certain you're misapplying the theorem so the outcome of the calculation shouldn't affect your belief.)

This happens because the modified theorem can be rewritten as: P(H \mid X) = (1-P(C))\,P(H) + P(C)\,\frac{P(X \mid H)\,P(H)}{P(X)}. This is a weighted average (a linear interpolation) of the belief you had before the calculation and the belief you would have if you applied the theorem correctly. It moves smoothly from not believing your calculation at all (keeping the same belief as before) when P(C)=0 to updating your belief exactly as Bayes' theorem suggests when P(C)=1. (Note that 1-P(C) is the probability that you are using the theorem incorrectly.)
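
The weighted-average form and its two limiting cases can be checked directly (the probabilities below are arbitrary placeholder values):

```python
# Modified Bayes' Theorem in its weighted-average form.
def modified_bayes(p_h, p_x_given_h, p_x, p_c):
    """P(H|X) = (1 - P(C)) * P(H) + P(C) * P(X|H) * P(H) / P(X)."""
    return (1 - p_c) * p_h + p_c * p_x_given_h * p_h / p_x

p_h, p_x_given_h, p_x = 0.3, 0.9, 0.5

# P(C) = 1: the formula reduces to ordinary Bayes' theorem.
assert modified_bayes(p_h, p_x_given_h, p_x, 1.0) == p_x_given_h * p_h / p_x

# P(C) = 0: the observation changes nothing; the posterior equals the prior.
assert modified_bayes(p_h, p_x_given_h, p_x, 0.0) == p_h
```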

The title text suggests that an additional term should be added for the probability that the Modified Bayes' Theorem is correct. But that probability concerns this very equation, so adding it would make the formula self-referential, unless we call the result the Modified Modified Bayes' Theorem (or Modified² Bayes' Theorem). It could also lead to an infinite regress: we would need another term for the probability that the version with that term added is correct, then another term for that version, and so on. If the modifications have a limit, we can call it the Modifiedω Bayes' Theorem, but then we need another term for whether we constructed that correctly, leading to the Modifiedω+1 Bayes' Theorem, and so on through every ordinal number. It is also unclear what the point of using an equation we are not sure of would be (although approximate equations can still be useful: Newton's laws are less accurate than Einstein's theory of relativity, yet they are a reasonable approximation in most circumstances; alternatively, ask any student taking a difficult exam with a formula sheet).

If we denote the probability that the nth modified Bayes' theorem is correct by P(C_n), then one way to define this sequence of modified theorems is by the rule P_n(H \mid X) := P(C_n)\,P_{n-1}(H \mid X) + (1-P(C_n))\,P(H).

One can then show by induction that P_n(H \mid X) = \prod_{i=1}^n P(C_i)\,P(H)\left(\frac{P(X \mid H)}{P(X)} - 1\right) + P(H).
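
The induction can be spot-checked numerically: the sketch below (with arbitrary made-up probabilities) applies the recursion P_n = P(C_n) P_{n-1} + (1 - P(C_n)) P(H) six times, starting from the ordinary Bayes posterior P_0, and compares the result against the closed form prod P(C_i) · P(H)·(P(X|H)/P(X) − 1) + P(H).

```python
import math
import random

# Arbitrary placeholder probabilities for the check.
random.seed(0)
p_h, p_x_given_h, p_x = 0.2, 0.7, 0.4
p_c = [random.random() for _ in range(6)]   # P(C_1) .. P(C_6)

# P_0 is the ordinary Bayes posterior (empty product = 1 in the closed form).
p = p_x_given_h * p_h / p_x
for c in p_c:                               # apply the recursion n times
    p = c * p + (1 - c) * p_h

# Closed form: prod P(C_i) * P(H) * (P(X|H)/P(X) - 1) + P(H)
closed = math.prod(p_c) * p_h * (p_x_given_h / p_x - 1) + p_h
assert abs(p - closed) < 1e-12
```

As expected, the smaller the product of the P(C_i), the closer the final belief stays to the prior P(H).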

Transcript

Ambox notice.png This transcript is incomplete. Please help by editing it! Thanks.
Modified Bayes' theorem:
P(H|X) = P(H) × (1 + P(C) × ( P(X|H)/P(X) - 1 ))
H: Hypothesis
X: Observation
P(H): Prior probability that H is true
P(X): Prior probability of observing X
P(C): Probability that you're using Bayesian statistics correctly



New here?

Lots of people contribute to make this wiki a success. Many recent contributors have just joined. You can do it too! Create your account here.

You can read a brief introduction about this wiki at explain xkcd. Feel free to sign up for an account and contribute to the wiki! We need explanations for comics, characters, themes, memes and everything in between. If it is referenced in an xkcd web comic, it should be here.

  • List of all comics contains a complete table of all xkcd comics so far and the corresponding explanations. The missing explanations are listed here. Feel free to help out by creating them! Here's how.

Rules

Don't be a jerk. There are a lot of comics that don't have set-in-stone explanations; feel free to put multiple interpretations in the wiki page for each comic.

If you want to talk about a specific comic, use its discussion page.

Please only submit material directly related to xkcd that helps everyone better understand it, and of course only submit material that can legally be posted (and freely edited). Off-topic or otherwise inappropriate content is subject to removal or modification at admin discretion, and users who repeatedly post such content will be blocked.

If you need assistance from an admin, post a message to the Admin requests board.